Published online: https://doi.org/10.5465/annals.2018.0057

Artificial intelligence (AI) refers to a new generation of technologies capable of interacting with their environment and intended to simulate human intelligence. The success of integrating AI into organizations depends critically on workers' trust in AI technology. This review explains how AI differs from other technologies and synthesizes the empirical research on the determinants of human trust in AI, conducted across multiple disciplines over the last 20 years. Based on the reviewed literature, we identify the form of AI representation (robot, virtual, or embedded) and its level of machine intelligence (i.e., its capabilities) as important antecedents of trust development, and we propose a framework that addresses the elements shaping users' cognitive and emotional trust. Our review reveals the important roles of AI's tangibility, transparency, reliability, and immediacy behaviors in developing cognitive trust, and the specific role of AI's anthropomorphism in developing emotional trust. We also note several limitations of the current evidence base, such as the diversity of trust measures and an overreliance on short-term, small-sample experimental studies, in which trust is likely to develop differently than in longer-term, higher-stakes field environments. Based on our review, we suggest the most promising paths for future research.
