Trust in Human-Robot Interaction
Editors: Chang S. Nam, Joseph B. Lyons
Part I: Fundamentals of Human-Robot Interaction
1. Robotics
2. Human information processing
3. R&D framework/platform
4. Applications
Part II: Determinants of Trust in Human-Robot Interaction
5. Individual differences
6. Social factors
7. Transparency
8. Understanding robot intent
9. Robot behavior and trust
10. Contextual factors and trust
11. Agent-based systems
12. Communication and trust
13. Physical design and trust
14. Engagement and trust
A. Modelling trust in HRI
Part III: Human-Robot Interaction and Emerging Issues
11. Ethics and privacy
12. User-centered design and development process
Joseph B. Lyons is currently a Senior Research Psychologist at the United States Air Force Research Laboratory. He received his Ph.D. in Industrial/Organizational Psychology, with a minor in Human Factors, from Wright State University. His primary research interests include human-machine trust, interpersonal trust, leadership, and organizational science. Lyons serves as an Associate Editor for the journal Military Psychology, has served as a guest editor for IEEE Transactions on Human-Machine Systems, and formerly served as the Editor of The Military Psychologist.
- Presents a repository of the open questions and challenges in trust in HRI
- Includes contributions from the many disciplines engaged in HRI research, including psychology, neuroscience, sociology, engineering, and computer science
- Examines human information processing as a foundation for understanding HRI
- Details the methods and techniques used to test and quantify trust in HRI
Publication date: November 2020
614 pages
15 × 22.8 cm
Keywords:
Affect; After-action review; AI; Anthropomorphism; Artificial intelligence; Automation; Autonomous systems; Bidirectional communication; Carebots; Computational model; Context; Culture; D2T2; Decision and feedback processing; Decision making; Deep reinforcement learning; Dementia; Distributed; Domains of risk; Dynamic; Eldercare; Emotion; Emotional gestures; Ethics; Explainable AI; Explanation; HRI; Human evaluation of explainability; Human-agent teaming; Human-AI-robot teaming; Human-autonomy teaming; Human-machine interaction; Human-machine teaming; Human-robot interaction; Human-robot teams; Human-robot trust; Individual characteristics; Individual differences; Industrial robots; Intelligent agents; Interactive dialogue; Interdependence; Interface design; Mental models; Model explainability; Mood; Moral; Movement; Neural correlates of trust; Neural network; Obedience; Path planning; Peacekeeping; Perceived relational risk; Perceived risk; Perceived situational risk; Personality; POMDP; Postmission debriefs; Premission planning; Proximate interaction; Psychophysiological criteria; Risk; Risk budgeting; Risk evaluation; Risk perception; Risk-aware autonomy; Robot; Robot control; Robotics; Robots; Self-confidence; Shared mental model; Social robotics; Social robots; Swift trust; Swift trust model; Team trust; Teaming; Teamwork; Technology; Telehealth; Teleology; Time; Transparency; Trust; Trust calibration; Trust determinants; Trust evaluation; Trust in automation; Trust in robots; Trust measurement; Trust-based decision-making; Unmanned vehicles; Virtual assistants; Workplace safety