Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/sciences-humaines-et-sociales/trust-in-human-robot-interaction/descriptif_4414984
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=4414984

Trust in Human-Robot Interaction

Language: English

Editors: Chang S. Nam, Joseph B. Lyons

Trust in Human-Robot Interaction addresses the gamut of factors that influence trust of robotic systems. The book presents the theory, fundamentals, techniques and diverse applications of the behavioral, cognitive and neural mechanisms of trust in human-robot interaction, covering topics like individual differences, transparency, communication, physical design, privacy and ethics.

Part I: Fundamentals of Human-Robot Interaction
1. Robotics
2. Human information processing
3. R&D framework/platform
4. Applications

Part II: Determinants of Trust in Human-Robot Interaction
5. Individual differences
6. Social factors
7. Transparency
8. Understanding robot intent
9. Robot behavior and trust
10. Contextual factors and trust
11. Agent-based systems
12. Communication and trust
13. Physical design and trust
14. Engagement and trust
A. Modelling trust in HRI

Part III: Human-Robot Interaction and Emerging Issues
15. Ethics and privacy
16. User-centered design and development process

Readership: Graduate students, researchers, academics, and professionals in human factors, robotics, social psychology, neuroscience, computer science, and engineering psychology.
Chang S. Nam is currently a Professor of Industrial and Systems Engineering at North Carolina State University (NCSU), USA. He is also associate faculty in the UNC/NCSU Joint Department of Biomedical Engineering, the Department of Psychology, and the Brain Research Imaging Center (BRIC) at UNC. He received his PhD from Virginia Tech. His research interests center on brain-computer interfaces, computational neuroscience, neuroergonomics, and human-AI/robot/automation interaction. He is the editor of “Brain-Computer Interfaces Handbook: Technological and Theoretical Advances” (with Drs. Nijholt and Lotte, CRC Press), “Neuroergonomics: Principles and Practices” (Springer), “Mobile Brain-Body Imaging and the Neuroscience of Art, Innovation and Creativity” (with Contreras-Vidal et al., Springer), “Trust in Human-Robot Interaction: Research and Applications” (with Lyons, Elsevier), and “Human-Centered AI: Research and Applications” (with Jung & Lee, Elsevier). Currently, Nam serves as Editor-in-Chief of the journal Brain-Computer Interfaces.
Joseph B. Lyons is currently a Senior Research Psychologist at the United States Air Force Research Laboratory. He received his PhD in Industrial/Organizational Psychology, with a minor in human factors, from Wright State University. His primary research interests include human-machine trust, interpersonal trust, leadership, and organizational science. Currently, Lyons serves as an Associate Editor for the journal Military Psychology, and he has served as a guest editor for IEEE Transactions on Human-Machine Systems. Formerly, he was the Editor of The Military Psychologist.
  • Presents a repository of the open questions and challenges in trust in HRI
  • Includes contributions from many disciplines participating in HRI research, including psychology, neuroscience, sociology, engineering and computer science
  • Examines human information processing as a foundation for understanding HRI
  • Details the methods and techniques used to test and quantify trust in HRI

Publication date:

614 pages

15 x 22.8 cm

Available from the publisher (supply lead time: 14 days).

146,54 €


Subject of Trust in Human-Robot Interaction:

Keywords:

Affect; After-action review; AI; Anthropomorphism; Artificial intelligence; Automation; Autonomous systems; Bidirectional communication; Carebots; Computational model; Context; Culture; D2T2; Decision and feedback processing; Decision making; Deep reinforcement learning; Dementia; Distributed; Domains of risk; Dynamic; Eldercare; Emotion; Emotional gestures; Ethics; Explainable AI; Explanation; HRI; Human evaluation of explainability; Human-agent teaming; Human-AI-robot teaming; Human-autonomy teaming; Human-machine interaction; Human-machine teaming; Human-robot interaction; Human-robot teams; Human-robot trust; Individual characteristics; Individual differences; Industrial robots; Intelligent agents; Interactive dialogue; Interdependence; Interface design; Mental models; Model explainability; Mood; Moral; Movement; Neural correlates of trust; Neural network; Obedience; Path planning; Peacekeeping; Perceived relational risk; Perceived risk; Perceived situational risk; Personality; POMDP; Postmission debriefs; Premission planning; Proximate interaction; Psychophysiological criteria; Risk; Risk budgeting; Risk evaluation; Risk perception; Risk-aware autonomy; Robot; Robot control; Robotics; Robots; Self-confidence; Shared mental model; Social robotics; Social robots; Swift trust; Swift trust model; Team trust; Teaming; Teamwork; Technology; Telehealth; Teleology; Time; Transparency; Trust; Trust calibration; Trust determinants; Trust evaluation; Trust in automation; Trust in robots; Trust measurement; Trust-based decision-making; Unmanned vehicles; Virtual assistants; Workplace safety