Overview

In the tradition of moral philosophy, long dominated by a rationalist paradigm, the idea of moral intuition has often been a source of embarrassment. How can the mind form a moral judgment within seconds, without any apparent reasoning? In the spirit of neuroethics, this book demystifies moral intuition by examining the mental and neural processes that generate such automatic evaluations. Addressed to specialists in philosophy, psychology, and AI ethics, the book systematically investigates three questions: how moral intuitions work, how they can improve, and how they can be implemented in artificial agents. Challenging the dominant default-interventionist view of moral reasoning, the first part argues that moral intuitions play a dual role: they detect harm and help in the environment, and they metacognitively regulate the deployment of cognitive resources, triggering reflection when intuitive outputs are uncertain or conflicting. Building on this foundation, the book offers a dyadic classification of the cognitive biases that shape moral intuitions and critically assesses strategies for mitigating them, including reasoning, expertise, and nudging. The final part extends this moral-psychological framework to artificial intelligence, arguing that the implementation of moral intuitions in artificial agents is both a feasible and a philosophically defensible goal, compatible with the functional capacities of contemporary AI systems. In doing so, the book sets a new research agenda for understanding, improving, and implementing moral intuitions in both human and artificial agents.
Full Product Details

Author: Dario Cecchini
Publisher: Springer Nature Switzerland AG
Imprint: Springer Nature Switzerland AG
ISBN: 9783032201164
ISBN-10: 3032201160
Pages: 224
Publication Date: 23 April 2026
Audience: College/higher education; Postgraduate, Research & Scholarly
Format: Hardback
Publisher's Status: Forthcoming
Availability: Not yet available

Table of Contents

Introduction
Part 1: Moral intuition: Psychological foundations
1. The automaticity of intuitions
2. The strength of intuitions: A metacognitive account
3. The content of moral intuitions: Dyadic harm and help
4. Moral reasoning: The intuition-reflection interplay
Part 2: Toward better intuitions: Standards, biases, and strategies
5. The progress of moral intuitions: A dyadic theory
6. Challenges to moral intuitions' progress: The problem of biases
7. Debiasing strategies: The direct path
8. Debiasing strategies: Indirect and hybrid approaches
Part 3: Moral intuitions and artificial intelligence
9. Artificial agency and the alignment problem
10. Artificial moral agents
11. Toward better socio-digital environments
Conclusion
Bibliography

Author Information

Dario Cecchini is a Postdoctoral Research Scholar at NC State University. He joined the NeuroComputational Ethics Research Group in November 2022, funded by the National Science Foundation project "Virtual Reality Simulations of Moral Decision Making for Autonomous Vehicles." Before arriving in Raleigh, Dr. Cecchini obtained a Ph.D. in Moral Philosophy at the University of Genoa (Italy) in March 2022. He studied philosophy for his BA and MA degrees in Florence and Pisa. With a solid background in metaethics and moral psychology, Dr. Cecchini's research interests have recently expanded into the field of applied ethics, focusing particularly on the ethics of artificial intelligence.
His current projects concern moral judgment, the alignment problem for AI, the ethics of automated vehicles, and carebots.