Overview

This open access book introduces social aspects relevant to the research and development of explainable AI (XAI). The new surge of XAI responds to the societal challenge that many algorithmic approaches (such as machine learning or autonomous intelligent systems) are rapidly increasing in complexity, making justified use of their recommendations difficult for users. A large body of approaches now exists with many ideas of how algorithms should be explainable or even be able to explain their output. However, few of them consider the users' perspective, and even fewer address the social aspects of using XAI. To fill this gap, the book offers a conceptualization of explainability as a social practice, a framework for contextual factors, and an operationalization of users' involvement in creating relevant explanations. For this, scholars across disciplines gathered at the Shonan meeting to account for how explanation generation can be tailored to diverse users and their heterogeneous goals when interacting with XAI. As a result, social interaction is the key to the involvement of users. Accordingly, we define sXAI (social eXplainable AI) as systems that interact with users in such a way that explaining can be incrementally adapted to them as the context of interaction unfolds, yielding a relevant explanation at the interface between two active partners: human and AI. To encourage novel interdisciplinary research, we propose to account for the following dimensions:

• Patterndness: XAI should account for different contexts that yield different social roles impacting the construction of explanations.
• Incrementality: XAI should build on the contributions of the involved partners, who adapt to each other.
• Multimodality: XAI needs to use different communication modalities (e.g., visual, verbal, and auditory).

This book also addresses how to evaluate social XAI systems and what ethical aspects must be considered when employing sXAI.
Together, the book pushes forward the building of a community interested in sXAI. To increase readability across disciplines, each chapter offers rapid access to its content.

Full Product Details

Authors: Katharina Rohlfing, Brian Lim, Kirsten Thommes, Kary Främling
Publisher: Springer Nature Switzerland AG
Imprint: Springer Nature Switzerland AG
ISBN-13: 9789819652891
ISBN-10: 9819652898
Pages: 615
Publication Date: 19 April 2026
Audience: Professional and scholarly; College/higher education; Professional & Vocational; Postgraduate, Research & Scholarly
Format: Hardback
Publisher's Status: Forthcoming
Availability: Not yet available. You can pre-order this item and we will dispatch it to you upon its release.

Table of Contents

Chapter 1: TBD
Chapter 2: TBD
Chapter X: TBD