Which way forward?
With the advent of technology that can learn and change itself, and the integration of vast data sources tracking every detail of human lives, engineering now entails decision-making with complex moral implications and global impact. As part of daily practice, technologists face value-laden tensions concerning privacy, justice, transparency, wellbeing, and human rights, as well as questions that strike at the very nature of what it is to be human.
We recently edited a Special Issue of IEEE Transactions on Technology and Society on “After Covid-19: Crises, Ethics, and Socio-Technical Change”.
"Our research works to understand the paths toward a future in which technology benefits all of humankind and the planet. We collaborate with social scientists to develop practical methods and socio-technical solutions to equip engineers and designers with the tools necessary for practicing responsibly through every step of the development process. "
Partners
Projects
Responsible Tech Design Library
Find out more about tools and methods for more ethical practice in technology design
Staff
Prof. Rafael Calvo
Dr Celine Mougenot
Prof. Sebastian Deterding
Dr Fangzhou You
Laura Moradbakhti
Dr Juan Pablo Bermudez
Marco Da Re
Results
- Journal article: Sadek M, Calvo R, Mougenot C, 2024, "Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes", AI and Ethics, Vol: 9, Pages: 949-967, ISSN: 2730-5961.
This paper presents a critical review of how different socio-technical design processes for AI-based systems, from scholarly works and industry, support the creation of value-sensitive AI (VSAI). The review contributes to the emerging field of human-centred AI, and the even more embryonic space of VSAI, in four ways: (i) it introduces three criteria for reviewing VSAI design processes, chosen for their contribution to a process's overall value-sensitivity and in response to criticisms that current interventions fall short in these respects: comprehensiveness, level of guidance offered, and methodological value-sensitivity; (ii) it provides a novel review of socio-technical design processes for AI-based systems; (iii) it assesses each process against these criteria and synthesises the results into broader trends; and (iv) it offers a resulting set of recommendations for the design of VSAI. The objective of the paper is to help creators and followers of design processes, whether scholarly or industry-based, to understand the level of value-sensitivity offered by different socio-technical design processes and to act accordingly based on their needs: to adopt or adapt existing processes, or to create new ones.
- Journal article: Sadek M, Calvo R, Mougenot C, 2023, "Co-designing conversational agents: a comprehensive review and recommendations for best practices", Design Studies, Vol: 89, ISSN: 0142-694X.
This paper presents a comprehensive review of fifty-two studies co-designing conversational agents (CAs). Its objectives are to synthesise prior CA co-design efforts and provide actionable recommendations for future endeavours in CA co-design. The review systematically evaluates studies' methodological and contextual aspects, revealing trends and limitations. These insights converge into practical recommendations for co-designing CAs, including (1) selecting the most suitable design technique aligned with desired CA outcomes, (2) advocating continuous stakeholder involvement throughout the design process, and (3) emphasising the elicitation and embodiment of stakeholder values to ensure CA designs align with their perspectives. This paper contributes to standardising and enhancing co-design practices, promising to improve the quality of outcomes in the case of CAs while benefiting stakeholders and users.
- Conference paper: Widjaya MA, Bermudez J, Moradbakhti L, et al., 2023, "Drivers of trust in generative AI-powered voice assistants: the role of references", 36th International BCS Human-Computer Interaction Conference, Pages: 110-119, ISSN: 1477-9358.
The boom in generative artificial intelligence (AI) and the continuing growth of voice assistants (VAs) suggests their trajectories will converge. This conjecture aligns with the development of AI-driven conversational agents, which aim to use advanced natural language processing (NLP) methods to enhance the capabilities of voice assistants. However, design guidelines for VAs prioritise maximum efficiency by advocating concise answers. This conflicts with the challenges around generative AI, such as inaccuracies and misinterpretation, as shorter responses may not adequately provide users with meaningful information. AI-VA systems can adapt drivers of trust formation, such as references and authorship, to improve credibility. A better understanding of user behaviour when using such systems is needed to develop revised design recommendations for AI-powered VA systems. This paper reports an online survey of 256 participants residing in the UK and nine follow-up interviews, in which user behaviour is investigated to identify drivers of trust in the context of obtaining digital information from a generative AI-based VA system. Although adding references is a promising tool for increasing trust in systems producing text, we found no evidence that the inclusion of references in a VA response contributed to the perceived reliability of, or trust towards, the system. We examine further variables driving user trust in AI-powered VA systems.
- Conference paper: Espinoza Lau-Choleon F, Cook D, Butler C, et al., 2023, "Supporting dementia caregivers in Peru through chatbots: generative AI vs structured conversations", 36th International BCS Human-Computer Interaction Conference, Publisher: Association for Computing Machinery (ACM), Pages: 89-98, ISSN: 1477-9358.
In Peru, dementia caregivers face burnout, depression, stress, and financial strain. Addressing their needs involves tackling the intricacies of caregiving and managing emotional burdens. Chatbots can serve as a viable support mechanism in regions with limited resources. This study delves into the perceptions of dementia caregivers in Peru regarding a chatbot tailored to offer care navigation and emotional support. We divided the study into three phases: the initial stage encompassed engaging stakeholders to define design requirements for the chatbot; the second stage focused on the creation of 'Ana', a chatbot for dementia caregivers; and the final stage assessed the chatbot through interviews and a caregiver satisfaction survey. 'Ana' was tested in two configurations: one employed pre-defined conversation patterns, while the other harnessed generative AI for more dynamic responses. The findings reveal that caregivers seek immediate access to information on handling behavioural symptoms and a platform for emotional release. Moreover, participants preferred the generative AI alternative of Ana, as it was perceived to be more empathic and human-like. The participants valued the generative approach despite knowing the potential risk of receiving inaccurate information.
- Conference paper: Sadek M, Calvo RA, Mougenot C, 2023, "Trends, challenges and processes in conversational agent design: exploring practitioners' views through semi-structured interviews", CUI '23: ACM Conference on Conversational User Interfaces, Publisher: ACM, Pages: 1-10.
The aim of this study is to explore the challenges and experiences of conversational agent (CA) practitioners in order to highlight their practical needs and bring them into consideration within the scholarly sphere. A range of data scientists, conversational designers, executive managers and researchers shared their opinions and experiences through semi-structured interviews. They were asked about emerging trends, the challenges they face, and the design processes they follow when creating CAs. In terms of trends, findings included mixed feelings regarding no-code solutions and a desire for a separation of roles. The challenges mentioned included a lack of socio-technical tools and conversational archetypes. Finally, practitioners followed different design processes and did not use the design processes described in the academic literature. These findings were analysed to establish links between practitioners' insights and discussions in related literature. The goal of this analysis is to highlight research-practice gaps by synthesising five practitioner needs that are not currently being met. By highlighting these research-practice gaps and foregrounding the challenges and experiences of CA practitioners, we can begin to understand the extent to which emerging literature is influencing industrial settings and where more research is needed to better support CA practitioners in their work.
- Book chapter: Peters D, Calvo RA, 2023, "Self-Determination Theory and Technology Design", in The Oxford Handbook of Self-Determination Theory, Editors: Ryan, Publisher: Oxford University Press, ISBN: 9780197600047.
- Conference paper: Ballou N, Deterding S, Tyack A, et al., 2022, "Self-determination theory in HCI: shaping a research agenda", CHI Conference on Human Factors in Computing Systems (CHI '22), Publisher: ACM, New York, Pages: 1-6.
Self-determination theory (SDT) has become one of the most frequently used and well-validated theories in HCI research, modelling the relations between basic psychological needs, intrinsic motivation, positive experience and wellbeing. This makes it a prime candidate for a 'motor theme' driving more integrated, systematic, theory-guided research. However, its use in HCI has remained superficial and disjointed across application domains such as games, health and wellbeing, and learning. This workshop therefore convenes researchers across HCI to co-create a research agenda on how SDT-informed HCI research can maximise its progress in the coming years.
- Journal article: Porat T, Burnell R, Calvo R, et al., 2021, "'Vaccine Passports' may backfire: findings from a cross-sectional study in the UK and Israel on willingness to vaccinate against Covid-19", Vaccines, Vol: 9, Pages: 1-11, ISSN: 2076-393X.
Domestic "vaccine passports" are being implemented across the world as a way of increasing vaccinated people's freedom of movement and encouraging vaccination. However, these vaccine passports may affect people's vaccination decisions in unintended and undesirable ways. This cross-sectional study investigated whether people's willingness and motivation to get vaccinated relate to their psychological needs (autonomy, competence and relatedness), and how vaccine passports might affect these needs. Across two countries and 1358 participants, we found that need frustration, particularly autonomy frustration, was associated with lower willingness to vaccinate and with a shift from self-determined to external motivation. In Israel (a country with vaccine passports), people reported greater autonomy frustration than in the UK (a country without vaccine passports). Our findings suggest that control measures such as domestic vaccine passports may have detrimental effects on people's autonomy, motivation, and willingness to get vaccinated. Policies should strive to achieve a highly vaccinated population by supporting individuals' autonomous motivation to be vaccinated and using messages of autonomy and relatedness, rather than applying pressure and external controls.
- Conference paper: Pillai AG, Kocaballi AB, Leong TW, et al., 2021, "Co-designing Resources for Ethics Education in HCI", CHI Conference on Human Factors in Computing Systems, Publisher: Association for Computing Machinery (ACM).
- Book chapter: Calvo R, Peters D, Vold K, et al., 2020, "Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry", in Ethics of Digital Well-Being: A Multidisciplinary Approach, Editors: Burr, Floridi, Publisher: Springer, Cham, Pages: 31-54, ISBN: 978-3-030-50585-1.
Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects of digital experiences on human autonomy are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model ("METUX") that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: (1) there are autonomy-related consequences to algorithms representing the interests of third parties, which are not impartial and rational extensions of the self, as is often perceived; (2) designing for autonomy is an ethical imperative critical to the future design of responsible AI; and (3) autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.
Contact us
Dyson School of Design Engineering
Imperial College London
25 Exhibition Road
South Kensington
London
SW7 2DB
design.engineering@imperial.ac.uk
Tel: +44 (0) 20 7594 8888