Which way forward?


With the advent of technology that can learn and change itself, and the integration of vast data sources tracking every detail of human lives, engineering now entails decision-making with complex moral implications and global impact. As part of daily practice, technologists face value-laden tensions concerning privacy, justice, transparency, wellbeing, and human rights, as well as questions that strike at the very nature of what it is to be human.

We recently edited a Special Issue of IEEE Transactions on Technology and Society, “After Covid-19: Crises, Ethics, and Socio-Technical Change”.

"Our research works to understand the paths toward a future in which technology benefits all of humankind and the planet. We collaborate with social scientists to develop practical methods and socio-technical solutions to equip engineers and designers with the tools necessary for practicing responsibly through every step of the development process."

Projects

Responsible Tech Design Library

Find out more about tools and methods for more ethical practice in technology design

Staff

Prof. Rafael Calvo

Dr Celine Mougenot

Prof. Sebastian Deterding

Dr Fangzhou You

Laura Moradbakhti

Dr Juan Pablo Bermudez

Marco Da Re

Citation

BibTex format

@inproceedings{Widjaya:2023:ewic/BCSHCI2023.13,
author = {Widjaya, MA and Bermudez, J and Moradbakhti, L and Calvo, R},
doi = {10.14236/ewic/BCSHCI2023.13},
pages = {110--119},
title = {Drivers of trust in generative AI-powered voice assistants: the role of references},
url = {http://dx.doi.org/10.14236/ewic/BCSHCI2023.13},
year = {2023}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - The boom in generative artificial intelligence (AI) and continuing growth of Voice Assistants (VAs) suggest their trajectories will converge. This conjecture aligns with the development of AI-driven conversational agents, which aim to utilise advanced natural language processing (NLP) methods to enhance the capabilities of voice assistants. However, design guidelines for VAs prioritise maximum efficiency by advocating for the use of concise answers. This poses a conflict with the challenges around generative AI, such as inaccuracies and misinterpretation, as shorter responses may not adequately provide users with meaningful information. AI-VA systems can adapt drivers of trust formation, such as references and authorship, to improve credibility. A better understanding of user behaviour when using the system is needed to develop revised design recommendations for AI-powered VA systems. This paper reports an online survey of 256 participants residing in the U.K. and nine follow-up interviews, where user behaviour is investigated to identify drivers of trust in the context of obtaining digital information from a generative AI-based VA system. Adding references is promising as a tool for increasing trust in systems producing text, yet we found no evidence that the inclusion of references in a VA response contributed towards the perceived reliability of or trust towards the system. We examine further variables driving user trust in AI-powered VA systems.
AU - Widjaya,MA
AU - Bermudez,J
AU - Moradbakhti,L
AU - Calvo,R
DO - 10.14236/ewic/BCSHCI2023.13
EP - 119
PY - 2023///
SN - 1477-9358
SP - 110
TI - Drivers of trust in generative AI-powered voice assistants: the role of references
UR - http://dx.doi.org/10.14236/ewic/BCSHCI2023.13
UR - https://www.scienceopen.com/hosted-document?doi=10.14236/ewic/BCSHCI2023.13
UR - http://hdl.handle.net/10044/1/106752
ER -

Contact us

Dyson School of Design Engineering
Imperial College London
25 Exhibition Road
South Kensington
London
SW7 2DB

design.engineering@imperial.ac.uk
Tel: +44 (0) 20 7594 8888

Campus Map