Misinformation in third-party voice applications

Mary Bispham, Suliman Kalim Sattar, Clara Zard, Xavier Ferrer-Aran, Jide Edu, Guillermo Suarez-Tangil, Jose Such

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper investigates the potential for spreading misinformation via third-party voice applications in voice assistant ecosystems such as Amazon Alexa and Google Assistant. Our work fills a gap in prior research on third-party voice applications, which has focused on privacy compromises arising from user inputs; we instead examine security issues related to the outputs such applications produce. We define misinformation in the context of third-party voice applications and implement an infrastructure for testing third-party voice applications using automated natural language interaction. Using our infrastructure, we identify — for the first time — several instances of misinformation in third-party voice applications currently available on the Google Assistant and Amazon Alexa platforms. We then discuss the implications of our work for developing measures to pre-empt the threat of misinformation and other types of harmful content in third-party voice applications becoming more significant in the future.
Original language: English
Title of host publication: CUI '23
Subtitle of host publication: Proceedings of the 5th International Conference on Conversational User Interfaces
Editors: Minha Lee, Cosmin Munteanu
Place of publication: New York, NY
Number of pages: 6
ISBN (Electronic): 9798400700149
Publication status: Published - 19 Jul 2023

Keywords

  • security and privacy
  • voice assistants
  • online harm
  • misinformation
  • human-centered computing
  • human computer interaction (HCI)
