Abstract
Disinformation attacks that make use of social media platforms, e.g., the attacks orchestrated by the Russian “Internet Research Agency” during the 2016 U.S. Presidential election campaign and the 2016 Brexit referendum in the U.K., have led to increasing demands from governmental agencies for AI tools capable of identifying such attacks in their earliest stages, rather than responding to them in retrospect. This research was undertaken on behalf of the Canadian Armed Forces and the Department of National Defence. Our ultimate objective is the development of an integrated set of machine-learning algorithms that will mobilize artificial intelligence to identify hostile disinformation activities in “near-real-time.” Employing The Dark Crawler, the Posit Toolkit, TensorFlow (deep neural networks), the Random Forest classifier, and the short-text classification programs LibShortText and LibLinear, we have analyzed a wide sample of social media posts that exemplify the “fake news” disseminated by Russia’s Internet Research Agency, comparing them to “real news” posts in order to develop an automated means of classification.
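
As a rough illustration of the kind of classification task the abstract describes, the sketch below trains a Random Forest classifier (one of the classifiers named above) on TF-IDF features of post text. This is not the authors' pipeline: the Dark Crawler data collection, Posit feature extraction, and the TensorFlow/LibShortText components are omitted, and the example posts and labels are purely hypothetical.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled posts: 1 = disinformation ("fake news"), 0 = "real news".
posts = [
    "BREAKING: secret memo proves the election was rigged, share before deleted!",
    "City council approves new transit budget after public consultation.",
    "They don't want you to know what is really in the water supply!!!",
    "Central bank holds interest rate steady, citing stable inflation figures.",
]
labels = [1, 0, 1, 0]

# Convert post text to TF-IDF feature vectors (word unigrams and bigrams).
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(posts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=42, stratify=labels
)

# Random Forest classifier over the TF-IDF features.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

In practice the feature set would come from the richer linguistic and structural measures the paper's toolchain produces, and a deep neural network (e.g., in TensorFlow) could be trained on the same features for comparison.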
| Original language | English |
| --- | --- |
| Pages (from-to) | 15141-15163 |
| Number of pages | 23 |
| Journal | Neural Computing and Applications |
| Volume | 34 |
| Issue number | 18 |
| Early online date | 9 Jun 2022 |
| DOIs | |
| Publication status | Published - Sept 2022 |
Keywords
- hostile disinformation
- machine learning
- deep neural network
- Internet Research Agency