Abstract
YouTube is one of the leading social media platforms and online spaces where people who self-harm search for and view deliberate self-harm videos, share their experiences, and seek help via comments. These comments may contain information signalling that a commentator is at risk of harm. Because of the large volume of responses generated by these videos, it is very challenging for social media teams to respond to a vulnerable commentator who is at risk. We treated this issue as a multi-class classification problem and triaged viewers' comments into one of four severity levels. Building on current state-of-the-art classifiers, we propose a model enriched with psycho-linguistic and sentiment features that can detect critical comments in need of urgent support. On average, our model achieved up to 60% precision, recall, and F1-score, which indicates its effectiveness.
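The record does not include code; as a rough illustration of the kind of pipeline the abstract describes, the sketch below combines TF-IDF text features with toy lexicon-based sentiment counts and trains a multi-class classifier over four severity levels. The lexicons, example comments, labels, and the choice of scikit-learn with logistic regression are illustrative assumptions, not the authors' implementation, and the paper's psycho-linguistic features are reduced here to simple word counts.

```python
# Minimal sketch (not the authors' implementation): a four-class severity
# classifier for YouTube comments combining TF-IDF text features with
# lexicon-based sentiment counts. Lexicons, comments, and labels are
# illustrative placeholders only.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import FeatureUnion, Pipeline

NEGATIVE_WORDS = {"hopeless", "hurt", "alone", "pain"}   # placeholder lexicon
POSITIVE_WORDS = {"hope", "better", "recover", "help"}   # placeholder lexicon

class SentimentCounts(BaseEstimator, TransformerMixin):
    """Counts of negative/positive lexicon hits per comment."""
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        rows = []
        for text in X:
            tokens = text.lower().split()
            rows.append([
                sum(t in NEGATIVE_WORDS for t in tokens),
                sum(t in POSITIVE_WORDS for t in tokens),
            ])
        return np.array(rows, dtype=float)

# Severity levels 0-3 (e.g. 3 = needs urgent support); labels are invented here.
comments = [
    "this video really helped me feel better",
    "i relapsed again last night and i feel alone",
    "i hurt myself and i can't stop, everything is hopeless",
    "great editing, subscribed",
]
labels = [1, 2, 3, 0]

model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("sentiment", SentimentCounts()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),  # any multi-class classifier fits here
])

model.fit(comments, labels)
print(classification_report(labels, model.predict(comments), zero_division=0))
```

In practice the sentiment transformer would be replaced by the richer psycho-linguistic and sentiment feature sets described in the paper, and the model would be evaluated with held-out data rather than on its training comments.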
Original language | English |
---|---|
Title of host publication | CHIIR 2020 - Proceedings of the 2020 Conference on Human Information Interaction and Retrieval |
Place of Publication | New York |
Pages | 383-386 |
Number of pages | 4 |
ISBN (Electronic) | 9781450368926 |
DOIs | |
Publication status | Published - 15 Mar 2020 |
Event | ACM SIGIR Conference on Human Information Interaction and Retrieval, Vancouver, Canada, 14 Mar 2020 → 18 Mar 2020 (Conference number: 5) |
Publication series
Name | CHIIR 2020 - Proceedings of the 2020 Conference on Human Information Interaction and Retrieval |
---|---|
Conference
Conference | ACM SIGIR Conference on Human Information Interaction and Retrieval |
---|---|
Abbreviated title | CHIIR |
Country/Territory | Canada |
City | Vancouver |
Period | 14/03/20 → 18/03/20 |
Keywords
- self-harm
- social media
- YouTube
- video content
- classification
- HCI