Preventing cyberbullying has proven to be a significant challenge for social media platforms. However, the same technology that enables cyberbullying can also be used to prevent it. Artificial intelligence is being developed to enable computers to recognize bullying comments and remove them from social media, and to do so more effectively than human moderators. According to a language researcher from Belgium, "it is nearly impossible for human moderators to go through all posts manually to determine if there is a problem. AI is key to automating detection and moderation of bullying and trolling."
Machine learning algorithms are being trained to spot phrases and words commonly associated with bullying, but sarcasm remains a major challenge. A further difficulty is that offensive language comes in many types and forms, and does not necessarily involve the offensive words typically associated with bullying. According to another researcher in this field, "we need individual hate-speech filters for separate targets of hate speech". Instagram is one example: the company has used bullying filters since 2017, and has recently started employing machine learning to detect, in particular, bullying based on a user's character or appearance.
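To illustrate the kind of approach described above, here is a minimal sketch of a text classifier trained to separate bullying comments from benign ones. It uses a tiny Naive Bayes model built from scratch on a handful of invented example comments; the training data, labels, and function names are all hypothetical, and a production system would use far larger datasets and more sophisticated models.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase and split a comment into word tokens."""
    return text.lower().split()

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts = {"bullying": Counter(), "ok": Counter()}
    doc_counts = Counter()
    for text, label in examples:
        doc_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = set(word_counts["bullying"]) | set(word_counts["ok"])
    return word_counts, doc_counts, vocab

def classify(text, word_counts, doc_counts, vocab):
    """Return the most likely label using log-probabilities with add-one smoothing."""
    total_docs = sum(doc_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        score = math.log(doc_counts[label] / total_docs)
        total_words = sum(counts.values())
        for word in tokenize(text):
            score += math.log((counts[word] + 1) / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy training data -- a real filter would need thousands of labeled posts.
examples = [
    ("you are so stupid and ugly", "bullying"),
    ("nobody likes you loser", "bullying"),
    ("go away you are worthless", "bullying"),
    ("great photo from your trip", "ok"),
    ("congratulations on the new job", "ok"),
    ("see you at the game tonight", "ok"),
]

word_counts, doc_counts, vocab = train(examples)
print(classify("you are a loser", word_counts, doc_counts, vocab))
```

Even this toy model hints at the limitations the researchers mention: it scores individual words, so sarcasm, novel insults, and offensive messages that avoid the usual vocabulary would slip through.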
Filtering is not the only anticipated use, however. There are also attempts to develop technology that helps victims of mobbing and workplace harassment remember instances of abuse accurately and in detail. This in turn lends additional credibility to their accusations, whose reliability is often questioned. One tool goes even further and provides advice to victims of sexual harassment; the first version tested was 89% accurate, according to the evaluation results. Such technology can be particularly useful for predicting whether a person is at risk of suicide, allowing sufficient time for prevention measures to be put in place. Initial tests showed that AI tools can achieve this with great precision.