A startup called Spectrum Labs provides artificial intelligence technology to platform providers to detect and shut down toxic exchanges in real time. But experts say that AI monitoring also raises privacy issues.

"AI monitoring often requires looking at patterns over time, which necessitates retaining the data," David Moody, a senior associate at Schellman, a security and privacy compliance assessment company, told Lifewire in an email interview. "This data may include data that laws have flagged as privacy data (personally identifiable information, or PII)."
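One way platforms try to square this kind of longitudinal analysis with data-protection rules is to pseudonymize identifiers before retention, so behavior can still be linked over time without storing the raw PII Moody describes. Below is a minimal sketch of that idea in Python; the keyed-hash approach, the secret key, and the field names are illustrative assumptions, not a description of any vendor's actual pipeline.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would live in a key vault
# and be rotated on a schedule.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(user_id: str) -> str:
    """Replace a raw user identifier with a keyed hash so records from the
    same user can still be linked over time without storing the ID itself."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def sanitize_event(event: dict) -> dict:
    """Strip direct identifiers from a moderation event before retention."""
    return {
        "user": pseudonymize(event["user_id"]),  # linkable token, not raw PII
        "text": event["text"],
        "timestamp": event["timestamp"],
    }

print(sanitize_event({
    "user_id": "alice@example.com",
    "text": "example message",
    "timestamp": "2022-01-25T12:00:00Z",
}))
```

Because the hash is keyed, the same user always maps to the same token, preserving the over-time patterns Moody mentions; note that regulators generally still treat pseudonymized records as personal data so long as the key exists.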

More Hate Speech

Spectrum Labs promises a high-tech solution to the age-old problem of hate speech. "On average, we help platforms reduce content moderation efforts by 50% and increase detection of toxic behaviors by 10x," the company claims on its website.

Spectrum says it worked with research institutes with expertise in specific harmful behaviors to build over 40 behavior identification models. The company's Guardian content moderation platform was built by a team of data scientists and moderators to "support safeguarding communities from toxicity."

There's a growing need for ways to combat hate speech because it's impossible for a human to monitor every piece of online traffic, Dylan Fox, the CEO of AssemblyAI, a startup that provides speech recognition and has customers involved in monitoring hate speech, told Lifewire in an email interview.

"There are about 500 million tweets a day on Twitter alone," he added. "Even if one person could check a tweet every 10 seconds, Twitter would need to employ 60,000 people to do this. Instead, we use smart tools like AI to automate the process." A back-of-the-envelope version of his math appears below.

Unlike a human, AI can operate 24/7 and can potentially be more equitable because it is designed to apply its rules uniformly to all users, without personal beliefs interfering, Fox said. There is also a human cost for the people who monitor and moderate content. "They can be exposed to violence, hatred, and sordid acts, which can be damaging to a person's mental health," he said.

Spectrum isn't the only company that seeks to detect online hate speech automatically. For example, Centre Malaysia recently launched an online tracker designed to find hate speech among Malaysian netizens. The software they developed, called the Tracker Benci, uses machine learning to detect hate speech online, particularly on Twitter; a toy version of such a classifier is sketched below.
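Fox's staffing estimate is easy to sanity-check. The following back-of-the-envelope calculation in Python reproduces it, assuming (generously) that each moderator could review tweets around the clock; realistic shifts would push the headcount several times higher.

```python
tweets_per_day = 500_000_000   # Fox's figure for daily tweets on Twitter
seconds_per_review = 10        # one person checking one tweet every 10 seconds

seconds_per_day = 24 * 60 * 60                               # 86,400 seconds
reviews_per_person = seconds_per_day // seconds_per_review   # 8,640 tweets/day
people_needed = tweets_per_day / reviews_per_person

print(f"{people_needed:,.0f} moderators")  # ~57,870, roughly the 60,000 Fox cites
```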
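Neither Spectrum Labs nor the Tracker Benci team has published its models, so the following is only a toy illustration of the general technique: train a text classifier on labeled examples, then score new messages. The data, features, and model here are illustrative assumptions; production systems rely on far larger corpora, multilingual models, and the behavior-specific research Spectrum describes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data standing in for a real moderation corpus (1 = toxic).
texts = [
    "I hope you have a great day",
    "thanks for sharing this",
    "you are worthless and everyone hates you",
    "get out of here, nobody wants your kind",
]
labels = [0, 0, 1, 1]

# Bag-of-words features plus a linear classifier: a minimal toxicity model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new messages; a platform would act on messages above some threshold.
for msg in ["have a wonderful evening", "nobody wants your kind here"]:
    print(msg, "->", model.predict_proba([msg])[0][1])
```

Even this toy pipeline shows the core trade-off: the model generalizes from examples, so its judgments are only as good as its training data.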

Privacy Concerns

While tech solutions like Spectrum's might fight online hate speech, they also raise questions about how much policing computers should be doing.

There are free speech implications, but not just for the speakers whose posts would be removed as hate speech, Irina Raicu, director of internet ethics at the Markkula Center for Applied Ethics at Santa Clara University, told Lifewire in an email interview. "Allowing harassment in the name of 'freedom of speech' has driven the targets of such speech (especially when aimed at particular individuals) to stop speaking—to abandon various conversations and platforms entirely," Raicu said. "The challenge is how to create spaces in which people can really engage with each other constructively."

AI speech monitoring shouldn't raise privacy issues if companies use publicly available information during monitoring, Fox said. However, if a company buys details on how users interact on other platforms to pre-identify problematic users, this could raise privacy concerns. "It can definitely be a bit of a gray area, depending on the application," he added.

Justin Davis, the CEO of Spectrum Labs, told Lifewire in an email that the company's technology can review 2,000 to 5,000 rows of data within fractions of a second. "Most importantly, technology can reduce the amount of toxic content human moderators are exposed to," he said.

We may be on the cusp of a revolution in AI monitoring human speech and text online. Future advances include better independent and autonomous monitoring capabilities that identify previously unknown forms of hate speech and other censorable patterns as they evolve, Moody said. AI will also soon be able to recognize patterns in specific speech and relate sources to their other activities through news analysis, public filings, traffic-pattern analysis, physical monitoring, and many other options, he added.

But some experts say that humans will always need to work alongside computers to monitor hate speech. "AI alone won't work," Raicu said. "It has to be recognized as one imperfect tool that has to be used in conjunction with other responses."

Correction 1/25/2022: Added a quote from Justin Davis to reflect a post-publication email.
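To make the division of labor Davis and Raicu describe concrete, here is a minimal, hypothetical sketch of a human-in-the-loop moderation gate: the model acts alone only on high-confidence cases and routes the ambiguous middle to human reviewers. The scorer, thresholds, and actions are invented for illustration and stand in for a trained model like the one sketched earlier.

```python
from dataclasses import dataclass

# Deliberately crude stand-in for a trained toxicity model: counts keyword
# hits and maps them to a score between 0 and 1.
def toxicity_score(text: str) -> float:
    toxic_markers = ("hate", "worthless", "get out")
    return min(1.0, sum(marker in text.lower() for marker in toxic_markers) / 2)

@dataclass
class Decision:
    text: str
    action: str

def moderate(text: str, auto_remove_at: float = 0.9, review_at: float = 0.5) -> Decision:
    """Act automatically only on high-confidence cases; route the ambiguous
    middle to human moderators, so people see less raw toxicity overall but
    remain in the loop for judgment calls."""
    score = toxicity_score(text)
    if score >= auto_remove_at:
        return Decision(text, "removed automatically")
    if score >= review_at:
        return Decision(text, "queued for human review")
    return Decision(text, "published")

for msg in ["have a nice day", "get out, you worthless troll", "I hate Mondays"]:
    print(moderate(msg))
```

The crude scorer queues "I hate Mondays" for review, a false positive that illustrates Raicu's point: automated judgments need human backstops.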