- A source of both invaluable content and toxic abuse, online comments are a double-edged sword.
- New technologies such as machine learning and artificial intelligence are being investigated for their ability to moderate online comments.
- Unlocking automatic comment moderation at scale may encourage new audiences to speak up.
Located across many of the internet’s news sites, social media platforms and blogs, comments sections have become a quintessential – and frequently controversial – part of the online experience. For readers scrolling through these ruminations from the peanut gallery, reactions might include amusement, despair, thoughtful debate, or anything in between.
Online comments present a quandary for the digital world. At their best, communities unite under a common interest to share and converse, serving up a steady stream of invaluable crowdsourced content, engagement and feedback. At their worst, there’s toxicity, trolling, harassment and threats.
This duality may not remain the status quo for much longer. Ways to curb the darker side of internet discussions are being explored, from migrating conversations to new platforms, to trialling automated moderation technologies.
It may soon be possible for any organisation to have a digitally curated comments section, bringing many of the benefits but with few of the drawbacks. In turn, the customer experience of interacting with brands on a public platform could also improve.
The brief history of online comments
It’s not difficult to see how online comments evolved in the way they did. For much of human history, individuals contributed very little published discourse unless they were professional writers or critics.
Enter the internet with its decentralised approach to speech and expression, a vast audience reach, and a rapacious appetite for content (any content will do). Suddenly, opinions could be read widely and responded to instantaneously, turning the digital world into a never-ending conversation on current events. Given the internet’s ability to anonymise, however, these conversations were not always conducted respectfully or constructively1.
Social media’s emergence as an engine room for digital discussions brought new commercial possibilities to online conversations. Now any organisation – councils, media brands, businesses, education institutes – could establish their own presence on these platforms, bringing them within earshot of millions of users. The result was a new channel for marketing and customer service, with customers able to air their feedback and grievances in a highly public setting to prompt faster response times.
How can organisations keep this electronic feedback civil? Several options exist: they can allow comments to be posted unaltered, shut comments sections down completely, or filter them via moderation.
If companies choose the path of moderation, they can employ human moderators, deploy keyword filters or empower select community members to flag or ban inappropriate content. None of these options are entirely economical or accurate, though – particularly for organisations with larger audiences.
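To see why keyword filters are cheap but imprecise, consider a minimal sketch of one. The word list and matching rules below are purely illustrative, not any platform's actual filter:

```python
# A minimal keyword-filter sketch (illustrative only; the blocklist and
# matching rules are hypothetical, not any real platform's filter).
import re

BLOCKLIST = {"idiot", "moron"}  # example terms only

def is_flagged(comment: str) -> bool:
    """Flag a comment if any blocklisted term appears as a whole word."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLOCKLIST for word in words)

print(is_flagged("You absolute idiot"))      # flagged
print(is_flagged("That was a classy move"))  # not flagged
```

Matching whole words avoids the classic substring false positive (innocuous words that contain a blocked term), but a filter like this still misses misspellings, deliberate obfuscation and, above all, context, which is why keyword filtering alone tends to be inaccurate at scale.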
Other attempts to discourage negativity, such as de-anonymising commenters, have so far had mixed results. In 2013, for instance, YouTube’s comments system was integrated with the Google+ social media network, a change that required formerly anonymous commenters to sign up to the social media service using real identities. The decision was reversed eight months later due to user confusion over the ‘unclear’ policy2.
New technologies and platforms might soon unlock the elusive ability to moderate at scale. Media companies such as the Washington Post and podcasting network Gimlet Media have experimented with hosting online discussion communities within the productivity messaging platform Slack3, which provides considerable administrator oversight and a range of moderation tools.
Grander innovations involve machine learning and artificial intelligence. Through its Jigsaw think tank, Google is exploring how these technologies can automatically identify and flag abusive online language.
Collaborating with entities such as the New York Times and Wikipedia, Jigsaw used machine learning to ‘teach’ its Conversation AI tool to identify abuse online. For this, large volumes of discussion data were combined with human-supplied definitions of inappropriate content. These were then fed through machine learning software.
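The ‘teach by example’ approach described here can be sketched in miniature: a tiny bag-of-words Naive Bayes classifier trained on human-labelled comments. The training data and labels below are invented for illustration; real systems such as Conversation AI train far more sophisticated models on millions of labelled examples:

```python
# Toy sketch of learning to spot abuse from labelled examples.
# All training data here is invented for illustration.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labelled_comments):
    """labelled_comments: list of (text, label), label in {'ok', 'abusive'}."""
    counts = {"ok": Counter(), "abusive": Counter()}
    docs = Counter()
    for text, label in labelled_comments:
        docs[label] += 1
        counts[label].update(tokenize(text))
    return counts, docs

def score_abusive(text, counts, docs):
    """P(abusive | text) under Naive Bayes with add-one smoothing."""
    vocab = set(counts["ok"]) | set(counts["abusive"])
    log_post = {}
    for label in ("ok", "abusive"):
        total = sum(counts[label].values())
        lp = math.log(docs[label] / sum(docs.values()))  # class prior
        for word in tokenize(text):
            lp += math.log((counts[label][word] + 1) / (total + len(vocab)))
        log_post[label] = lp
    # normalise the two log-posteriors into a probability
    m = max(log_post.values())
    exp = {k: math.exp(v - m) for k, v in log_post.items()}
    return exp["abusive"] / (exp["ok"] + exp["abusive"])

data = [
    ("great article thanks for sharing", "ok"),
    ("interesting point well argued", "ok"),
    ("you are an idiot shut up", "abusive"),
    ("idiot take nobody asked you", "abusive"),
]
counts, docs = train(data)
print(score_abusive("what an idiot", counts, docs))      # high score
print(score_abusive("thanks for sharing", counts, docs)) # low score
```

The key idea scales: the model learns which word patterns co-occur with human ‘abusive’ labels, so its accuracy depends entirely on the volume and quality of the labelled discussion data it is fed.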
The resulting tool claims to be able to detect harassment with 92% certainty4, rivalling a test panel of humans.
Once the tool is ready for primetime, the New York Times plans to use it to conduct an initial pass through the thousands of comments it receives every day, flagging potentially inflammatory content for review by its (now considerably unburdened) human moderators.
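That triage workflow amounts to a simple threshold pass: auto-approve low-scoring comments, queue high-scoring ones for a human. The scoring function and threshold below are hypothetical stand-ins, not the Times’s actual pipeline:

```python
# Hypothetical triage sketch: only high-scoring comments reach human review.
def score(comment: str) -> float:
    # Placeholder scorer (fraction of flagged words); in practice this
    # would be a trained model's toxicity score.
    flagged = {"idiot", "stupid"}
    words = comment.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def triage(comments, threshold=0.2):
    """Split comments into auto-approved and needs-human-review queues."""
    review = [c for c in comments if score(c) >= threshold]
    approve = [c for c in comments if score(c) < threshold]
    return approve, review

approve, review = triage(["lovely piece", "what an idiot take"])
```

The moderators’ workload then scales with the number of borderline comments rather than the total volume, which is where the economics of automated moderation come from.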
Will technology have the final say?
Though it shows great promise in efficiently identifying and flagging large volumes of abusive content, digital technology is unlikely to end online abuse or harassment completely.
However, it may provide tools that better empower organisations to invest in online communities or comment sections, providing scalable safe spaces that encourage civil discussion. In turn, online comments could become a vastly improved channel for user engagement and crowdsourced content ideas or feedback, enhancing the customer experience when digitally interacting with companies and brands.
In doing so, perhaps the silent majority of online audiences can also be encouraged to speak up, joining in with online discussions and further diversifying the internet’s breadth of opinion and calibre of content.