
Taming the trolls of online comments

Key takeaways

  • A source of both invaluable content and toxic abuse, online comments are a double-edged sword.
  • New technologies such as machine learning and artificial intelligence are being investigated for their ability to moderate human comments.
  • Unlocking automatic comment moderation at scale may encourage new audiences to speak up.

Located across many of the internet’s news sites, social media platforms and blogs, comments sections have become a quintessential – and frequently controversial – part of the online experience. For readers scrolling through these ruminations from the peanut gallery, reactions might include amusement, despair, thoughtful debate, or anything in between.

Online comments present a quandary for the digital world. At their best, communities unite under a common interest to share and converse, serving up a steady stream of invaluable crowdsourced content, engagement and feedback. At their worst, there’s toxicity, trolling, harassment and threats.

This duality may not remain the status quo for much longer. Ways to curb the darker side of internet discussions are being explored, from migrating conversations to new platforms to trialling automated moderation technologies.

It may soon be possible for any organisation to have a digitally curated comments section, bringing many of the benefits but with few of the drawbacks. In turn, the customer experience of interacting with brands on a public platform could also improve.

A brief history of online comments

It’s not difficult to see how online comments evolved in the way they did. For much of human history, individuals contributed very little published discourse unless they were professional writers or critics.

Enter the internet with its decentralised approach to speech and expression, a vast audience reach, and a rapacious appetite for content (any content will do). Suddenly, opinions could be read widely and responded to instantaneously, turning the digital world into a never-ending conversation on current events. Given the internet’s ability to anonymise, however, these conversations were not always conducted respectfully or constructively1.

Social media’s emergence as an engine room for digital discussions brought new commercial possibilities to online conversations. Now any organisation – councils, media brands, businesses, education institutes – could establish their own presence on these platforms, bringing them within earshot of millions of users. The result was a new channel for marketing and customer service, with customers able to air their feedback and grievances in a highly public setting to prompt faster response times.

Everything in moderation

How can organisations keep this electronic feedback civil? Several options exist: allow comments to be posted unaltered, shut them down completely, or filter them via moderation.

If companies choose the path of moderation, they can employ human moderators, deploy keyword filters or empower select community members to flag or ban inappropriate content. None of these options are entirely economical or accurate, though – particularly for organisations with larger audiences.

Other attempts to discourage negativity, such as de-anonymising commenters, have so far had mixed results. In 2013, for instance, YouTube’s comments system was integrated with the Google+ social media network, a change that required formerly anonymous commenters to sign up to the social media service using their real identities. The decision was reversed eight months later due to user confusion over the ‘unclear’ policy2.

Watched over by machines

New technologies and platforms might soon unlock the elusive ability to moderate at scale. Media companies such as the Washington Post and podcasting network Gimlet Media have experimented with hosting online discussion communities within the productivity messaging platform Slack3, which provides considerable administrator oversight and a range of moderation tools.

Grander innovations involve machine learning and artificial intelligence. Through its Jigsaw think tank, Google is exploring how these technologies can automatically identify and flag abusive online language.

Collaborating with entities such as the New York Times and Wikipedia, Jigsaw used machine learning to ‘teach’ its Conversation AI tool to identify abuse online. For this, large volumes of discussion data were combined with human-supplied definitions of inappropriate content and fed through machine learning software.

The resulting tool is claimed to detect harassment with 92% certainty4, rivalling a test panel of humans.

Once the tool is ready for primetime, the New York Times plans to use it to make an initial pass through the thousands of comments it receives every day, flagging potentially inflammatory content for review by its (now considerably unburdened) human moderators.

Will technology have the final say?

Though it shows great promise in efficiently identifying and flagging large volumes of abusive content, digital technology is unlikely to bring about the complete end of online abuse or harassment.

However, it may provide tools that better empower organisations to invest in online communities or comment sections, providing scalable safe spaces that encourage civil discussion. In turn, online comments could become a vastly improved channel for user engagement and crowdsourced content ideas or feedback, enhancing the customer experience when digitally interacting with companies and brands.

In doing so, perhaps the silent majority of online audiences can also be encouraged to speak up, joining in with online discussions and further diversifying the internet’s breadth of opinion and calibre of content.


1 https://www.scientificamerican.com/article/why-is-everyone-on-the-internet-so-angry/
2 https://www.theguardian.com/technology/2014/jul/16/youtube-trolls-google-real-name-commenter-policy
3 https://contently.com/strategist/2016/09/21/gimlet-slack-comments/
4 https://www.wired.com/2016/09/inside-googles-internet-justice-league-ai-powered-war-trolls/


One Comment

Thomas Heath says:

Interesting article, Valentine! Trolling can be harmful. Tocqueville famously coined the phrase ‘tyranny of the majority’, meaning that the majority censors minority fringe views through condemnation or ridicule; he considered this pernicious to both free speech and democracy. My concern is that this sort of AI filtering is simply a tyranny of the majority in another form. Perhaps comments could be filtered depending on age: if, for example, you were under 18, the AI would filter comments; otherwise nothing would be filtered.
