Call of Duty will use AI-powered moderation to counter voice chat toxicity

by Danny Craig

Activision has announced that it will incorporate a new moderation technology using AI into Modern Warfare II and the upcoming Modern Warfare III to detect players using hateful language in in-game voice chat.

The details:

  • The beta test was announced in a blog post by the publisher, which described it as the "next leap forward" in combating "toxic and disruptive behavior" in Call of Duty's voice chat. The moderation system uses ToxMod, an AI-powered voice moderation technology developed by Modulate that can detect hateful language in real time so that offenders can be suspended.
  • It was rolled out in North America with the Season 5 Reloaded update on August 30 and will begin rolling out globally (excluding Asia) with the release of MW3 on November 10. For now, it only supports English, though there are plans to add languages such as Spanish and French in the future.
  • With the introduction of voice chat moderation, Activision now covers every in-game communication method available to players: its text-based filtering system supports 14 languages and scans both text chat and usernames for offensive content.
  • Activision also revealed that more than one million accounts have violated the MW2 code of conduct in some way since the game's release. While roughly 20% of offenders did not re-offend after a warning, the remaining players have received a variety of suspensions and restrictions. The issue of innocent players receiving "shadow bans," which place them in lobbies with actual rule breakers, has also been addressed, with those who abuse the reporting system now facing punishment.
