
Twitch on Tuesday introduced a new way for streamers to fight accounts that evade channel-level bans. The automated tool, called “Suspicious User Detection,” can spot users trying to get around bans, giving anyone moderating a Twitch channel more recourse for dealing with potentially disruptive behavior before it starts. The company first announced in August that the ban evasion detection tool was on the way.

The company says it developed the tool in direct response to community feedback calling for more robust moderation options for handling users popping up with new accounts after being banned. After an account is flagged as either a “possible” or “likely” ban evader, a channel’s moderators can choose to take action against it manually.

[Image: Twitch’s suspicious user detection tool. Image Credits: Twitch]

Any messages sent from an account flagged as a likely violator will be automatically screened from chat, pending review by a moderator. For channels that want to be more aggressive with moderation, the same setting can be turned on for accounts flagged as possible ban evaders. Mods can also manually add users to the suspicious account list to keep closer tabs on them.

Twitch notes that, as with any automated moderation tool, false positives are possible, and it hopes to strike a balance between proactive, machine learning-powered detection and human judgment. “You’re the expert when it comes to your community, and you should make the final call on who can participate,” Twitch wrote in a blog post, adding that the system will improve over time as it trains on input from human moderators.

Twitch positions the new ban evasion detection system as one piece of a modular moderation toolkit alongside AutoMod, which gives moderators a way to review potentially harmful messages in chat, and Phone-Verified Chat, an option added last month that requires users to verify a phone number on their account before chatting. Twitch users can sign up for as many as five accounts with a single phone number, but a channel ban now applies to every account linked to that number, closing one of the easier workarounds for anyone looking to skirt the platform’s policies.

Twitch streamers have long pushed the company to do more to protect creators, particularly those most vulnerable to online harassment. This year alone, the #ADayOffTwitch and #TwitchDoBetter campaigns raised the visibility of marginalized creators facing widespread abuse on the platform, prompting the company to respond.

“We’ve seen a lot of conversation about botting, hate raids, and other forms of harassment targeting marginalized creators,” the company tweeted at the time. “You’re asking us to do better, and we know we need to do more to address these issues.”

Twitch’s longstanding lack of discovery tools already made success on the platform an uphill battle for underrepresented creators, and targeted harassment campaigns made matters much worse. A trove of Twitch payout data leaked last month painted a grim picture of diversity in the upper echelons of streaming success, where the top creators are almost exclusively white men.

In May, Twitch added more than 350 tags to help users find streamers by identifiers like gender, sexuality, race and ability. The update was an overdue step to drive discovery and surface more diverse creators on the platform, but without adequate moderation tools, many users worried that those same tags were being used to direct targeted harassment at their communities. In September, Twitch took the unusual step of filing a lawsuit against two users linked to thousands of bots powering mass harassment campaigns.