Twitter has committed to a specific timeline for rolling out changes to its Safety features, and announced new policies, including a ban on hateful display names, and improvements for second-hand "witness reporting" of abuse.
By January, Twitter plans to have implemented all the abuse changes outlined in the internal email published by Wired earlier this week, as well as the new ones shared today. The company even apologized for frequently promising improvements but then failing to take action, writing, "Far too often in the past we’ve said we’d do better and promised transparency but have fallen short in our efforts."
Here's a breakdown of what's new, beyond the enhancements to existing safety features:
Hateful Display Names - The ban on hateful display names could deter or punish "nameflaming," in which a user whose tweet has been quote-tweeted by a critic changes their display name to insult that critic, so the insult appears to all of the critic's followers who see the quote tweet.
Witness Reporting - Twitter will use how a reporter is related to the victim and the abuser to more strictly enforce its rules against harassment. This could help ensure reports aren't actually concerted trolling efforts and are instead coming from people legitimately offended by an abusive tweet. Twitter also will send notifications in-app and via email to second-hand reporters of abuse. Closing the loop this way should boost people's sense of safety on the platform even when they aren't the victim.
Content Rules - Violent groups will be banned. Hateful symbols in avatars and profile headers will be banned, while the same content in tweets will be obscured with an interstitial warning. Account relationship signals will be used to determine whether sexual advances were unwanted, spam will be defined more precisely, and technology will be adopted to prioritize the most egregious violations of these rules.
Here's the calendar:
The most glaring gap in this road map is any functional change to the way that Twitter users interact. As we wrote about last week, and as had been suggested by Hunter Walk, Twitter's biggest opportunity to shut down abuse lies in changing how replies work.
Right now, Twitter leaves it up to users to mute replies from certain accounts, such as ones that don't follow them, are newly set up, or haven't added a profile image, confirmed email address or confirmed phone number. But the devil is in the defaults, which leave these filters off. Meanwhile, hard-set rules chosen by users could accidentally silence innocent replies.
Twitter should consider turning on some of these rules by default while warning repliers that their messages might not get through unless they complete their profiles. That's important, because registering a phone number in particular makes it tough for trolls to abandon a suspended account and simply harass people from a different handle.
By using a combination of signals, Twitter could start more aggressively filtering out replies from suspected abusers, while giving people a path to regaining the ability to @ others by taking actions that introduce friction for trolls. Though it might take a little while to get right, and some benign content may be unnecessarily censored, right now the balance is far too skewed toward a laissez-faire approach that permits harassment.
For more on how tech could fight abuse, check out our feature article Silenced by 'free speech.'
This article originally appeared on TechCrunch.