As part of its analyst day presentation, Twitter shared plans for new tools coming to its platform. One of them is a Super Follow feature that will let creators charge for their content, giving paying followers perks such as access to exclusive material, badges, and more. The company is also working on two other ideas that could go live soon: a Communities feature and a Safety Mode tool for auto-blocking abusive accounts.

Communities will be able to draft their own guidelines and code of conduct

Starting with Communities, the fundamental premise appears similar to Facebook’s Community Pages. Twitter’s product lead, Kayvon Beykpour, mentioned that communities will be able to set their own guidelines covering participation rules and a general code of conduct for engaging with other members. However, the company has not revealed when the feature will roll out widely.

Communities (Image: Twitter)

“We’re working to create a product experience that makes it easier for people to form, discover, and participate in conversations that are more targeted to the relevant communities or geographies they’re interested in,” Twitter wrote on one of its slides. Essentially, the feature appears to build on the idea of Lists, but instead of following certain handles, Communities will let users converge around a shared topic of interest, which can be anything from a social movement to a particular genre of music, a TV show fandom, or animals.

Safety Mode is coming to Twitter

The social media giant also gave us a glimpse of another upcoming feature called Safety Mode, which is designed to curb hateful or abusive behavior on its platform. If your tweet attracts negative or vile responses, Twitter will alert you with a notification. More importantly, once you enable Safety Mode, Twitter will automatically block accounts that break its rules on hate speech and abusive language.

Abusive accounts will also be prohibited from engaging with users for a week

Additionally, Twitter will reduce the visibility of such abusive replies by showing them to fewer people. In the past, Twitter has reduced the reach and visibility of posts that spread misleading content, the most recent examples being posts related to the US elections and COVID-19 vaccines, and has even deleted some of them for violating its content policies.

“When you’re in Safety mode, we detect accounts that might be acting abusive or spammy and we limit the ability of those accounts to engage with you for 7 days,” Twitter explained. Unfortunately, there is no official word on a wider release of this feature, but it is certainly a step in the right direction.