Twitter Safety Mode
Safety Mode (Image: Twitter)

As part of its analyst day presentation, Twitter shared plans for new tools headed to its platform soon. One of them is Super Follow, which will let creators charge for their content while followers get perks such as access to exclusive material, badges, and more. The company is also working on two other ideas that might go live soon – a Communities feature and a Safety Mode tool for auto-blocking abusive accounts. 

Communities will be able to draft their own guidelines and code of conduct

Starting with Communities, the fundamental premise appears similar to Facebook's Community Pages. Twitter's product lead, Kayvon Beykpour, said that communities will be able to set their own guidelines on participation rules for a group, as well as a general code of conduct for engaging with other members. However, the company has not revealed when the feature will roll out widely. 

Communities (Image: Twitter)

“We’re working to create a product experience that makes it easier for people to form, discover, and participate in conversations that are more targeted to the relevant communities or geographies they’re interested in,” Twitter wrote in its slide. Essentially, it appears to be building on the idea of Lists, but instead of following certain handles, Communities will allow users to converge around a shared topic of interest – anything from a social movement to a particular type of music, a TV show fandom, or animals.

Safety mode is coming to Twitter

The social media giant also gave us a glimpse of another upcoming feature called Safety Mode, which is designed to curb hateful or abusive behavior on its platform. If your tweet attracts negative or vile responses, Twitter will inform you via a notification. More importantly, once you enable Safety Mode, Twitter will automatically block accounts that break its rules on hate speech and abusive language.

Abusive accounts will also be prohibited from engaging with users for a week

Additionally, Twitter will reduce the visibility of such abusive replies by showing them to fewer people. In the past, Twitter has reduced the reach and visibility of posts that spread misleading content – most recently, tweets related to the US elections and COVID-19 vaccines – and has even deleted some of them for violating its content policies.

“When you’re in Safety mode, we detect accounts that might be acting abusive or spammy and we limit the ability of those accounts to engage with you for 7 days,” Twitter said. Unfortunately, there is no official information regarding a wider release of this feature, but it is certainly a move in the right direction. 

I’ve been writing about consumer technology for over three years now, having worked with names such as NDTV and Beebom in the past. Aside from covering the latest news, I’ve reviewed my fair share of devices ranging from smartphones and laptops to smart home devices. I also have interviewed tech execs and appeared as a host in YouTube videos talking about the latest and greatest gadgets out there.