Twitter is ramping up its efforts to deal with the non-stop wave of harassment on its platform: the social network will now give users a second chance before they send a reply. The new update is currently being tested with a select group of users, and a prompt will appear if the site detects the use of "harmful" language.

Twitter is a strange place, and conversations can turn toxic quickly. Even if it's understandable that horrendous things get said in the heat of the moment, the online world's toxicity still needs to be addressed.

Instagram actually tested a similar feature last year, "nudging" users with a warning before they posted a potentially offensive comment. The platform concluded the test with an update, saying these kinds of nudges can actually encourage people to reconsider their words.

Updates like these are especially relevant right now, as many companies are figuring out how to moderate content while operating with reduced staff. Social networks have leaned more heavily on AI detection since the pandemic began keeping workers away from offices; at Facebook, Mark Zuckerberg says content moderators will be among the first to return to their jobs.

Twitter says the feature is a limited experiment and will only show up on iOS devices. The prompt, which pops up when a reply appears to contain harmful language, gives you a chance to rethink your choice of words before posting.

What remains unclear is how Twitter distinguishes harmful language from everything else, but the company has already updated the Twitter Rules and expanded its hate speech policies. So far, it has been implementing more stringent guidelines to curb online abuse, harassment, and any form of violence.

Twitter wants to clarify that it won't remove content simply because it's offensive, unless, of course, a post or reply violates its hate speech policies. If a post proves too extreme, however, moderators will remove it, and may suspend or even ban the account behind it.

The new feature is designed to discourage users from engaging in senseless online arguments that only provoke them into resorting to harmful words. Still, the prompt is just a warning, and you can post the reply anyway. It doesn't take away anyone's freedom to speak their mind, but at least Twitter will have done its part.