
If at first you don't succeed...

Twitter announced Wednesday an update to its ongoing effort to pare back the volume of what it deems to be toxic replies sloshing around the social media platform. Specifically, starting May 5 on the Twitter iOS app and shortly thereafter on the Android app, English-language users may be shown "improved prompts" asking them to rethink their typed-but-not-yet-sent replies in a new — and presumably more nuanced — set of circumstances.

Wednesday's announcement signals an evolution of an experiment first announced in May of 2020. Distinct from, but related in spirit to, Twitter's "humanization prompts" test, the idea as initially explained by Twitter in 2020 was that sometimes people benefit from taking a deep breath before tweeting.

"When things get heated, you may say things you don't mean," explained the company at the time. "To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful."

Notably, prompted users could still tweet whatever nonsense they wanted — they just had to deal with an additional step thrown in the mix by Twitter first.

At the time, the system was called out by some for being perhaps a bit too blunt in its deployment of gentle scolding.

Now, Twitter says it has learned from those early days.


"In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn't differentiate between potentially offensive language, sarcasm, and friendly banter," read the press release in part.

[Image captions: "Be nicer." / "Or not." Credit: Twitter]

As for what Wednesday's announcement means in practice? Well, a few things.

Twitter says the updated system now takes into consideration the relationship between the person writing the reply and the account at which it's directed. In other words, replies between two accounts that have long exchanged friendly missives might be treated differently than a first-time reply directed at an account the user doesn't follow.

The company also claims its systems can now more accurately detect profanity, and can distinguish — at least to some extent — context. Twitter, for example, lists "Adjustments to our technology to better account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways" as one of the ways in which its prompts system has been improved since the initial rollout of the test last year.

And while this all sounds a bit Sisyphean, Twitter insists its past prompting efforts have actually shown tangible results.

SEE ALSO: Twitter tests 'humanization prompts' in effort to reduce toxic replies

"If prompted, 34% of people revised their initial reply or decided to not send their reply at all," claims the company's press release. "After being prompted once, people composed, on average, 11% fewer offensive replies in the future."

Twitter, in other words, says these prompts work. Whether or not its oft-harassed users will agree is another thing altogether.

Related Video: How to permanently delete your social media

Topics: Social Media, Twitter