Well, it finally happened. After several legal snags and on the eve of a court-ordered trial, Elon Musk bought Twitter for $44 billion.
One of Musk’s first priorities as the owner of the social media giant is to change Twitter Blue. Twitter Blue is an optional $4.99-per-month subscription that Twitter launched last year. Subscribers currently get an edit button, ad-free access to some news sources, a feed of popular articles from the people they follow and a few other features. Musk wants to revamp the subscription and drive the price up to $20 a month. He also wants the account verification process to be part of that package. And this is where it gets troubling.
Currently, being “verified” means that Twitter has determined an account is authentic. Users can see whether an account is verified by looking for a blue check mark next to the account name. Twitter launched this feature to verify accounts of “public interest.” That includes government officials, companies, brands, journalists and media companies, sports teams, individual celebrities, influencers and others. You can see the check mark on the New York Times Twitter account page below.
Twitter launched this feature as a way to combat misinformation. The anonymity of the internet makes it easy for people to create fake accounts and pretend to be a public figure or organization. But when the genuine account carries a blue check mark, an impostor account without one is much easier to spot.
Today, users must go through an application process in order to get verified. But under Musk’s plan, users could simply pay for verification — if they’re willing to pony up $20 a month for it. And that kind of defeats the purpose of verification.
As a result, many people are concerned that Musk’s new process will exacerbate the dangerous spread of misinformation. It can already be hard to distinguish real news from sensational lies. And as deepfake content powered by artificial intelligence (AI) continues to proliferate, the spread of misinformation is likely to get worse.
Fighting Fakes With Facts
Fortunately, where there’s a problem, you can bet that a startup somewhere is working on a solution. Factmata is fighting misinformation with its own blend of AI and a language processing engine. Factmata’s technology gathers posts from social media sites, groups them into different “narratives,” ranks the narratives on the basis of popularity and threat level, then tracks how those narratives originate and how much they spread.
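To make that pipeline concrete, here's a toy sketch of the grouping-and-ranking steps described above. This is purely illustrative and not Factmata's actual implementation: the keyword-matching, the `threat_weights` values and the scoring formula are all invented for the example.

```python
from collections import defaultdict

def group_into_narratives(posts, keywords):
    """Assign each post to the first tracked narrative keyword it mentions.

    Real systems would use language models and clustering; simple substring
    matching stands in for that here.
    """
    narratives = defaultdict(list)
    for post in posts:
        text = post.lower()
        for kw in keywords:
            if kw in text:
                narratives[kw].append(post)
                break  # one narrative per post in this toy version
    return dict(narratives)

def rank_narratives(narratives, threat_weights):
    """Rank narratives by popularity (post count) weighted by an assumed threat score."""
    scored = []
    for name, posts in narratives.items():
        score = len(posts) * threat_weights.get(name, 1.0)
        scored.append((name, len(posts), score))
    # Highest combined popularity-and-threat score first
    return sorted(scored, key=lambda item: item[2], reverse=True)

# Invented sample data for demonstration
posts = [
    "Miracle cure discovered, doctors hate it!",
    "New study questions the miracle cure claims",
    "Election results were rigged, share this!",
    "Local team wins championship",
]
keywords = ["miracle cure", "rigged"]
threat_weights = {"rigged": 3.0, "miracle cure": 2.0}

narratives = group_into_narratives(posts, keywords)
ranking = rank_narratives(narratives, threat_weights)
```

The interesting design question, which the toy version sidesteps, is the grouping step: deciding that two differently worded posts tell the same story is where the real natural language processing work lives.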
This kind of technology is fascinating because it goes beyond the basic step of collecting data points on what people post about on the internet. If we’re able to broadly understand the stories people are telling, we can better understand our society as a whole. (Factmata was also rated very highly by our friends at KingsCrowd.)
Trust Lab develops software for social media platforms to help measure and detect extremism, hate speech, misinformation and other harmful content. The company’s “Trust Graph” provides scores, labels and metadata for content, accounts and transactions to help platforms assess how risky they are.
Factually Health tackles misinformation from a healthcare angle. The platform uses AI to assign medical information a credibility score that both healthcare organizations and patients can use. The score draws on content from forums, commercial bodies and other sources, and it also helps healthcare organizations manage their reputations.
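One simple way to combine signals from multiple sources into a single credibility score is a weighted average. The sketch below is a hypothetical illustration of that idea, not Factually Health's actual model; the source names, ratings and weights are all made up.

```python
def credibility_score(source_scores, weights=None):
    """Combine per-source ratings (0-100) into one weighted credibility score.

    `weights` lets more trusted sources (e.g. peer-reviewed content) count
    more than, say, commercial marketing material. Defaults to equal weights.
    """
    if weights is None:
        weights = {source: 1.0 for source in source_scores}
    total_weight = sum(weights[source] for source in source_scores)
    weighted_sum = sum(source_scores[source] * weights[source]
                       for source in source_scores)
    return weighted_sum / total_weight

# Invented example: ratings for one piece of medical content
sources = {"medical_forums": 80, "commercial_bodies": 55, "peer_reviewed": 95}
weights = {"medical_forums": 1.0, "commercial_bodies": 0.5, "peer_reviewed": 2.0}
score = credibility_score(sources, weights)
```

A real system would learn those weights from data rather than hand-tuning them, but the weighted-average shape is a common starting point for this kind of scoring.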
I miss the days when fake news was dominated by The Onion. Unfortunately, there’s a lot of nonsense floating around the internet that’s presented as factual information. And that means startups like Factmata, Trust Lab and Factually Health have a lot of work to do.
Will Musk’s Twitter takeover lead to an uptick in hate speech and misinformation on the platform? In some ways, it already has. But I have hope that determined startups and smart developers will help fight fake news with facts.