Use advice from DARPA or the CRAAP test to tell the difference between accurate information and suspicious posts.
In late 2019, YouTube said it had removed 3 million videos and nearly 2 million channels for breaking its spam rules. Twitter challenges millions of accounts every week to look for bots. Facebook removed 1.7 billion fake accounts in the first quarter of this year, up from 1.1 billion at the end of 2019.
It’s still news when any of the social media sites flag a post or a video as inaccurate, or even remove it. Twitter and Facebook have promised to enforce community standards more consistently, but that promise hasn’t fully translated into action. It’s increasingly up to individuals to do their own analysis before sharing a social media post. Sharing false information is not just bad internet etiquette; it can have offline consequences.
Two researchers at Clemson University have studied the strategy and tactics of professional trolls. In an interview with “Here and Now,” associate professor of economics Patrick Warren said the goal of these accounts is to cause social conflict online, which can become real conflict in the world.
In an interview with CNET, Gideon Blocq, CEO of VineSight, said the amplification of divisive toxic information is very effective.
“If you have a topic that’s already divisive and happens to be true, and you can use it to amplify inauthentic activity, that strategy is used a lot,” he said.
VineSight uses artificial intelligence to track the spread of viral disinformation.
Here are four questions to ask when you’re wondering about a particular account, one website and two browser extensions that will analyze Twitter accounts, and a few search engines for reverse image searches.
What does the profile look like?
One way to get a quick read on a social media account is to scan the profile of the person or bot doing the posting and ask a few questions. Check out the profile picture. Is it the generic icon that new users get by default or is it an original picture?
What is the name of the account? If it is a string of numbers and letters, that’s also a good sign that the name was auto-generated and the account is a bot.
Another element to check is the date the account was opened. If it is brand new or opened within the last six to 12 months, that’s another suspicious sign.
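The profile checks above can be expressed as a simple heuristic. This is an illustrative sketch, not a real detector: the inputs (`handle`, `has_default_avatar`, `created_at`) are hypothetical values you would read off a profile by hand or through an API, and the thresholds are assumptions based on the advice in this article.

```python
import re
from datetime import datetime, timezone

def profile_red_flags(handle, has_default_avatar, created_at):
    """Collect the suspicious profile signs described above."""
    flags = []
    # A brand-new account with the default icon is the first warning sign.
    if has_default_avatar:
        flags.append("default profile picture")
    # A long run of digits in the handle suggests an auto-generated name.
    if re.search(r"\d{6,}", handle):
        flags.append("auto-generated-looking handle")
    # Accounts opened within the last six to 12 months are also suspicious;
    # 365 days is an assumed cutoff at the generous end of that range.
    age_days = (datetime.now(timezone.utc) - created_at).days
    if age_days < 365:
        flags.append("account less than a year old")
    return flags
```

None of these signs is conclusive on its own; the more flags an account trips, the more skeptical you should be.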
What’s happening on the account’s timeline?
Look at the volume and frequency of posts. Is the account posting at all hours or posting more times per day than is humanly possible? That’s another warning sign.
Some researchers also suggest looking for frequent use of URL shorteners as another sign of a bot account. Another tip-off is when you see the same post in multiple languages.
Another data point to check is the followers of an account. Marketers and influencers sometimes buy a bot to boost an account’s follower count. Caroline Forsey of the marketing firm Hubspot explained the tactic this way in a blog post: “Essentially, someone will buy a bot, and the bot will like and comment on posts with a specific hashtag, or follow people with the hope that those people will then follow them back.”
Forsey went on to explain how to do an Instagram audit to detect fake followers. If a company is paying an influencer whose followers are mostly fake accounts or bots, that money is essentially wasted.
Another data point to consider is the ratio between how many followers the account has compared to how many people the account is following. If the account in question follows hundreds of people but no one follows it back, that’s suspicious.
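The timeline and follower checks can be sketched the same way. The cutoffs below are illustrative assumptions, not research-backed thresholds: 72 posts a day works out to one every 20 minutes around the clock, which is more than is humanly plausible for most accounts.

```python
def timeline_red_flags(posts_per_day, followers, following):
    """Flag inhuman posting volume and a lopsided follower ratio."""
    flags = []
    # Posting at all hours, faster than a person plausibly could.
    if posts_per_day > 72:
        flags.append("inhuman posting volume")
    # Following hundreds of accounts while almost no one follows back.
    if following >= 100 and followers < following * 0.01:
        flags.append("lopsided follower ratio")
    return flags
```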
How do the posts read?
MIT Technology Review compiled bot-spotting advice from a 2015 contest sponsored by the Defense Advanced Research Projects Agency (DARPA).
In addition to examining the profile, researchers recommend reading a few of the tweets or posts aloud. Do they sound like they were written by a native speaker or are they a little awkward? Bots often use formulaic or repetitive language in posts. Also, if an account tweets the same link over and over or seems fixated on one topic, that’s another telltale sign of a bot.
These tactics work about 40% of the time, according to the DARPA research, so ultimately you’ll need to use your own best judgment when deciding whether to share a suspicious post.
What is the source of the information?
Finally, it’s a good time to sharpen your media literacy skills. Before you share anything, look at the source of the information and apply some critical thinking. Do you recognize the name of the media company or organization? Do you know if the site has a particular point of view, either conservative or liberal?
Sarah Blakeslee and her team of librarians at California State University, Chico developed the CRAAP test to help students evaluate a source of information:
- Currency: How recent is the information?
- Relevance: Does the information actually relate to the topic of the article or post?
- Authority: What are the author’s credentials for writing on the topic?
- Accuracy: Does the writer provide sources for the information in the article?
- Purpose: Is the goal of the post or article to inform or to provoke a response?
FactCheck.org is another resource for checking news that seems too good, bad, or bizarre to be true.
Tools and browser extensions
If you’d rather take a more analytical approach to spotting bots, there are websites and browser extensions that will give you an educated guess. Bot Sentinel is a free platform developed to detect and track political bots, trollbots, and untrustworthy Twitter accounts. Bot Sentinel uses machine learning to assess accounts based on the account’s content and posting habits. You paste a Twitter handle into the tool and the algorithm spits out a ranking:
- Normal: 0 – 24%
- Satisfactory: 25 – 49%
- Disruptive: 50 – 74%
- Problematic: 75 – 100%
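Mapping a score to one of those bands is straightforward. This sketch assumes a 0–100 score as input; the band boundaries come from Bot Sentinel’s published ranges above.

```python
def bot_sentinel_label(score):
    """Map a 0-100 rating to Bot Sentinel's published score bands."""
    if score <= 24:
        return "Normal"
    if score <= 49:
        return "Satisfactory"
    if score <= 74:
        return "Disruptive"
    return "Problematic"
```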
The company also maintains a searchable database of all accounts analyzed by the algorithm, including deactivated accounts. There is also a Bot Sentinel browser extension for Firefox and Chrome.
In May, NortonLifeLock Research Group released the BotSight browser extension that also analyzes Twitter accounts. It displays a small icon and a percentage score next to each account to show whether it is more likely to be a real person or a bot. According to the FAQ, any account that scores over 90% is likely run by a human. The extension rates accounts mentioned in a tweet as well as the account tweeting. The extension works in Brave, Chrome, and Firefox. BotSight is available as an app for iOS.
Image search tools
With deep fake videos and altered images, it’s smart to be skeptical about images also. If you are suspicious about a social media account, you can do a reverse image search on the profile image or images that the account has shared.
If you’re using Chrome, you can right-click on an image and choose “Search Google for image.” If you’re using another browser, you can go to Google Images directly and paste the image’s URL into the search box.
TinEye is another image search tool that will help you determine the source of an image. This engine uses image recognition instead of metadata to spot duplicate images. You can upload an image from your computer, search by URL, or drag an image from a tab in your browser to search on an image. TinEye also has browser extensions for Firefox, Chrome, and Opera. Here are the results of an image search for “dollar bill.”
If you’re on your phone, these apps will give you an idea of where an image came from or how many places it has been used:
- Google Lens (Android): A free app to identify images and learn more about them.
- Reversee (iOS): A free app that lets you get to Google Search quickly and search for images.
- Photo Sherlock (Android and iOS): A free app that lets you take a picture or grab it from your photo library and search Google with it.