By D.B. Hirsch
June 21, 2017 11:51 am - NewsBehavingBadly.com

We encourage you to read Kris Shaffer’s excellent analysis of social media bots and their effect on politics. Especially useful is his 101 on spotting them, particularly on Twitter:

Sleepless accounts: “[I]t never sleeps. That is, if you download its tweets from a time period of several days, and there is no break in activity when a human would sleep.”
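
If you’re comfortable with a little code, this check is easy to automate. Here is a rough Python sketch, assuming you have already downloaded an account’s tweets and parsed their timestamps into datetime objects; the four-hour rest threshold is our own guess at a reasonable cutoff, not anything from Shaffer’s piece:

from datetime import datetime, timedelta

def looks_sleepless(timestamps, min_rest=timedelta(hours=4)):
    """Return True if there is no pause of at least min_rest anywhere
    in the sample, i.e. no window long enough for a human to sleep."""
    if len(timestamps) < 2:
        return False  # not enough data to judge
    ordered = sorted(timestamps)
    longest_gap = max(later - earlier for earlier, later in zip(ordered, ordered[1:]))
    return longest_gap < min_rest

# Invented example: a tweet every 20 minutes, around the clock, for 3 days
sample = [datetime(2017, 6, 1) + timedelta(minutes=20 * i) for i in range(216)]
print(looks_sleepless(sample))  # True: the account never "sleeps"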

Retweet bots: “[P]rogrammed to automatically retweet content that comes from certain ‘catalyst’ accounts, or content that contains certain keywords. … [R]etweet bots can function to amplify, normalize, and mainstream disinformation.”
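
A quick way to eyeball this pattern is to measure how much of an account’s output is retweets, and how concentrated those retweets are on a handful of source accounts. The sketch below assumes the tweets have already been flattened into simple dictionaries; the field names are ours, not Twitter’s:

from collections import Counter

def retweet_profile(tweets):
    """Return the share of tweets that are retweets and the three most
    retweeted source accounts."""
    retweets = [t for t in tweets if t.get("is_retweet")]
    share = len(retweets) / len(tweets) if tweets else 0.0
    sources = Counter(t["retweeted_user"] for t in retweets)
    return share, sources.most_common(3)

# Invented data: 90 retweets of one "catalyst" account, 10 original tweets
tweets = [{"is_retweet": True, "retweeted_user": "catalyst_account"}] * 90 \
         + [{"is_retweet": False}] * 10
print(retweet_profile(tweets))  # (0.9, [('catalyst_account', 90)])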

Reply bots: “[I]mmediately replies with pre-loaded content. During the GamerGate campaign, [the author] was barraged with disinformation tweets from several of these accounts every time a tweet of [his] contained the text ‘GamerGate’.”

Stolen content: “Automated accounts seeking to spread malware and/or collect user data… steal content from other accounts. As we’ve discussed elsewhere, accounts seeking to misdirect users to their site sometimes monitor accounts known for having ‘click-bait’ content… and reproduce their content, but replacing the link in the original tweet with a link to their own site.”
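
One way to spot this link-swapping is to strip the URLs out of two tweets and compare what is left: near-identical text pointing at different links is a red flag. A small Python sketch, with an invented similarity threshold:

import re
from difflib import SequenceMatcher

URL_RE = re.compile(r"https?://\S+")

def looks_like_link_swap(text_a, text_b, threshold=0.9):
    """True if the two tweets read (almost) the same once URLs are removed,
    but point at different links."""
    links_a, links_b = URL_RE.findall(text_a), URL_RE.findall(text_b)
    stripped_a = URL_RE.sub("", text_a).strip()
    stripped_b = URL_RE.sub("", text_b).strip()
    same_text = SequenceMatcher(None, stripped_a, stripped_b).ratio() >= threshold
    return same_text and links_a != links_b

# Invented example tweets and URLs
original = "You won't believe what happened next https://clickbait.example/story"
knockoff = "You won't believe what happened next https://malware.example/trap"
print(looks_like_link_swap(original, knockoff))  # True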

Stolen profile images: “Many bots and sockpuppets are examples of catfishing, the use of a false, often stolen, persona for dishonorable purposes.”

Tell-tale account names: “[F]ake accounts will often have tell-tale account names (link goes to a pdf download), often artifacts of the automated account-creation process. These names may include variations on a single ‘real’ name, variations on a celebrity name, and long strings of alphanumeric garbage (often after a ‘real’ name).”
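
A simple pattern match catches many of these. The regular expression below (a plausible name followed by a long run of digits or hex-like characters) is only an illustrative guess at the kind of garbage Shaffer describes, not a definitive rule:

import re

# A plausible name followed by a long run of digits or hex-like characters
GARBAGE_TAIL = re.compile(r"^[A-Za-z]+(?:\d{6,}|[0-9a-fA-F]{8,})$")

def suspicious_handle(handle):
    return bool(GARBAGE_TAIL.match(handle))

# Invented handles for illustration
for handle in ["JaneDoe", "JaneDoe84103772", "SomeCelebrity", "SomeCelebrity4f9c2e81b"]:
    print(handle, suspicious_handle(handle))
# JaneDoe False, JaneDoe84103772 True, SomeCelebrity False, SomeCelebrity4f9c2e81b True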

Recent accounts: “Many of the bots and sockpuppets we have seen have been created recently.”

Activity gaps or filler content: “Some users hold onto their false accounts for multiple campaigns. When they do so, there may be significant gaps in the account activity between campaigns… or they may insert filler content, most commonly inspirational quotes or pornography.”

Coordination: “Bots and sockpuppets often work in coordination with other bots and sockpuppets.”

Metadata similarities: “In addition to the data ― the actual content of the tweets ― similarities in account metadata can be indicative of a botnet (or socknet). For example, accounts employed in a botnet were often created at or around the same time. Twitter’s API (application programming interface) includes down-to-the-second information about when an account was created with every tweet downloaded. If we observe multiple accounts participating in a disinformation or harassment campaign and they were created within minutes of each other, there’s a strong chance they are coordinated by the same individual.”
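
Since the API hands you each account’s creation time, checking for this kind of clustering takes only a few lines. A minimal Python sketch, with invented account names and timestamps, and a five-minute window chosen for illustration:

from datetime import datetime, timedelta

# Invented account names and creation timestamps, for illustration only
accounts = {
    "suspect_a": datetime(2017, 3, 4, 2, 14, 9),
    "suspect_b": datetime(2017, 3, 4, 2, 15, 41),
    "suspect_c": datetime(2017, 3, 4, 2, 17, 2),
    "longtime_user": datetime(2011, 8, 19, 16, 3, 55),
}

def creation_clusters(created_at, window=timedelta(minutes=5)):
    """Group accounts whose creation times fall within `window` of the
    previous account in time order; return only groups of two or more."""
    ordered = sorted(created_at.items(), key=lambda kv: kv[1])
    clusters, current = [], [ordered[0]]
    for name, ts in ordered[1:]:
        if ts - current[-1][1] <= window:
            current.append((name, ts))
        else:
            clusters.append(current)
            current = [(name, ts)]
    clusters.append(current)
    return [c for c in clusters if len(c) > 1]

for cluster in creation_clusters(accounts):
    print([name for name, _ in cluster])  # ['suspect_a', 'suspect_b', 'suspect_c']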

As The Hill reported last month, social media platforms are beginning – a full year too late – to fight back, but you can take proactive steps yourself:

• Bookmark the link for reporting abusive users, learn what information is needed to file an effective report, and use it. Twitter and Facebook also recommend blocking these accounts.

• Two social media experts we work with recommend posting a comment on your timeline with a “broken” @-mention (inserting a slash between the @ symbol and the username on Twitter, for example, writing @/username instead of @username).

• Both experts also recommend making a separate post calling out the disinformation and linking to a reliable, accurate source that clarifies or debunks the fake news.

D.B. Hirsch
D.B. Hirsch is a political activist, news junkie, and retired ad copywriter and spin doctor. He lives in Brooklyn, New York.