eli5: How do social media “bots” work? Are they like programmed trolls? How do their accounts/comments seem as if they could be a real person’s?


In: Technology

A lot are actual people. There are offices in some countries full of people getting paid to maintain dozens to hundreds of accounts and shitpost constantly all day long. These operations are commonly called troll farms.

It sounds crazy, but think how many posts you could make in a solid 10hrs if you were just swooping in, posting some sort of divisive one-liner, and then dipping out to let the people argue amongst themselves in your wake. It’s probably the highest rate of return of anything you can spend your Psy-Ops money on.

Machine learning often works by mimicking human behavior, and with enough data it can mimic complex things like language and text. If you wanted to make a bot that mimicked football fans, you would find everyone on a social network who follows football pages/accounts and train the bot to mimic the responses those people write. The bot would very effectively sound like a football fan, though in longer posts or threads it wouldn't make much sense. You can do the same thing by mimicking trolls or political accounts.
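As a toy illustration of that "mimic the fans" idea (the example posts, names, and code here are my own sketch, not any real bot): a simple Markov chain learns which word tends to follow which in a corpus of fan posts, then stitches new posts together from those learned pairs. Real bots use far more sophisticated language models, but the principle is the same.

```python
import random
from collections import defaultdict

# Made-up training corpus: short posts a bot might scrape from football fans.
corpus = [
    "what a goal that was absolutely incredible",
    "ref was terrible all match what a joke",
    "that was a terrible pass honestly",
    "what a match that was honestly incredible",
]

def build_chain(posts):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for post in posts:
        words = post.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=10, rng=random):
    """Walk the chain from a starting word to produce a fan-sounding post."""
    word, out = start, [start]
    while len(out) < max_words and chain[word]:
        word = rng.choice(chain[word])
        out.append(word)
    return " ".join(out)

chain = build_chain(corpus)
print(generate(chain, "what"))
```

Short posts generated this way sound plausibly on-topic, but exactly as the answer says, longer output quickly stops making sense, because the model only knows which word follows which, not what it is talking about.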

The reason they are able to seem so much like people is that people are imperfect at using language themselves. The bot will mimic all these little mistakes as well, which makes it look like a person. The first chatbots that tricked people into thinking they were real mimicked young children or people learning English for the first time. It was hard for the people talking to them to tell whether the language mistakes came from a language barrier or from the fact that it was a computer program.

Humans often tend to give the benefit of the doubt to someone they are talking to, whether that is a scammer or a bot.

Bots that tweet completely random things and don't really respond to other accounts are often just retweeting or copy-pasting politically charged tweets from other accounts. A bot can pick what to copy by evaluating the response a tweet gets and what kind of people retweet it. So these bots appear human because they are literally just copying other humans.
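The selection logic can be as crude as an engagement threshold. A minimal sketch, with made-up feed data and a hypothetical function name:

```python
# Hypothetical feed: each entry is a tweet with its retweet count.
feed = [
    {"text": "nice weather today", "retweets": 3},
    {"text": "divisive political slogan", "retweets": 950},
    {"text": "cute cat picture", "retweets": 120},
]

def posts_worth_copying(feed, min_retweets=500):
    """Copy-paste bot logic: only repost what has already spread widely."""
    return [p["text"] for p in feed if p["retweets"] >= min_retweets]

print(posts_worth_copying(feed))  # the bot would repost only the slogan
```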

There are different levels of bots. Some post generic crap that is already typed out word for word in a database of things to say. Others search for tweets with keyword combinations and then respond with appropriate cached responses for those keywords.

The trick is not to create the movement, but to create the flag for people to rally behind.

A bot is just a piece of software designed to do the work a human would otherwise do.

A social media bot is a piece of software that interacts with social media so that a human doesn’t have to.

A bot could automate web browser behaviour to do its job, and the social media platform can't do a whole lot to stop it (CAPTCHAs aren't 100% effective). But social media platforms also happily provide ways to access their functionality directly through software code, because bots have all kinds of useful purposes, like allowing companies to schedule posts or post to multiple platforms at once.
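The legitimate "schedule posts" case is just a timed queue plus an API call. A sketch assuming a hypothetical endpoint and payload format (real platform APIs, authentication, and field names all differ):

```python
import json
import time

# Hypothetical queue of scheduled posts (times are Unix timestamps).
QUEUE = [
    {"text": "Our big announcement!", "post_at": 1_700_000_000},
    {"text": "Follow-up thread", "post_at": 1_700_003_600},
]

def due_payloads(queue, now):
    """Build a JSON body for every queued post whose scheduled time has passed."""
    return [json.dumps({"status": item["text"]})
            for item in queue if item["post_at"] <= now]

for body in due_payloads(QUEUE, now=time.time()):
    # a real bot would send this, e.g. POST https://api.example.com/v1/statuses
    print("would POST:", body)
```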

Of course bots can also be used nefariously: posting spam, sowing discord, spreading a particular political message, trying to scam people, etc. And obviously some of these goals can be achieved more effectively if the bot is indistinguishable from a real live human. So the bot could be used in combination with pre-written messages, simple logic to determine what to post and when, keywords to respond to, and, if the developer was really dedicated, even artificial intelligence to interact. (Even the best AI isn't good enough at conversation to fool a reasonable person for more than a couple of messages, but the kind of people who will believe crazy conspiracy theories because of a single post they read on Facebook aren't exactly the most rational audience.)

The bot might be programmed to perform all sorts of benign actions over weeks, months, or even years to cultivate a seemingly authentic account before it starts its intended behaviour, or before the developer rents his bot collection to the highest bidder (which is what happens when people "pay for likes"). Alternatively, the bot developer could find or purchase compromised accounts that previously belonged to real people; such accounts are readily available on the dark web.
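That "aging" phase can be as simple as a randomized schedule of harmless actions. A sketch with invented action names, just to show the shape of the idea:

```python
import random

# Hypothetical benign actions used to age an account before it goes active.
BENIGN_ACTIONS = ["like a trending post", "follow a celebrity", "share a meme"]

def warmup_plan(days, rng):
    """Spread 1-3 harmless actions per day so the account history looks organic."""
    plan = []
    for day in range(days):
        for _ in range(rng.randint(1, 3)):
            plan.append((day, rng.choice(BENIGN_ACTIONS)))
    return plan

plan = warmup_plan(30, random.Random(7))
print(f"{len(plan)} benign actions scheduled over a month")
```

The randomness matters: an account that does exactly the same thing at exactly the same time every day is much easier for a platform to flag.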

It could also be that the bot isn’t a bot at all, but a real person. Maybe they are just really dedicated to a particular ideology. Or perhaps this is their job and they are paid by the state for this work; it’s well known that Russia, China, and sometimes even the USA have used state-backed actors to spread a political message on social media.