Internet insecurity has reached a new milestone: The majority of web traffic (51%) now comes from bots, small pieces of software that run automated tasks, rather than from humans, according to a new report.
More than a third (37%) comes from so-called bad bots — bots designed to perform harmful activities, such as scraping sensitive data, spamming and launching denial-of-service attacks — for which banks are a top target. (“Good bots,” such as search engine crawlers that index content, account for 14% of web activity.)
About 40% of bot attacks on application programming interfaces in 2024 were directed at the financial sector, according to the 2025 Bad Bot Report from Imperva, a Thales company. Almost a third of those (31%) involved scraping sensitive or proprietary data from APIs, 26% were payment fraud bots that exploited vulnerabilities in checkout systems to trigger unauthorized transactions, and 12% were account takeover attacks in which bots used stolen or brute-forced credentials to gain unauthorized access to user accounts and then carry out a breach or theft from there.
For the report, researchers analyzed bot attack data for more than 4,500 customers, 53,000 customer accounts and more than 200,000 customer sites. The report is thus not a complete representation of all internet activity, but experts say its findings match what they are seeing in the field.
“The findings are directionally correct but not surgically precise,” said Gary McAlum, former chief security officer at USAA and former chief information security officer at AIG. “Imperva’s dataset is very large, so good enough. Banks have been dealing with bots for years, particularly regarding account takeover and credential stuffing attacks.”
The finding that 51% of web traffic comes from bots did not surprise him.
“The real value proposition of bots, both good and bad, is they provide speed and scale,” McAlum said. “While good bots serve important roles like indexing sites for search engines or monitoring website performance, the surge in malicious bots shows the growing sophistication and scale of cyber threats. The rise of AI is only going to make this worse.”
Valerie Abend, global financial services cybersecurity lead at Accenture, said she also sees the growing threat of AI reflected in these figures.
“Bot deployment is the classic whack-a-mole issue,” she said. “It’s not a new issue, but it’s grown in volume and pace.”
AI driving the rise of bad bots
The rise of bad bots over the past few years, from 30% of all web traffic in 2022 to 33% in 2023 to 37% in 2024, was largely driven by the adoption of AI and large language models, according to Imperva researchers’ analysis.
Attackers now use AI not only to generate bots but also to analyze failed attempts and refine their techniques to bypass detection with greater efficiency, the report said.
“A few years ago, there were bot-driven hacks, but they were bots designed by human beings, a bad guy who would sit there and analyze a given set of APIs, like banking APIs, and then figure out, ‘How can I write a bot that would mimic that?'” said Kevin Kohut, founder of API First, LLC and former senior manager of cloud security at Accenture. “Now what we’re seeing is, you don’t need to be as smart as the bad guys. You can just go to an AI model and say, how would I write something to open a new bank account?”
Some bad bots can mimic legitimate traffic coming from a residential address, which makes detection more challenging. According to the report, 21% of bot attacks that routed their traffic through internet service providers did so via residential proxies.
The report also looked into which generative AI models are being used to create bad bots. More than half (54%) were developed using ByteSpider Bot, according to the report. Just over a quarter (26%) were made using Apple Bot, 13% with ClaudeBot and 6% with ChatGPT. “ByteSpider’s dominance in AI-enabled attacks can largely be attributed to its widespread recognition as a legitimate web crawler, making it an ideal candidate for spoofing,” the report said.
Experts interviewed for this article were most struck by the rise in bots attacking APIs.
“People used to say APIs are the new perimeter,” Abend said. “I would also say they are the supply chain of the bank increasingly. These APIs are enabling application-to-application data flow. The idea that you have automated bots going after automated API calls – that’s the future of cyber warfare.”
What banks can do about bad bots
Banks typically apply a combination of detective and preventive controls in the fight against bad bots, McAlum said.
“This is an arms-race problem, so the ability to detect and differentiate bot traffic is key,” McAlum said. “Traditional rules-based systems based on velocity and frequency will not be enough.”
AI-generated bots can bypass even advanced Captcha screens, he said.
“Advanced capabilities within web application firewalls along with a strong cyber threat intelligence sharing model will help,” McAlum said. “Securing APIs is critical, [as is] enforcing strict authentication protocols along with rate limiting [setting limits on the number of requests a user can make to a server or application within a specified time period] and anomaly detection to prevent exploitation.”
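To make the rate-limiting idea concrete, here is a minimal sketch (illustrative only, not drawn from the report) of a token-bucket limiter in Python; the per-client limits are assumptions chosen for the example:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each client may burst up to
    `capacity` requests, refilled continuously at `rate` tokens/sec."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # Over the allowed rate; reject or delay the request.

# One bucket per API client; e.g., 5 requests/sec with bursts of 10.
buckets: dict[str, TokenBucket] = {}

def check_request(client_id: str) -> bool:
    bucket = buckets.setdefault(client_id, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```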
Traditional threat detection techniques, such as watching for abnormal upticks in web traffic, can help organizations realize that site traffic could be artificial and potentially malicious, said Tracy Goldberg, director of cybersecurity at Javelin Strategy & Research. More threat intelligence sharing of suspicious IP addresses would help organizations better identify bad bots, she said.
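As a simplified illustration of the abnormal-uptick monitoring Goldberg describes, the sketch below compares each minute's request count against a rolling baseline; the window size and threshold are assumptions, and real deployments would combine many more signals:

```python
from collections import deque
import statistics

class TrafficAnomalyDetector:
    """Flags abnormal upticks by comparing the latest per-minute
    request count against a rolling baseline (a simple z-score test)."""
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.counts = deque(maxlen=window)  # last `window` minutes
        self.threshold = threshold          # std deviations above the mean

    def observe(self, requests_this_minute: int) -> bool:
        suspicious = False
        if len(self.counts) >= 10:  # need some baseline history first
            mean = statistics.mean(self.counts)
            stdev = statistics.pstdev(self.counts) or 1.0
            z = (requests_this_minute - mean) / stdev
            suspicious = z > self.threshold
        self.counts.append(requests_this_minute)
        return suspicious

# Steady traffic of ~120 requests/minute, then a sudden bot-like spike.
detector = TrafficAnomalyDetector()
for minute, count in enumerate([120, 115, 130, 118, 122] * 3 + [950]):
    if detector.observe(count):
        print(f"minute {minute}: abnormal traffic spike ({count} requests)")
```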
“Honeypots, which remain a great tactic for deception in detection, also play an underappreciated role in detecting bots,” she said. Honeypots are decoy systems or datasets, made to look enticing but containing nothing real, that are placed in the open to lure attackers into a trap where defenders can watch how they operate.
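A bare-bones version of the idea might look like the following hypothetical decoy endpoint (the route name, fake data and framework choice are illustrative, not from the article):

```python
# Hypothetical sketch: a decoy API route that no legitimate client is
# ever given. Any caller that discovers and hits it is likely a bot
# probing the API surface, so its address gets logged for blocking.
import logging
from flask import Flask, jsonify, request

app = Flask(__name__)
log = logging.getLogger("honeypot")

@app.route("/api/v1/internal/accounts-export")  # advertised nowhere
def decoy_accounts_export():
    log.warning("honeypot hit from %s (UA: %s)",
                request.remote_addr,
                request.headers.get("User-Agent", "unknown"))
    # Return plausible-looking but entirely fake data to keep the bot
    # engaged while defenders observe how it operates.
    return jsonify({"accounts": [{"id": "000000", "balance": 0.0}]})

if __name__ == "__main__":
    app.run()
```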
Another way banks can help protect themselves is by investing in a Model Context Protocol, or MCP, server. “What a development portal would be to human developers, an MCP server would be to AI agents,” Kohut said. “So the idea is, instead of having AI agents take a wild guess at how they’re supposed to consume our APIs, we will create an MCP server that will give them the information they need.”
There is a catch-22 to this, Kohut said, because for a given AI model to properly consume MCP, the model has to know what the MCP protocol is.
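For readers who want a sense of what Kohut is describing, here is a minimal sketch using the open-source MCP Python SDK; the server name, tool and rate data are hypothetical placeholders, not a real bank API:

```python
# Minimal MCP server sketch using the official MCP Python SDK
# (pip install mcp). The tool below is hypothetical: it documents,
# for an AI agent, the sanctioned way to query a bank's rate data
# instead of letting the agent guess at the API surface.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bank-api-guide")  # server name shown to AI agents

@mcp.tool()
def get_current_rates(product: str) -> dict:
    """Return published rates for a product ('savings' or 'cd').
    This docstring is what the AI agent reads to learn how to call us."""
    rates = {"savings": 4.1, "cd": 4.5}  # placeholder data
    return {"product": product, "apy_percent": rates.get(product)}

if __name__ == "__main__":
    mcp.run()  # serves tool descriptions and calls over MCP's transport
```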
Banks also need to make sure the API systems they are using are secure and locked down, Kohut said.
Securing APIs is a “classic challenge of ensuring that the API inventory is maintained, that it’s accurate, and then that all of that is encompassed in that gateway,” Abend said. “Just like authentication and authorization are important and role-based access control and least privilege, encryption and protecting your keys, testing and scanning, threat modeling, and doing all the things you would do for other areas.”
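As a toy illustration of two of the controls Abend lists, role-based access control and least privilege, the following sketch maps roles to the minimum API scopes they need (the roles and scopes are invented for the example):

```python
# Illustrative sketch, not from the article: role-based access control
# with least privilege. Each role grants only the minimum set of API
# scopes that caller needs, and access is denied by default.
ROLE_SCOPES = {
    "teller":   {"accounts:read"},
    "payments": {"accounts:read", "payments:write"},
    "auditor":  {"accounts:read", "logs:read"},
}

def authorize(role: str, required_scope: str) -> bool:
    """A request succeeds only if the caller's role explicitly grants
    the scope the endpoint requires; unknown roles get nothing."""
    return required_scope in ROLE_SCOPES.get(role, set())

assert authorize("payments", "payments:write")
assert not authorize("teller", "payments:write")  # least privilege holds
```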
Meanwhile, the bot problem continues to grow, McAlum said. “While banks and financial institutions are fighting this problem on the receiving end, until internet service providers take more aggressive action to help identify and filter this traffic, it will continue to be an uphill battle,” he said.