Key Takeaway
Artificial intelligence is now being used by cybercriminals to scan thousands of smart contracts in minutes, find hidden vulnerabilities, and execute profitable exploits at almost zero cost. Research from Anthropic found that AI agents successfully exploited 63% of a benchmark of real smart contracts hacked over the past five years, with simulated theft exceeding $4.6 million. The biggest targets are old, unmaintained contracts still holding live funds — and most DeFi developers are not yet protected.
For years, hacking a smart contract required deep technical skill, weeks of manual code review, and a high-value target worth the effort. That reality no longer exists.
Today, a cybercriminal with a few hundred dollars of computing budget can point an artificial intelligence agent at thousands of blockchain smart contracts, let it scan automatically for vulnerabilities, generate working exploits, and attempt to drain funds — all without writing a single line of code manually. Security researchers are now sounding the loudest alarm the decentralised finance industry has ever heard, and the numbers behind their warning are impossible to ignore.
What Is Actually Happening Right Now?
Experts at blockchain security firm Halborn have reported a sharp increase in automated attacks targeting legacy smart contracts in 2026. Attackers are using large language models — the same technology that powers consumer AI chatbots — to scan contract code at machine speed and surface exploitable weaknesses that human auditors missed.
Gabi Urrutia, Field Chief Information Security Officer at Halborn, described the threat in clear terms: “AI has made legacy-contract hunting cheaper, faster, and more scalable, especially for old forks, dusty deployments, under-maintained vaults, and inherited code paths.”
The pattern security teams are now observing is consistent with automation rather than manual effort. Attackers are probing thousands of contracts in minutes, running identical exploit attempts across multiple protocols simultaneously — a volume and speed that no human hacker working alone could achieve.
Also Read: AI in Smart Contract Auditing Explained
What Did Anthropic’s Research Reveal?
The clearest picture of this threat comes from research published by Anthropic, the artificial intelligence safety company. Anthropic built a benchmark called SCONE-bench, which tested AI agents against 405 real smart contracts that were exploited in the wild between 2020 and 2025.
The results were alarming for the entire blockchain industry.
| What Was Tested | Result |
|---|---|
| 405 real exploited smart contracts (2020 to 2025) | 63% successfully exploited by AI agents |
| Simulated total theft across exploited contracts | $4.6 million |
| 2,849 newly deployed contracts with no known vulnerabilities | 2 new zero-day vulnerabilities discovered |
| Average cost to analyse one contract using AI | $1.22 |
| Rate at which simulated exploit value is growing | Doubling every 1.3 months |
To put this in perspective: across the 2,849 newly deployed contracts, the AI spent only $3,476 in computing costs to find exploits worth $3,694 — a thin but positive margin. Real-world profitable autonomous exploitation is no longer theoretical; it is already technically feasible.
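The economics above can be checked with simple arithmetic. This back-of-the-envelope sketch uses only the SCONE-bench figures quoted in this article; it is an illustration, not part of Anthropic's methodology:

```python
# Back-of-the-envelope attacker economics from the SCONE-bench figures above.
contracts_scanned = 2849   # newly deployed contracts tested
compute_spent = 3476.0     # total AI compute cost, in USD
exploit_value = 3694.0     # simulated value of the exploits found, in USD

cost_per_contract = compute_spent / contracts_scanned
net_margin = (exploit_value - compute_spent) / compute_spent

print(f"cost per contract: ${cost_per_contract:.2f}")  # matches the $1.22 cited above
print(f"net margin: {net_margin:+.1%}")                # thin, but positive
```

The margin is only a few percent — but because scanning is automated, the attacker can run the same loop indefinitely at no marginal human effort.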
Why Are Old Smart Contracts the Primary Target?
Smart contracts written several years ago were compiled with older versions of languages such as Solidity. They were built before the current generation of AI-powered scanning tools existed. Many of them were audited once at launch, then left running indefinitely while holding significant funds.
These older contracts are exactly the kind of target that becomes significantly more attractive when AI can cheaply scan legacy codebases and surface vulnerabilities that were never found by manual review.
The recent 26 million dollar hack of DeFi protocol Truebit is being cited by security researchers as a likely example of this new threat. The contract that was attacked was compiled using Solidity version 0.6.10 and had been deployed more than five years before the exploit occurred. Urrutia noted that this is precisely the type of target profile that AI tools can identify and exploit most efficiently.
Gerrit Hall, co-founder of smart contract security platform Firepan and a veteran of five years building on DeFi exchange Curve Finance, put it plainly: “Offensive capacity is improving far faster than defensive tooling.”
How Has AI Changed the Economics of Hacking?
Before AI-powered scanning tools became available, finding a vulnerability in a smart contract required significant time and skill. Attackers would only invest that effort if the potential reward was large enough to justify it. A contract holding a few thousand dollars in value was simply not worth targeting manually.
That calculation has now completely changed. Because AI can automate the scanning process at very low cost, attackers can profitably target contracts at value thresholds that were previously not worth the effort.
Urrutia stated: “Attackers can profit at much lower value thresholds than defenders can justify for equivalent detection effort. That is enough to change attacker economics even without perfect attribution.”
This creates a structural imbalance. Defenders must secure every contract, find every vulnerability, and fix every issue before an attacker does. Attackers only need to find one working path to profit. AI tips this balance further in the attacker’s favour.
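To make the threshold argument concrete, here is a rough expected-value sketch using the per-contract scan cost and success rate cited earlier in this article. Note the hedge: the 63% figure was measured on contracts already known to have been exploited, so a real-world hit rate would be lower — this is purely illustrative:

```python
# Illustrative break-even calculation for an automated scan-and-exploit loop.
# Numbers come from the benchmark figures cited in this article; the model
# itself (expected payout vs. scan cost) is a simplification, not a real
# attack economics model.
scan_cost = 1.22     # USD of AI compute per contract analysed
success_rate = 0.63  # benchmark exploit success rate on vulnerable contracts

# Scanning a contract is rational whenever expected payout exceeds the cost:
#   success_rate * contract_balance > scan_cost
break_even_balance = scan_cost / success_rate
print(f"break-even contract balance: ${break_even_balance:.2f}")
```

Even if the real-world success rate were a hundred times worse, the break-even target balance would still be only a few hundred dollars — far below anything a human attacker would ever bother with.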
Also Read: What Is Blockchain Technology and How Does It Work?
Which Types of Smart Contracts Are Most at Risk?
Security researchers have identified several categories of contracts that face the highest immediate risk from AI-powered scanning tools. All DeFi ecosystems are exposed, including ERC-20 token contracts, decentralised exchange swap platforms, decentralised autonomous organisations, lending vaults, and liquidity pools.
The contracts at greatest risk share common characteristics. They were deployed two or more years ago. They were compiled with older Solidity versions. They have not been updated or redeployed since their original launch. They were audited once but never tested again after integrations or upgrades. And they continue to hold real funds on live networks.
A separate benchmark study conducted by AI security firm Cecuro evaluated 90 real-world DeFi contracts that were exploited between late 2024 and early 2026. Those contracts collectively represented 228 million dollars in exploit value. A purpose-built AI security agent detected vulnerabilities in 92% of those contracts — dramatically higher than the 34% detection rate achieved by a standard general-purpose AI coding agent running the same underlying model.
What Can DeFi Developers Do to Protect Themselves?
Security experts agree that a one-time audit at launch is no longer sufficient protection for any smart contract holding real value. The threat environment has changed too quickly, and the tools available to attackers have improved too fast.
Here is what blockchain developers and DeFi protocol teams are now advised to do:
- Move to continuous adversarial testing — Run exploit simulations the way engineering teams run load tests. Treat every upgrade, every new integration, and every change to permissions as a completely fresh attack surface that requires new testing.
- Migrate or sunset old contracts — Contracts compiled with Solidity versions older than 0.8 and deployed more than two years ago without updates should be evaluated for migration. If a contract cannot be upgraded, consider deprecating it and moving funds to a newly audited deployment.
- Add circuit breakers and rate limits — Build emergency pause mechanisms into contracts so that unusual outflows can be detected and stopped automatically before a full exploit drains the protocol.
- Use AI-powered defensive tools — The same AI technology that attackers are using can be deployed for continuous monitoring and real-time vulnerability detection. Proactive use of autonomous agents for auditing is now considered an essential layer of defence.
- Segment trust boundaries — Reduce the blast radius of any single exploit by limiting what one contract can call or access. Contracts should not have unlimited permissions over the entire protocol.
What Does This Mean for the Broader Blockchain Industry?
The DeFi industry currently holds tens of billions of dollars in total value locked across smart contracts on Ethereum, Binance Smart Chain, Base, and other networks. On-chain insurance capacity protecting that value remains measured in hundreds of millions, not tens of billions. The gap between what is at risk and what is protected is enormous.
Security experts predict that 2026 will see AI change the tempo of security on both sides simultaneously. Defenders will increasingly rely on AI-driven monitoring that operates at machine speed. Attackers will use the same class of tools for vulnerability research, exploit development, and automated scanning at scale.
The industry that wins this race will be the one that deploys defensive AI faster than attackers deploy offensive AI. Right now, the offensive side has a significant head start.
For smart contract development companies, blockchain security firms, and DeFi protocol teams, the message is clear. The standard of security that was acceptable in 2022 is not adequate in 2026. Every contract currently on a live network holding real funds needs to be re-evaluated against the threat that AI-powered tools now represent.
The Bottom Line
AI has fundamentally changed who can attack a smart contract, how quickly they can do it, and how little it costs them to try. The 26 million dollar Truebit hack is not an isolated incident. It is a preview of what becomes increasingly common as AI scanning tools grow more capable every few weeks.
Blockchain developers and DeFi teams cannot wait for regulation or industry standards to mandate better security practices. The threat is live now, the tools are real, and the cost of inaction is measured in millions of dollars. A single continuous adversarial testing process and a well-configured AI-powered defensive agent can meaningfully reduce the risk — but only if teams act before an attacker does.
Reviewed & Edited By

Aman Vaths
Founder of Nadcab Labs
Aman Vaths is the Founder & CTO of Nadcab Labs, a global digital engineering company delivering enterprise-grade solutions across AI, Web3, Blockchain, Big Data, Cloud, Cybersecurity, and Modern Application Development. With deep technical leadership and product innovation experience, Aman has positioned Nadcab Labs as one of the most advanced engineering companies driving the next era of intelligent, secure, and scalable software systems. Under his leadership, Nadcab Labs has built 2,000+ global projects across sectors including fintech, banking, healthcare, real estate, logistics, gaming, manufacturing, and next-generation DePIN networks. Aman’s strength lies in architecting high-performance systems, end-to-end platform engineering, and designing enterprise solutions that operate at global scale.