Mooly Sagiv, Chief Scientist at Certora, cautioned that code generated by large language models may conceal major vulnerabilities, following what has been described as the first AI-linked DeFi exploit.
Mooly Sagiv, Chief Scientist at blockchain security company Certora, warned that code written by large language models (LLMs) can introduce severe vulnerabilities by "quietly skipping the hard part." His comments follow the recent MoonwellDeFi incident, described as the first DeFi exploit connected to AI-generated code. The case has sparked debate over the safety of using AI tools in decentralized finance development, highlighting how flaws in smart contracts can go unnoticed until they are exploited.