Certora Chief Warns of AI Risks After MoonwellDeFi Exploit

Mooly Sagiv, Chief Scientist at Certora, cautioned that code generated by large language models may conceal major vulnerabilities, following the first AI-linked DeFi security breach.

Summary

Mooly Sagiv, Chief Scientist at blockchain security company Certora, warned that code written by large language models (LLMs) could create severe vulnerabilities by "quietly skipping the hard part." His comments follow the recent MoonwellDeFi incident, described as the first DeFi exploit connected to AI-generated code. The case has sparked debate over the safety of using artificial intelligence tools in decentralized finance development, highlighting how flaws in smart contracts can go unnoticed until they are exploited.

Terms & Concepts
  • DeFi (Decentralized Finance): A blockchain-based financial system offering services like lending and trading without intermediaries.
  • LLM (Large Language Model): An advanced artificial intelligence model trained on massive text data, capable of generating or completing code and natural language.
  • Smart contract: Self-executing blockchain code that automatically enforces terms of an agreement without intermediaries.
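Sagiv's phrase "quietly skipping the hard part" can be made concrete with a toy sketch. The following Python model is a deliberate simplification: real smart contracts are written in languages such as Solidity, and none of these names or functions come from the actual Moonwell code. It shows how a generated function that omits a single balance check still runs without errors while silently removing the safety property it was supposed to enforce:

```python
# Toy vault model illustrating how one skipped check silently breaks safety.
# All names are hypothetical; this is not the Moonwell contract.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw_safe(self, user, amount):
        # The "hard part": a user may only withdraw what they deposited.
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[user] -= amount
        return amount

    def withdraw_unsafe(self, user, amount):
        # Same function with the balance check quietly skipped:
        # any caller can drain funds and drive balances negative.
        self.balances[user] = self.balances.get(user, 0) - amount
        return amount


vault = Vault()
vault.deposit("alice", 100)
vault.withdraw_unsafe("mallory", 1_000)  # succeeds, no error raised
print(vault.balances["mallory"])         # negative balance: the flaw is silent
```

The unsafe version behaves identically to the safe one on honest inputs, which is exactly why such an omission can pass casual review and testing.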