SlowMist Founder Warns of Prompt Injection Risks in AI Tools

Yu Xian highlights confirmed cases of prompt injection in agents md, skills md, and mcp, urging users to disable dangerous mode to prevent unauthorized computer control.

Summary

Yu Xian, founder of blockchain security firm SlowMist, warned of confirmed prompt injection attacks affecting AI tools such as agents md, skills md, and mcp. He explained that enabling dangerous mode grants these tools unrestricted control over a computer without user prompts, while disabling it requires manual confirmation for each action. The advisory emphasizes caution when configuring AI applications to avoid security risks.

Terms & Concepts
  • Prompt Injection: A cybersecurity attack that manipulates AI models through crafted inputs to execute unintended actions.
  • Dangerous Mode: A configuration setting that grants AI tools full control over a user’s computer without requiring confirmation for each action.