Gate News, March 18 — SlowMist and a certain CEX jointly released an AI Agent Security Report. The report points out that as AI Agents take on tasks such as market analysis, strategy generation, and automated trading within the Web3 ecosystem, their attack surface is expanding. The report systematically outlines seven major security threats: prompt injection attacks that can manipulate the Agent’s decision logic; supply chain poisoning risks in the Skills/plugin ecosystem, with SlowMist discovering over 400 malicious Skill samples in the OpenClaw plugin center ClawHub, showing characteristics of group-based mass attacks; task orchestration layers that can be tampered with by altering key parameters, leading to abnormal execution; sensitive information in IDE/CLI environments that could be leaked by malicious plugins; model hallucinations that could cause irreversible financial losses during on-chain operations; the irreversible nature of high-value Web3 transactions amplifies automation risks; and high-permission executions that could lead to system-level risks. The security team from the CEX offers practical protection suggestions, including enabling Passkey passwordless login and two-factor authentication, following the principle of least privilege when configuring API keys and binding IP whitelists, using sub-account isolation mechanisms to limit potential losses, establishing continuous trading monitoring and anomaly detection systems, and only installing Skills that have been officially reviewed. SlowMist also proposes a five-layer security governance framework from L1 to L5, covering the entire protection system from development baseline, permission convergence, threat perception, on-chain risk analysis, to ongoing inspection.