The nationwide "shrimp farming" craze is sweeping the internet, yet the banking industry is collectively sitting it out; experts say OpenClaw's high system permissions are inherently at odds with financial compliance requirements.
Financial Daily Reporter: Li Yuwen | Financial Daily Editor: Zhang Yiming
Recently, open-source AI (artificial intelligence) agents like OpenClaw (also known as “Lobster”) have become a viral sensation, attracting widespread industry attention. The banking sector, however, remains broadly cautious about this “shrimp farming” trend. The head office of a joint-stock bank told Financial Daily that it has recently received risk alerts from regulators regarding “Lobster.”
Yet even before OpenClaw’s popularity surged, the banking industry was already exploring and applying intelligent agents, with many banks actively promoting their use in frontline operations to enhance efficiency.
As risk-focused institutions, how can banks balance innovation and compliance in the face of the AI wave?
Multiple Banks Take a Cautious View of the “Shrimp Farming” Trend
OpenClaw, named for its icon resembling a red lobster, is also called “Lobster.” The process of installing and deploying it is colloquially referred to as “shrimp farming.” Unlike purely conversational AI like ChatGPT, OpenClaw integrates communication software and large language models, enabling it to autonomously perform complex tasks such as file management, email handling, and data processing on users’ local computers. It acts like a “digital employee” working on behalf of users, which has attracted many to experiment with its practical applications.
As OpenClaw continues to gain popularity, security concerns are increasingly in the public eye. Recently, the Ministry of Industry and Information Technology and the National Internet Emergency Center issued risk alerts, warning users to exercise caution due to potential security risks associated with OpenClaw.
Amid this “shrimp farming” craze, the banking industry remains notably “calm.” Recently, a joint-stock bank’s head office received a risk alert from regulators about “Lobster.” An executive from a state-owned bank also told Financial Daily that the bank has not yet deployed OpenClaw or arranged any related training.
Why are banks cautious about OpenClaw?
“Unlike conversational AI, OpenClaw as an agent needs access to local files, external APIs, and even system-level permissions. This ‘end-to-end’ automation mechanism can easily trigger cyberattacks and lead to leakage of core transaction data, which conflicts with banks’ strict regulatory standards and zero-tolerance policies,” said Wang Peng, Associate Research Fellow at Beijing Academy of Social Sciences, in an interview with Financial Daily on March 16.
Gao Chengfei, General Manager of the IP Business Unit at Zhiyuan Marketing Consulting, shared a similar view: “OpenClaw’s high system permissions are inherently at odds with financial compliance requirements.”
Gao explained that OpenClaw defaults to high-level permissions such as local file access and API calls. While this can improve office efficiency, multiple medium- and high-risk vulnerabilities have been publicly disclosed, and its plugin functions lack effective security review mechanisms, posing the risk of malicious exploitation to steal online banking passwords, payment keys, and other sensitive information. More critically, in financial scenarios its autonomous execution capabilities could lead to errors such as unauthorized fund transfers or unintended purchases of investment products. Since AI technology still lacks full explainability, responsibility for automated actions is difficult to determine. Additionally, data generated during agent operation might be transmitted to third parties, raising compliance risks when sensitive information such as credit data and loan approval materials is involved.
Therefore, Gao believes that in the short term, OpenClaw is more suitable for small-scale pilots in non-core business scenarios. Large-scale deployment should wait until key issues such as security, clear responsibilities, and explainable algorithms are resolved.
Wang Peng suggests that banks will not directly adopt open-source OpenClaw but will instead incorporate its technological approach. Future implementations are likely to take the form of “private deployment in restricted environments”: within the bank’s internal network, using self-developed or customized solutions to apply agents in non-core, low-sensitivity scenarios such as office automation and risk control support.
Banks Are Already Exploring Agent Applications
It is worth noting that even before OpenClaw’s rise, the banking industry was already exploring and applying intelligent agents. Many banks are actively promoting agent-enabled frontline services to improve operational efficiency.
For example, Nanjing Bank has partnered with Volcano Engine to explore large-scale deployment of intelligent agents in financial scenarios. They have launched a one-stop intelligent agent workstation called HiAgent, which has already deployed over 20 high-quality agents. These are deeply integrated into key areas such as office work, operations, business development, and risk management.
How effective are these implementations? For instance, corporate relationship managers often spend significant time gathering pre-visit information across multiple systems and platforms. A “One-Page” pre-visit intelligent agent can automatically aggregate data from internal and external sources, perform cleaning, fusion, and quality checks, and generate a comprehensive, accurate pre-visit report within five minutes, cutting preparation time from roughly two hours to just minutes. The tool has become essential during peak marketing periods.
KPMG’s recent “2026 Outlook for China’s Banking Industry” report notes that its analysis of public tender information and case studies shows a rising trend in banks’ large-model projects from January to November 2025, with a small peak in August. In terms of project content, early efforts (January-June) focused on knowledge Q&A with only sporadic applications. Starting in July, agent applications surged, especially in October and November, when all tendered projects were agent-related.
So, how should banks balance innovation and compliance when exploring agent applications?
On March 16, Fu Yifu, a special researcher at Su Commercial Bank, told Financial Daily that when promoting agent-enabled frontline services, banks need to innovate management mechanisms, test new technologies in controlled environments, and ensure risks are measurable and controllable. They should strengthen data privacy protections and algorithm audits, follow the “least privilege” principle to avoid excessive customer data collection, and maintain close communication with regulators. Participating in industry standard-setting can help identify compliance boundaries early. Additionally, banks should establish manual review processes for key decisions made by agents to prevent automation errors. Embedding compliance requirements throughout the R&D process and cultivating multidisciplinary talent will help banks safely unlock the innovative potential of intelligent agents.