Malicious ML Models on Hugging Face Exploit Broken Pickle Format

Nikita Shekhawat

Marketing Analyst

February 11, 2025 2 min read

Cybersecurity researchers have discovered two malicious machine learning (ML) models on Hugging Face that use a "broken" pickle file technique to evade detection. The models, which appear to be more of a proof-of-concept (PoC) than an active supply chain attack, contain a reverse shell payload that connects to a hard-coded IP address. The pickle serialization format, widely used for distributing ML models, has long been flagged as a security risk because it can execute arbitrary code during deserialization. The identified models, stored in the PyTorch format, were compressed with 7z instead of PyTorch's default ZIP container, allowing them to bypass Hugging Face's security tool, Picklescan. The discovery highlights the need for improved security measures in ML model distribution. Source: The Hacker News
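To make the risk concrete, here is a minimal, deliberately harmless sketch of why loading an untrusted pickle is dangerous: the `__reduce__` hook lets a serialized object name any callable to run during deserialization (a real payload would spawn a shell rather than call `eval`).

```python
import pickle

class EvilModel:
    # pickle calls __reduce__ to learn how to rebuild this object;
    # whatever callable it returns is executed during unpickling.
    def __reduce__(self):
        # a real attack would call os.system or open a reverse shell here
        return (eval, ("6 * 7",))

payload = pickle.dumps(EvilModel())
result = pickle.loads(payload)  # runs eval("6 * 7") as a side effect
print(result)  # → 42
```

This is why `pickle.loads` on an untrusted model file is equivalent to running untrusted code.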

The Threat of Malicious ML Models

The approach used by these models, dubbed nullifAI, is a clear attempt to bypass existing safeguards designed to identify malicious models. The pickle files extracted from the PyTorch archives contain malicious Python content at the beginning of the file: a typical platform-aware reverse shell. This discovery underscores the importance of robust security protocols in the ML community.

The Role of Pickle Files in Security Risks

The pickle serialization format has long been a point of concern because of its ability to execute arbitrary code. The two models detected by ReversingLabs are stored as pickle files inside compressed PyTorch archives, which are normally ZIP containers. These models, however, were compressed with 7z, a format Picklescan does not handle, enabling them to avoid detection.
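One way such a blind spot can arise: a scanner that assumes the ZIP container never looks inside a 7z archive at all. A minimal sketch (not Picklescan's actual logic) of telling the two container formats apart by their leading magic bytes:

```python
# Magic bytes that begin each container format
ZIP_MAGIC = b"PK\x03\x04"
SEVENZ_MAGIC = b"7z\xbc\xaf\x27\x1c"

def archive_kind(data: bytes) -> str:
    """Classify an archive by its leading magic bytes."""
    if data.startswith(ZIP_MAGIC):
        return "zip"
    if data.startswith(SEVENZ_MAGIC):
        return "7z"
    return "unknown"

print(archive_kind(b"PK\x03\x04..."))  # → zip
```

A scanner that returns early on "unknown" (or "7z") without unpacking the archive will never see the pickle inside it.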

Implications and Mitigation

The fact that these models could still be partially deserialized even though Picklescan threw an error indicates a discrepancy between the tool's scanning logic and the actual deserialization process. The open-source utility has since been updated to address the bug. It's crucial for the ML community to stay vigilant and continuously update its security measures to counter such threats.

This news serves as a reminder for cybersecurity marketers and professionals to stay informed about the latest threats and to implement stringent security measures against evolving cyber risks. GrackerAI, as an AI tool for cybersecurity marketers, plays a crucial role in providing these insights and helping to create a safer online environment.
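As a hedged illustration of the underlying idea (not Picklescan's actual implementation), a scanner can statically walk a pickle stream's opcodes with Python's standard `pickletools` module, without executing anything, and still report dangerous global references seen before a truncated or "broken" stream raises an error:

```python
import pickle
import pickletools

SUSPICIOUS = {"os.system", "posix.system", "builtins.eval", "builtins.exec"}

def scan_pickle(data: bytes) -> list[str]:
    """List suspicious module.name references without deserializing."""
    hits, strings = [], []
    try:
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(arg)           # STACK_GLOBAL consumes these
            elif opcode.name == "GLOBAL":     # older protocols: "module name"
                hits.append(arg.replace(" ", "."))
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                hits.append(f"{strings[-2]}.{strings[-1]}")
    except Exception:
        pass  # a malformed tail does not erase opcodes already seen
    return [h for h in hits if h in SUSPICIOUS]

print(scan_pickle(pickle.dumps(eval)))  # → ['builtins.eval']
```

The `except` clause is the key point: even when the stream is deliberately broken and parsing fails partway through, everything decoded before the failure is still reportable, which is exactly the gap the attackers exploited in the opposite direction.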

Nikita Shekhawat

Marketing Analyst

Data analyst who identifies the high-opportunity keywords and content gaps that fuel GrackerAI's portal strategy. Transforms search data into actionable insights that drive 10x lead generation growth.
