Malicious ML Models on Hugging Face Exploit Broken Pickle Format

Pratham Panchariya

Software Developer

February 11, 2025 · 2 min read

Cybersecurity researchers have discovered two malicious machine learning (ML) models on Hugging Face that use a "broken" pickle file technique to evade detection. The models, more of a proof-of-concept (PoC) than an active supply chain attack, contain a reverse shell payload that connects to a hard-coded IP address. The pickle serialization format, widely used for distributing ML models, is a known security risk because it can execute arbitrary code during loading and deserialization. The identified models, stored in the PyTorch format, were compressed with 7z instead of PyTorch's default ZIP, allowing them to bypass Hugging Face's security tool, Picklescan. The incident highlights the need for improved security measures in ML model distribution. Source: The Hacker News
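For readers unfamiliar with why pickle is dangerous: an object can define a `__reduce__` method that returns an arbitrary callable plus its arguments, and `pickle.loads` invokes that callable during deserialization. Here is a minimal, harmless sketch (the `Payload` class and the `echo` command are illustrative stand-ins for the reverse shell the researchers describe):

```python
import os
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild this object; pickle calls
    # the returned callable with the given arguments at load time.
    def __reduce__(self):
        # Harmless stand-in; a real attack would spawn a reverse shell.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))

data = pickle.dumps(Payload())
pickle.loads(data)  # the echo runs here, before any model weights load
```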

The Threat of Malicious ML Models

The approach used by these models, dubbed nullifAI, is a clear attempt to bypass existing safeguards designed to identify malicious models. The pickle files extracted from the PyTorch archives contain malicious Python content at the very beginning of the file: a typical platform-aware reverse shell. The discovery underscores the importance of robust security protocols in the ML community.
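"Platform-aware" here simply means the payload branches on the victim's operating system before choosing how to connect back. A defanged sketch of that branching follows; no shell is opened, and the print statements stand in for the real commands:

```python
import platform

# Defanged illustration of a platform-aware payload: it selects an
# OS-appropriate shell but only prints what it would have run.
if platform.system() == "Windows":
    print("would connect back to the hard-coded IP via cmd.exe")
else:
    print("would connect back to the hard-coded IP via /bin/sh")
```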

The Role of Pickle Files in Security Risks

The pickle serialization format has long been a point of concern because of its ability to execute arbitrary code. The two models detected by ReversingLabs are stored in PyTorch's format, which is essentially a pickle file inside a compressed archive, normally a ZIP. These models were instead compressed with 7z, enabling them to avoid detection by Picklescan.
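A rough sketch of why a ZIP-oriented scanner misses a 7z container (the `can_inspect` helper is hypothetical and is not Picklescan's actual code):

```python
import zipfile

SEVEN_ZIP_MAGIC = b"7z\xbc\xaf\x27\x1c"  # first six bytes of a 7z archive

def can_inspect(model_path: str) -> bool:
    """Hypothetical check: can ZIP-based tooling open this model file?"""
    if zipfile.is_zipfile(model_path):
        return True  # default torch.save() output is a ZIP archive
    with open(model_path, "rb") as f:
        if f.read(6) == SEVEN_ZIP_MAGIC:
            # 7z container: zipfile cannot open it, so the embedded
            # pickle never reaches the scanner's opcode checks.
            return False
    return False
```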

Implications and Mitigation

The fact that these models could still be partially deserialized even though Picklescan threw an error indicates a discrepancy between the tool's validation and the way pickle deserialization actually proceeds (see the sketch below); the open-source utility has since been updated to address this bug. It's crucial for the ML community to stay vigilant and continuously update its security measures to counter such threats.

This news serves as a reminder for cybersecurity marketers and professionals to stay informed about the latest threats and to implement stringent security measures against evolving cyber risks. GrackerAI, as an AI tool for cybersecurity marketers, plays a crucial role in providing these insights and helping to create a safer online environment.
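Why can a payload execute even when deserialization fails? Pickle's virtual machine runs opcodes as it streams through the file, so malicious content placed before the corruption executes before `pickle.loads` raises. A self-contained sketch, again with a harmless echo standing in for the reverse shell:

```python
import os
import pickle

class Payload:
    def __reduce__(self):
        return (os.system, ("echo 'payload ran before the stream broke'",))

stream = pickle.dumps(Payload())
# "Break" the pickle by replacing the final STOP opcode with an invalid
# byte: loading now fails, but only *after* the payload's REDUCE opcode
# has already executed.
broken = stream[:-1] + b"\xff"

try:
    pickle.loads(broken)
except Exception as e:
    print("deserialization error:", e)  # the echo has already printed
```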

Pratham Panchariya

Software Developer

Backend engineer powering GrackerAI's real-time content generation that produces 100+ optimized pages daily. Builds the programmatic systems that help cybersecurity companies own entire search categories.
