
Security & Compliance

In addition to the enterprise-grade features available in Microsoft Azure services, the following security measures and requirements are enforced to safeguard the deployment and use of open models on Azure:

Model Eligibility Requirements

Only models that meet strict security criteria are included in the Hugging Face collection on Azure; a sketch of how these checks can be approximated follows the list:

  • Public availability: Models must be public on the Hugging Face Hub; gated or private models are currently not eligible.
  • No trust_remote_code: Models that require trust_remote_code=True are disallowed unless they are explicitly verified by Hugging Face or come from a trusted/verified organization.
  • Secure format: Model weights must be uploaded in the Safetensors format to eliminate the risks associated with pickle-based formats.
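
As a rough illustration, the sketch below uses the `huggingface_hub` Python client to approximate these checks for an arbitrary Hub model. The exact validation Hugging Face performs for the Azure collection is not public, so the helper name (`looks_eligible`) and its heuristics, in particular treating bundled `*.py` files as a proxy for `trust_remote_code`, are assumptions for illustration only.

```python
from huggingface_hub import HfApi


def looks_eligible(model_id: str) -> bool:
    """Hypothetical helper approximating the eligibility criteria above."""
    api = HfApi()
    info = api.model_info(model_id, files_metadata=False)

    # Public availability: gated or private models are not eligible.
    if info.private or info.gated:
        return False

    filenames = [sibling.rfilename for sibling in (info.siblings or [])]

    # Secure format: at least one weight file must be in Safetensors.
    if not any(name.endswith(".safetensors") for name in filenames):
        return False

    # No trust_remote_code: repos that ship custom modeling code usually
    # contain extra *.py files; flag those for manual review instead.
    if any(name.endswith(".py") for name in filenames):
        return False

    return True


print(looks_eligible("Qwen/Qwen2.5-7B-Instruct"))  # example public model id
```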

Mandatory Security Scanning

All models made available through the Hugging Face collection on Azure undergo a set of security scans, including ClamAV malware scanning and third-party scanners from Protect AI and JFrog.

These checks help identify embedded malware or harmful binaries, unsafe deserialization, unintended external connections, and other security-sensitive content in model artifacts before they are imported into a customer’s tenant.

For more details on the Hugging Face Hub’s security practices and tooling, refer to the Hub security documentation.
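
The scans themselves run on Hugging Face’s and its partners’ infrastructure before a model reaches your tenant, but as an illustration of the kind of malware check involved, here is a minimal sketch that downloads a model snapshot and runs ClamAV over it locally. It assumes `clamscan` is installed and on the `PATH`; the model id is a placeholder.

```python
import subprocess

from huggingface_hub import snapshot_download

# Download the model artifacts (weights, configs, tokenizer files) locally.
local_dir = snapshot_download("Qwen/Qwen2.5-7B-Instruct")  # placeholder model id

# Recursively scan the downloaded files with ClamAV, printing only infected files.
result = subprocess.run(
    ["clamscan", "--recursive", "--infected", local_dir],
    capture_output=True,
    text=True,
)

print(result.stdout)
# clamscan exits with 0 when no threats are found and 1 when a file is infected.
if result.returncode != 0:
    raise RuntimeError("ClamAV flagged one or more files in the model snapshot")
```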

Network Isolation and Compliance

For enhanced protection and compliance, model hosting and serving can be configured to run in isolated compute environments on Azure AI services, in line with regulatory or internal policy requirements. Azure AI Foundry and Azure ML come with enterprise-grade auditing, logging, and access control frameworks that ensure full traceability and governance.
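
As one possible configuration, the sketch below uses the Azure ML Python SDK (v2, `azure-ai-ml`) to create a managed online endpoint with public network access disabled and deployment egress restricted. The subscription, resource group, workspace, endpoint, and model names are placeholders, and your own isolation and governance requirements may call for different settings.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

# Placeholder identifiers: replace with your own subscription, resource group,
# and workspace (or Azure AI Foundry project) names.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Endpoint reachable only through your virtual network / private endpoints.
endpoint = ManagedOnlineEndpoint(
    name="hf-secure-endpoint",
    auth_mode="key",
    public_network_access="disabled",
)

# Deployment with outbound (egress) traffic to the public internet disabled.
deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name=endpoint.name,
    model="azureml:<registered-model-name>:<version>",  # placeholder registered model
    instance_type="Standard_NC24ads_A100_v4",
    instance_count=1,
    egress_public_network_access="disabled",
)

ml_client.online_endpoints.begin_create_or_update(endpoint).result()
ml_client.online_deployments.begin_create_or_update(deployment).result()
```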
