What is safetensors?

safetensors is a different format from the classic .bin format used by PyTorch, which relies on pickle under the hood. It contains the exact same data: just the model weights (or tensors).

Pickle is notoriously unsafe: a malicious file can execute arbitrary code when it is loaded. The Hub itself tries to prevent issues from it, but it’s not a silver bullet.

safetensors’ first and foremost goal is to make loading machine learning models safe, in the sense that no takeover of your computer can happen.

Hence the name.
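To make that concrete, here is a minimal sketch of writing and reading raw tensors with the safetensors library (assuming safetensors and PyTorch are installed; the file name is just an example):

import torch
from safetensors.torch import save_file, load_file

# A checkpoint is just a mapping from names to tensors; that is all
# a .safetensors file stores, with no pickled Python objects.
weights = {"embedding.weight": torch.zeros((1024, 768))}

save_file(weights, "model.safetensors")

# Loading only parses tensor data, so it cannot execute arbitrary code.
loaded = load_file("model.safetensors")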

Why use safetensors?

Safety can be one reason: you might be trying out a model that isn’t well known, and you’re not sure about the source of the file.

A secondary reason is loading speed. safetensors can load models much faster than regular pickle files. If you spend a lot of time switching models, this can be a huge time saver.

Numbers taken on an AMD EPYC 7742 64-Core Processor:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")

# Loaded in safetensors 0:00:02.033658
# Loaded in PyTorch 0:00:02.663379

This is the entire pipeline loading time; the actual time to load 500MB of weights is:

Safetensors: 3.4873ms
PyTorch: 172.7537ms
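For reference, a measurement like the one above can be reproduced roughly as follows (a hypothetical sketch; the file names are placeholders and your numbers will differ):

import datetime

import torch
from safetensors.torch import load_file

# Time loading the safetensors weights.
start = datetime.datetime.now()
weights = load_file("model.safetensors")
print(f"Loaded in safetensors {datetime.datetime.now() - start}")

# Time loading the equivalent pickled PyTorch weights.
start = datetime.datetime.now()
weights = torch.load("model.bin", map_location="cpu")
print(f"Loaded in PyTorch {datetime.datetime.now() - start}")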

Performance in general is a tricky business: these numbers depend on your hardware and on whether the file is already in the operating system’s cache, so take them as indicative rather than absolute.

How to use safetensors?

If you have safetensors installed and all the weights are available in safetensors format, then by default diffusers will use those instead of the PyTorch weights.
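For example, the call below needs no extra argument when safetensors is installed (a minimal sketch; the explicit use_safetensors keyword shown in the comment is an assumption that only applies to recent diffusers versions):

from diffusers import StableDiffusionPipeline

# With the safetensors library installed and .safetensors weights on the Hub,
# the pipeline picks them up automatically.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")

# On recent diffusers versions you can also request them explicitly
# (assumption: the use_safetensors keyword exists in your version).
# pipe = StableDiffusionPipeline.from_pretrained(
#     "stabilityai/stable-diffusion-2-1", use_safetensors=True
# )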

If you are really paranoid about this, the ultimate weapon would be disabling torch.load:

import torch


def _raise():
    raise RuntimeError("I don't want to use pickle")


# Any subsequent call to torch.load will raise instead of unpickling.
torch.load = lambda *args, **kwargs: _raise()
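With that patch in place, any code path that falls back to pickled weights fails loudly instead of silently loading them (continuing the snippet above):

try:
    torch.load("model.bin")  # any pickled checkpoint; the name is just an example
except RuntimeError as err:
    print(err)  # prints: I don't want to use pickle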

I want to use model X but it doesn't have safetensors weights.

Just go to this space. This will create a new PR with the weights, let’s say refs/pr/22.

The space will download the pickled version, convert it, and upload the result to the Hub as a PR. If anything bad is contained in the file, it’s the Hugging Face Hub infrastructure that has to deal with it, not your own computer, and we’re equipped to handle that.

Then, in order to use the model even before the branch gets accepted by the original author, you can do:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")

or you can test it directly online with this space.

And that’s it!

Anything unclear, concerns, or found a bug? Open an issue.