PrunaAI / FLUX.1-schnell-4bit
Files and versions
FLUX.1-schnell-4bit at 61103bc · 1 contributor · History: 3 commits
Latest commit: johnrachwanpruna · 1544210bbae385eef895bf48728a491ba51ba8dabde9ea824d994b0609912c54 · 61103bc (verified) · 4 months ago
.gitattributes · Safe · 1.52 kB · initial commit · 4 months ago
text_encoder_2.pt · pickle
Detected Pickle imports (30): "optimum.quanto.nn.qlinear.QLinear", "transformers.models.t5.modeling_t5.T5LayerNorm", "torch._utils._rebuild_tensor_v3", "torch._utils._rebuild_parameter", "torch._utils._rebuild_wrapper_subclass", "torch.storage.UntypedStorage", "torch.nn.modules.sparse.Embedding", "transformers.models.t5.modeling_t5.T5DenseGatedActDense", "torch.nn.modules.container.ModuleList", "transformers.models.t5.modeling_t5.T5EncoderModel", "torch.BFloat16Storage", "transformers.models.t5.modeling_t5.T5Block", "collections.OrderedDict", "transformers.models.t5.modeling_t5.T5Stack", "torch.FloatStorage", "transformers.models.t5.modeling_t5.T5LayerSelfAttention", "torch.nn.modules.dropout.Dropout", "transformers.models.t5.modeling_t5.T5LayerFF", "optimum.quanto.tensor.qtype.qtype", "torch.serialization._get_layout", "__builtin__.set", "torch.bfloat16", "transformers.models.t5.modeling_t5.T5Attention", "optimum.quanto.tensor.qbytes.QBytesTensor", "transformers.activations.NewGELUActivation", "transformers.models.t5.configuration_t5.T5Config", "torch.float8_e4m3fn", "torch._tensor._rebuild_from_type_v2", "torch.device", "torch._utils._rebuild_tensor_v2"
4.9 GB · LFS · 7779712269beec551a9cdd6c4ec2c3ee2353a191587516df2c47dbcba966937e · 4 months ago
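Taken together, the imports above indicate that text_encoder_2.pt is a whole pickled T5 text encoder whose linear layers have been replaced by optimum.quanto QLinear modules, rather than a plain state dict. A minimal loading sketch, assuming the file was written with torch.save and that torch, transformers, and optimum-quanto are installed; the local path is illustrative:

```python
import torch

# Unpickling has to resolve the classes listed above (T5EncoderModel, QLinear, ...),
# so transformers and optimum-quanto must be importable in this environment.
# weights_only=False is required because the file stores whole modules, not just tensors;
# only do this for files you trust.
text_encoder_2 = torch.load("text_encoder_2.pt", weights_only=False, map_location="cpu")
print(type(text_encoder_2))  # expected: a transformers T5EncoderModel with quantized layers
```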
transformer.pt · pickle
Detected Pickle imports (43): "diffusers.models.activations.GELU", "optimum.quanto.tensor.qtype.qtype", "diffusers.models.embeddings.CombinedTimestepTextProjEmbeddings", "diffusers.models.normalization.AdaLayerNormZeroSingle", "torch.nn.modules.normalization.LayerNorm", "diffusers.models.transformers.transformer_flux.FluxTransformerBlock", "diffusers.models.normalization.AdaLayerNormZero", "torch._utils._rebuild_tensor_v2", "diffusers.models.embeddings.Timesteps", "torch.FloatStorage", "diffusers.models.attention.FeedForward", "torch.serialization._get_layout", "torch.nn.modules.dropout.Dropout", "diffusers.models.attention_processor.FluxSingleAttnProcessor2_0", "torch.nn.modules.activation.SiLU", "diffusers.models.attention_processor.FluxAttnProcessor2_0", "diffusers.models.attention_processor.Attention", "torch.uint8", "diffusers.models.transformers.transformer_flux.EmbedND", "torch.IntStorage", "torch.device", "diffusers.configuration_utils.FrozenDict", "__builtin__.set", "diffusers.models.transformers.transformer_flux.FluxSingleTransformerBlock", "torch._utils._rebuild_parameter", "optimum.quanto.nn.qlinear.QLinear", "torch.Size", "diffusers.models.transformers.transformer_flux.FluxTransformer2DModel", "diffusers.models.normalization.AdaLayerNormContinuous", "optimum.quanto.tensor.qbits.tinygemm.packed.TinyGemmPackedTensor", "diffusers.models.embeddings.PixArtAlphaTextProjection", "diffusers.models.normalization.RMSNorm", "torch.bfloat16", "optimum.quanto.tensor.qbits.tinygemm.qbits.TinyGemmQBitsTensor", "torch.nn.modules.container.ModuleList", "diffusers.models.embeddings.TimestepEmbedding", "torch.BFloat16Storage", "torch.int8", "torch.nn.modules.linear.Linear", "collections.OrderedDict", "torch._utils._rebuild_wrapper_subclass", "torch._tensor._rebuild_from_type_v2", "torch.nn.modules.activation.GELU"
6.34 GB · LFS · 1544210bbae385eef895bf48728a491ba51ba8dabde9ea824d994b0609912c54 · 4 months ago
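The imports above likewise suggest that transformer.pt pickles a complete diffusers FluxTransformer2DModel whose weights are stored as TinyGemm-packed 4-bit quantized tensors from optimum.quanto. A hedged sketch of loading both files and wiring them into a FluxPipeline; the base repository id, device placement, and generation settings are assumptions, not something stated on this page:

```python
import torch
from diffusers import FluxPipeline

# Both .pt files pickle entire modules, so diffusers, transformers, and
# optimum-quanto must be installed for unpickling to succeed.
# Only load pickled files from sources you trust.
transformer = torch.load("transformer.pt", weights_only=False)
text_encoder_2 = torch.load("text_encoder_2.pt", weights_only=False)

# Assumption: the remaining components (VAE, CLIP text encoder, tokenizers,
# scheduler) come from the original FLUX.1-schnell repository.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.bfloat16,
).to("cuda")

# FLUX.1-schnell is typically run with few steps and no classifier-free guidance.
image = pipe(
    "a photo of a red fox in the snow",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("fox.png")
```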