Latest commit: Update app.py · ea12179 · verified
Files (name · size · last commit message):

detection/          -            test
recognization/      -            Update recognization/dataset.py
samples/            -            test
(unnamed file)      1.52 kB      initial commit
(unnamed file)      3.92 kB      test
(unnamed file)      310 Bytes    initial commit
(unnamed file)      1.32 kB      Update app.py
detection.pt
Detected Pickle imports (29)
- "torch.Size",
- "ultralytics.nn.modules.block.C2PSA",
- "torch.nn.modules.linear.Identity",
- "torch.nn.modules.activation.SiLU",
- "ultralytics.nn.modules.conv.DWConv",
- "ultralytics.nn.modules.block.Bottleneck",
- "ultralytics.nn.modules.block.DFL",
- "ultralytics.nn.modules.block.PSABlock",
- "torch.nn.modules.conv.Conv2d",
- "torch.nn.modules.pooling.MaxPool2d",
- "ultralytics.nn.modules.conv.Concat",
- "ultralytics.nn.modules.block.Attention",
- "ultralytics.nn.modules.block.SPPF",
- "torch.LongStorage",
- "ultralytics.nn.modules.block.C3k",
- "ultralytics.nn.modules.head.Detect",
- "ultralytics.nn.modules.conv.Conv",
- "__builtin__.set",
- "torch.FloatStorage",
- "ultralytics.nn.modules.block.C3k2",
- "torch.nn.modules.container.ModuleList",
- "ultralytics.nn.tasks.DetectionModel",
- "torch._utils._rebuild_parameter",
- "torch.nn.modules.batchnorm.BatchNorm2d",
- "torch.nn.modules.container.Sequential",
- "torch._utils._rebuild_tensor_v2",
- "collections.OrderedDict",
- "torch.nn.modules.upsampling.Upsample",
- "torch.HalfStorage"
40.5 MB · add models
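The scan above flags these `.pt` checkpoints because they are pickle archives: unpickling resolves and can call arbitrary globals, so loading an untrusted file can execute code. The usual mitigation (suggested in the Python `pickle` documentation) is an `Unpickler` whose `find_class` only resolves an explicit allowlist of imports. A minimal stdlib sketch, not how torch/ultralytics actually load these files, and with an illustrative one-entry allowlist rather than the full list reported by the scan:

```python
import io
import pickle
from collections import OrderedDict

# Globals the unpickler may resolve. A real loader for the checkpoints
# above would need every (module, name) pair from the "Detected Pickle
# imports" list; one entry is shown for illustration.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse to resolve any global that is not on the allowlist."""
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked pickle import: {module}.{name}")

def safe_loads(data: bytes):
    return AllowlistUnpickler(io.BytesIO(data)).load()

# Allowed globals round-trip normally...
print(safe_loads(pickle.dumps(OrderedDict(a=1))))

# ...while a payload that tries to invoke an arbitrary function is
# rejected instead of executed.
class Payload:
    def __reduce__(self):  # unpickling would call print("owned")
        return (print, ("owned",))

try:
    safe_loads(pickle.dumps(Payload()))
except pickle.UnpicklingError as exc:
    print(exc)
```

In practice many of these YOLO checkpoints are loaded through framework APIs rather than raw `pickle`, but the allowlist idea is the same: only the class names the scan reports should ever be resolvable.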
recognization_id.pt
Detected Pickle imports (36)
- "torch.Size",
- "ultralytics.nn.modules.block.C2PSA",
- "torch.nn.modules.linear.Identity",
- "torch.nn.modules.activation.SiLU",
- "torch.nn.modules.loss.BCEWithLogitsLoss",
- "ultralytics.nn.modules.conv.DWConv",
- "ultralytics.nn.modules.block.Bottleneck",
- "ultralytics.nn.modules.block.DFL",
- "ultralytics.utils.IterableSimpleNamespace",
- "ultralytics.nn.modules.block.PSABlock",
- "torch.nn.modules.conv.Conv2d",
- "ultralytics.utils.tal.TaskAlignedAssigner",
- "torch.nn.modules.pooling.MaxPool2d",
- "ultralytics.nn.modules.conv.Concat",
- "ultralytics.nn.modules.block.Attention",
- "ultralytics.utils.loss.v8DetectionLoss",
- "ultralytics.nn.modules.block.SPPF",
- "torch.LongStorage",
- "ultralytics.nn.modules.block.C3k",
- "ultralytics.nn.modules.head.Detect",
- "ultralytics.nn.modules.conv.Conv",
- "__builtin__.set",
- "torch.FloatStorage",
- "torch.device",
- "ultralytics.utils.loss.BboxLoss",
- "ultralytics.utils.loss.DFLoss",
- "ultralytics.nn.modules.block.C3k2",
- "torch.nn.modules.container.ModuleList",
- "ultralytics.nn.tasks.DetectionModel",
- "torch._utils._rebuild_parameter",
- "torch.nn.modules.batchnorm.BatchNorm2d",
- "torch.nn.modules.container.Sequential",
- "torch._utils._rebuild_tensor_v2",
- "collections.OrderedDict",
- "torch.nn.modules.upsampling.Upsample",
- "torch.HalfStorage"
57.2 MB · add models
(unnamed file)      190 MB       Add fine tuned model
(unnamed file)      44 Bytes     test