Latest commit: new model (c5196fd)

Files

(unnamed file) · 6.15 kB · init

tft_check.ckpt · 5.18 MB · init
Detected Pickle imports (21)
- "_codecs.encode",
- "pytorch_forecasting.data.encoders.NaNLabelEncoder",
- "torchmetrics.utilities.data.dim_zero_sum",
- "numpy.ndarray",
- "torch.device",
- "numpy.dtype",
- "pytorch_forecasting.metrics.point.MAPE",
- "sklearn.preprocessing._data.StandardScaler",
- "numpy.core.multiarray.scalar",
- "torchmetrics.metric.jit_distributed_available",
- "torch.FloatStorage",
- "torch._utils._rebuild_tensor_v2",
- "torch.LongStorage",
- "numpy.core.multiarray._reconstruct",
- "collections.OrderedDict",
- "torch.nn.modules.container.ModuleList",
- "pytorch_forecasting.data.encoders.EncoderNormalizer",
- "__builtin__.set",
- "pytorch_forecasting.metrics.point.MAE",
- "pytorch_forecasting.metrics.point.RMSE",
- "pytorch_forecasting.metrics.point.SMAPE"
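
The scan above matters if you want to load this checkpoint with `torch.load(..., weights_only=True)` (PyTorch 2.4+): any flagged global that is not in PyTorch's built-in allowlist must be explicitly trusted first. Below is a minimal sketch, assuming you trust this repository and run a recent PyTorch; exactly which globals still need allowlisting depends on your PyTorch version, and `torch.load` names any that remain blocked in its error message.

```python
# Sketch: allowlist the classes flagged by the pickle scan, then load the
# checkpoint through the restricted (weights_only) unpickler.
# Assumes PyTorch >= 2.4 and that the listed pytorch_forecasting / sklearn
# classes are trusted.
import collections

import numpy as np
import torch
from sklearn.preprocessing import StandardScaler
from pytorch_forecasting.data.encoders import EncoderNormalizer, NaNLabelEncoder
from pytorch_forecasting.metrics.point import MAE, MAPE, RMSE, SMAPE

torch.serialization.add_safe_globals([
    EncoderNormalizer, NaNLabelEncoder,   # pytorch_forecasting encoders from the scan
    MAE, MAPE, RMSE, SMAPE,               # pytorch_forecasting point metrics from the scan
    StandardScaler,                       # sklearn scaler referenced by the checkpoint
    np.ndarray, np.dtype,                 # numpy objects from the scan
    collections.OrderedDict, set,         # builtins from the scan
])

# Torch internals such as torch._utils._rebuild_tensor_v2 and the storage
# classes are allowed by default under weights_only=True.
ckpt = torch.load("tft_check.ckpt", map_location="cpu", weights_only=True)
print(ckpt.keys())  # typical Lightning checkpoint keys: state_dict, hyper_parameters, ...
```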

tft_check_q.ckpt · 5.18 MB · new model
Detected Pickle imports (21)
- "_codecs.encode",
- "pytorch_forecasting.data.encoders.NaNLabelEncoder",
- "torchmetrics.utilities.data.dim_zero_sum",
- "numpy.ndarray",
- "torch.device",
- "numpy.dtype",
- "pytorch_forecasting.metrics.point.MAPE",
- "sklearn.preprocessing._data.StandardScaler",
- "numpy.core.multiarray.scalar",
- "torchmetrics.metric.jit_distributed_available",
- "torch.FloatStorage",
- "torch._utils._rebuild_tensor_v2",
- "torch.LongStorage",
- "numpy.core.multiarray._reconstruct",
- "collections.OrderedDict",
- "torch.nn.modules.container.ModuleList",
- "pytorch_forecasting.data.encoders.EncoderNormalizer",
- "__builtin__.set",
- "pytorch_forecasting.metrics.point.MAE",
- "pytorch_forecasting.metrics.point.RMSE",
- "pytorch_forecasting.metrics.point.SMAPE"
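
For normal use, the usual route in PyTorch Forecasting is Lightning's `load_from_checkpoint`. The sketch below assumes the checkpoints hold a TemporalFusionTransformer (suggested by the `tft_` file names and the flagged `pytorch_forecasting` imports); depending on your Lightning version this path runs a pickle-based `torch.load` under the hood, so it is only appropriate for checkpoints you trust.

```python
# Sketch: load the checkpoint the standard pytorch_forecasting / Lightning way.
# The model class is an assumption based on the "tft_" file names.
from pytorch_forecasting import TemporalFusionTransformer

model = TemporalFusionTransformer.load_from_checkpoint(
    "tft_check_q.ckpt", map_location="cpu"
)
model.eval()

# The loaded module can then be used for inference on a matching
# TimeSeriesDataSet / dataloader, e.g. model.predict(dataloader).
```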