
This is the dataset of trained neural network checkpoints used to meta-train the NiNo model from https://github.com/SamsungSAILMontreal/nino/.

It contains 1000 models in total:

  • 300 small convnets with 3 layers and 16, 32 and 32 channels (14,378 parameters in each model), trained on FashionMNIST (FM-16)
  • 300 small convnets with 3 layers and 16, 32 and 32 channels (14,666 parameters in each model), trained on CIFAR10 (C10-16)
  • 200 small GPT2-based transformers with 3 layers, 24 hidden units and 3 heads (1,252,464 parameters in each model), trained on LM1B (LM1B-3-24)
  • 200 small GPT2-based transformers with 2 layers, 32 hidden units and 2 heads (1,666,464 parameters in each model), trained on LM1B (LM1B-2-32)

Each model contains multiple checkpoints:

  • 2,688 checkpoints per model in FM-16 (saved every 4 steps of Adam)
  • 2,513 checkpoints per model in C10-16 (saved every 4 steps of Adam)
  • 124 checkpoints per model in LM1B-3-24 (saved every 200 steps of Adam)
  • 124 checkpoints per model in LM1B-2-32 (saved every 200 steps of Adam)

In total, there are 1,609,900 model checkpoints.
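As a quick sanity check, the per-task figures above reproduce this total:

```python
# Number of models and checkpoints per model for each task, as listed above.
counts = {
    'fm-16':     (300, 2688),
    'c10-16':    (300, 2513),
    'lm1b-3-24': (200, 124),
    'lm1b-2-32': (200, 124),
}

total = sum(models * ckpts for models, ckpts in counts.values())
print(total)  # 1609900
```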

The dataset also contains the training loss for each checkpoint; for FM-16 and C10-16 it additionally contains the training accuracy, test loss, and test accuracy.

The dataset corresponds to the first 4 columns (in-distribution tasks) in Table 1 of the paper Accelerating Training with Neuron Interaction and Nowcasting Networks; see https://arxiv.org/abs/2409.04434 for details.

Example

UPDATE: To train the NiNo model more efficiently, we also uploaded the dataset in the mmap format (in the mmap branch), which is much faster to read for large arrays. See https://github.com/SamsungSAILMontreal/nino/blob/main/dataset/dataset.py#L123 for the dataset code and https://github.com/SamsungSAILMontreal/nino/blob/main/README.md#training-nino for training script examples.
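The speed advantage of mmap comes from lazy reads: only the slices you index are pulled from disk. The snippet below is a minimal illustration of that idea with NumPy's memmap; the file name and array layout here are assumptions for the demo, not the actual format of the mmap branch (see dataset/dataset.py in the NiNo repo for that).

```python
import numpy as np

# Illustrative layout: one trajectory as a (num_checkpoints, num_params)
# float16 array. A few checkpoints suffice for the demo; FM-16 itself has
# 2,688 checkpoints of 14,378 parameters each.
num_ckpts, num_params = 8, 14378

# Create a dummy trajectory file on disk.
traj = np.memmap('traj_demo.bin', dtype=np.float16, mode='w+',
                 shape=(num_ckpts, num_params))
traj.flush()

# Reopen read-only and index a single checkpoint; the other rows
# are never loaded into memory.
loaded = np.memmap('traj_demo.bin', dtype=np.float16, mode='r',
                   shape=(num_ckpts, num_params))
print(loaded[0].shape)  # (14378,)
```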

Download and access trajectories of the fm-16 task (the other three configs are 'c10-16', 'lm1b-3-24', and 'lm1b-2-32'). The original trajectories were saved as float16, but Hugging Face may convert them to float32 or float64 on loading, depending on the version.

from datasets import load_dataset
import numpy as np

# Load the 'fm-16' task
dataset = load_dataset('SamsungSAILMontreal/nino_metatrain', 'fm-16')  # you can use streaming=True for checking out individual samples

# Once downloaded, access the t-th checkpoint in the i-th trajectory:
i, t = 0, 0  # e.g. the first checkpoint of the first trajectory
print(np.array(dataset[str(i)][t]['data']).shape)  # (14378,)
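Since the stored precision was float16 anyway, you can cast loaded arrays back to float16 to halve memory without losing information. A small sketch with a synthetic stand-in for a loaded checkpoint:

```python
import numpy as np

# Stand-in for a checkpoint that the loader upcast to float64.
ckpt = np.zeros(14378, dtype=np.float64)

# Casting back to float16 is lossless here, because the values were
# originally stored at float16 precision.
ckpt16 = ckpt.astype(np.float16)
print(ckpt16.dtype)  # float16
```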