

Please use the following code to load the data:

# start data loading
!git lfs install
!git clone https://huggingface.co/datasets/nlp-guild/non-linear-classification

import numpy as np

def load_dataset(path='dataset.npy'):
    """
    :return:
        f_and_xs: numpy array of size [sample_number, channels, sample_length]
        label_0, label_1, label_2: one-hot encodings of size [sample_number, number_bins]
    """
    # The .npy file stores a pickled dict, so allow_pickle is required
    # and .item() unwraps the 0-d object array back into the dict.
    r = np.load(path, allow_pickle=True).item()
    f_and_xs = r['f_and_xs']
    label_0 = r['l_0']
    label_1 = r['l_1']
    label_2 = r['l_2']
    return f_and_xs, label_0, label_1, label_2

# The clone above creates a directory named non-linear-classification
# (path shown for a Colab environment):
f_and_xs, label_0, label_1, label_2 = load_dataset('/content/non-linear-classification/dataset.npy')
# end data loading
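Since the file is a pickled dict of NumPy arrays, its layout can be illustrated end to end with synthetic data. The sketch below writes and reads a file in the same format; the concrete shapes (channels=2, sample_length=100, number_bins=10) are illustrative assumptions, not the dataset's actual dimensions.

```python
import numpy as np

# Illustrative sizes only -- the real dataset's dimensions may differ.
sample_number, channels, sample_length, number_bins = 4, 2, 100, 10
rng = np.random.default_rng(0)

# Build a dict mirroring the keys that load_dataset() expects.
record = {
    'f_and_xs': rng.standard_normal((sample_number, channels, sample_length)),
    'l_0': np.eye(number_bins)[rng.integers(0, number_bins, sample_number)],
    'l_1': np.eye(number_bins)[rng.integers(0, number_bins, sample_number)],
    'l_2': np.eye(number_bins)[rng.integers(0, number_bins, sample_number)],
}
np.save('dataset_demo.npy', record)  # np.save pickles the dict

# Round-trip: same pattern as load_dataset() above.
r = np.load('dataset_demo.npy', allow_pickle=True).item()
f_and_xs = r['f_and_xs']
print(f_and_xs.shape)              # (4, 2, 100)
classes = r['l_0'].argmax(axis=1)  # one-hot -> integer class indices
print(classes.shape)               # (4,)
```

The `argmax(axis=1)` step is the usual way to turn the one-hot label matrices back into integer class indices for training or evaluation.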