Dataset Viewer
The dataset viewer is not available for this split.
Cannot load the dataset split (in streaming mode) to extract the first rows.
Error code:   StreamingRowsError
Exception:    ValueError
Message:      Not able to read records in the JSON file at hf://datasets/hf-internal-testing/transformers_daily_ci@663c167b0121c9fbc455c90e7c33e114a25aed74/2024-06-05/ci_results_run_models_gpu/model_results.json. You should probably indicate the field of the JSON file containing your records. This JSON file contain the following fields: ['models_albert', 'models_align', 'models_altclip', 'models_audio_spectrogram_transformer', 'models_auto', 'models_autoformer', 'models_bark', 'models_bart', 'models_barthez', 'models_bartpho', 'models_beit', 'models_bert', 'models_bert_generation', 'models_bert_japanese', 'models_bertweet', 'models_big_bird', 'models_bigbird_pegasus', 'models_biogpt', 'models_bit', 'models_blenderbot', 'models_blenderbot_small', 'models_blip', 'models_blip_2', 'models_bloom', 'models_bridgetower', 'models_bros', 'models_byt5', 'models_camembert', 'models_canine', 'models_chinese_clip', 'models_clap', 'models_clip', 'models_clipseg', 'models_clvp', 'models_code_llama', 'models_codegen', 'models_cohere', 'models_conditional_detr', 'models_convbert', 'models_convnext', 'models_convnextv2', 'models_cpm', 'models_cpmant', 'models_ctrl', 'models_cvt', 'models_data2vec', 'models_dbrx', 'models_deberta', 'models_deberta_v2', 'models_decision_transformer', 'models_deformable_detr', 'models_deit', 'models_depth_anything', 'models_detr', 'models_dinat', 'models_dinov2', 'models_distilbert', 'models_dit', 'models_donut', 'models_dpr', 'models_dpt', 'models_efficientnet', 'models_electra', 'models_encodec', 'models_encoder_decoder', 'models_ernie', 'models_esm', 'models_falcon', 'models_fastspeech2_conformer', 'models_flaubert', 'models_flava', 'models_fnet', 'models_focalnet', 'models_fsmt', 'models_funnel', 'models_fuyu', 'models_gemma', 'models_git', 'models_glpn', 'models_gpt2', 'models_gpt_bigcode', 'models_gpt_neo', 'models_gpt_neox', 'models_gpt_neox_japanese', 'models_gpt_sw3', 'models_gptj', 'models_grounding_dino', 'models_groupvit', 'models_herbert', 
'models_hubert', 'models_ibert', 'models_idefics', 'models_idefics2', 'models_imagegpt', 'models_informer', 'models_instructblip', 'models_jamba', 'models_jetmoe', 'models_kosmos2', 'models_layoutlm', 'models_layoutlmv2', 'models_layoutlmv3', 'models_layoutxlm', 'models_led', 'models_levit', 'models_lilt', 'models_llama', 'models_llava', 'models_llava_next', 'models_longformer', 'models_longt5', 'models_luke', 'models_lxmert', 'models_m2m_100', 'models_mamba', 'models_marian', 'models_markuplm', 'models_mask2former', 'models_maskformer', 'models_mbart', 'models_mbart50', 'models_megatron_bert', 'models_megatron_gpt2', 'models_mgp_str', 'models_mistral', 'models_mixtral', 'models_mluke', 'models_mobilebert', 'models_mobilenet_v1', 'models_mobilenet_v2', 'models_mobilevit', 'models_mobilevitv2', 'models_mpnet', 'models_mpt', 'models_mra', 'models_mt5', 'models_musicgen', 'models_musicgen_melody', 'models_mvp', 'models_nllb', 'models_nllb_moe', 'models_nougat', 'models_nystromformer', 'models_olmo', 'models_oneformer', 'models_openai', 'models_opt', 'models_owlv2', 'models_owlvit', 'models_paligemma', 'models_patchtsmixer', 'models_patchtst', 'models_pegasus', 'models_pegasus_x', 'models_perceiver', 'models_persimmon', 'models_phi', 'models_phi3', 'models_phobert', 'models_pix2struct', 'models_plbart', 'models_poolformer', 'models_pop2piano', 'models_prophetnet', 'models_pvt', 'models_pvt_v2', 'models_qwen2', 'models_qwen2_moe', 'models_rag', 'models_recurrent_gemma', 'models_reformer', 'models_regnet', 'models_rembert', 'models_resnet', 'models_roberta', 'models_roberta_prelayernorm', 'models_roc_bert', 'models_roformer', 'models_rwkv', 'models_sam', 'models_seamless_m4t', 'models_seamless_m4t_v2', 'models_segformer', 'models_seggpt', 'models_sew', 'models_sew_d', 'models_siglip', 'models_speech_encoder_decoder', 'models_speech_to_text', 'models_speecht5', 'models_splinter', 'models_squeezebert', 'models_stablelm', 'models_starcoder2', 'models_superpoint', 
'models_swiftformer', 'models_swin', 'models_swin2sr', 'models_swinv2', 'models_switch_transformers', 'models_t5', 'models_table_transformer', 'models_tapas', 'models_time_series_transformer', 'models_timesformer', 'models_timm_backbone', 'models_trocr', 'models_tvp', 'models_udop', 'models_umt5', 'models_unispeech', 'models_unispeech_sat', 'models_univnet', 'models_upernet', 'models_video_llava', 'models_videomae', 'models_vilt', 'models_vipllava', 'models_vision_encoder_decoder', 'models_vision_text_dual_encoder', 'models_visual_bert', 'models_vit', 'models_vit_mae', 'models_vit_msn', 'models_vitdet', 'models_vitmatte', 'models_vits', 'models_vivit', 'models_wav2vec2', 'models_wav2vec2_bert', 'models_wav2vec2_conformer', 'models_wav2vec2_phoneme', 'models_wav2vec2_with_lm', 'models_wavlm', 'models_whisper', 'models_x_clip', 'models_xglm', 'models_xlm', 'models_xlm_roberta', 'models_xlm_roberta_xl', 'models_xlnet', 'models_xmod', 'models_yolos', 'models_yoso', 'agents', 'benchmark', 'bettertransformer', 'deepspeed', 'extended', 'fixtures', 'fsdp', 'generation', 'optimization', 'peft_integration', 'pipelines', 'quantization', 'repo_utils', 'sagemaker', 'tokenization', 'trainer', 'utils']. Select the correct one and provide it as `field='XXX'` to the dataset loading method. 
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 96, in get_rows_or_raise
                  return get_rows(
                File "/src/libs/libcommon/src/libcommon/utils.py", line 197, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 73, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1389, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 282, in __iter__
                  for key, pa_table in self.generate_tables_fn(**self.kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 170, in _generate_tables
                  raise ValueError(
              ValueError: Not able to read records in the JSON file at hf://datasets/hf-internal-testing/transformers_daily_ci@663c167b0121c9fbc455c90e7c33e114a25aed74/2024-06-05/ci_results_run_models_gpu/model_results.json. You should probably indicate the field of the JSON file containing your records. [field list identical to the error message above, omitted] Select the correct one and provide it as `field='XXX'` to the dataset loading method.
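The failure mode is that this `model_results.json` is a single JSON object keyed by test-suite names, not a top-level array of records, so the streaming JSON loader cannot infer where the rows live. A minimal sketch of the situation and the suggested fix, using a synthetic local file and hypothetical field names (`models_bert` is one of the fields listed in the error; the `load_dataset` call is shown as a comment rather than executed):

```python
import json
import os
import tempfile

# Reproduce the shape that trips up the loader: an object whose keys
# are field names, each mapping to a list of records.
doc = {
    "models_bert": [{"test": "a", "status": "pass"}],
    "models_gpt2": [{"test": "b", "status": "fail"}],
}

path = os.path.join(tempfile.mkdtemp(), "model_results.json")
with open(path, "w") as f:
    json.dump(doc, f)

# The fix the error message suggests (not run here) would be:
#   from datasets import load_dataset
#   ds = load_dataset("json", data_files=path, field="models_bert")
# which is equivalent to selecting one key of the object manually:
with open(path) as f:
    records = json.load(f)["models_bert"]

print(records)  # [{'test': 'a', 'status': 'pass'}]
```

The viewer streams the split without a `field` argument, which is why every key of the object is reported back instead of rows.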

Downloads last month: 3