QServe benchmarks

This Hugging Face repository contains the configuration and tokenizer files for all models benchmarked in our QServe project (a sketch of how to fetch one of these files follows the list):

  • Llama-3-8B
  • Llama-2-7B
  • Llama-2-13B
  • Llama-2-70B
  • Llama-30B
  • Mistral-7B
  • Yi-34B
  • Qwen1.5-72B
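
For example, a single tokenizer file can be fetched from this repository and loaded directly with the tokenizers library. The snippet below is only an illustrative sketch, assuming the huggingface_hub and tokenizers packages are installed; the Llama-2-13B/tokenizer.json path follows the per-model layout listed above.

from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Download only the tokenizer file for one model; repo_type="dataset"
# because this repository is hosted as a dataset repo, not a model repo.
tokenizer_path = hf_hub_download(
    repo_id="mit-han-lab/QServe-benchmarks",
    filename="Llama-2-13B/tokenizer.json",
    repo_type="dataset",
)

# Load the fast tokenizer directly from the downloaded JSON file.
tokenizer = Tokenizer.from_file(tokenizer_path)
print(tokenizer.encode("Hello, QServe!").tokens)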

Please clone this repository if you wish to run our QServe benchmark code without downloading the full model weights.
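
Equivalently, the repository can be mirrored programmatically. The sketch below assumes the huggingface_hub package is installed; a plain `git clone https://huggingface.co/datasets/mit-han-lab/QServe-benchmarks` achieves the same result from the command line.

from huggingface_hub import snapshot_download

# Mirror the whole repository (configs and tokenizer files only, no weights)
# into the local cache; repo_type="dataset" since it is a dataset repo.
local_dir = snapshot_download(
    repo_id="mit-han-lab/QServe-benchmarks",
    repo_type="dataset",
)
print("Repository downloaded to:", local_dir)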

Please consider citing our paper if you find QServe helpful:

@article{lin2024qserve,
  title={QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving},
  author={Lin*, Yujun and Tang*, Haotian and Yang*, Shang and Zhang, Zhekai and Xiao, Guangxuan and Gan, Chuang and Han, Song},
  year={2024}
}