LLM Security Evaluation
This repo contains scripts for evaluating the security abilities of LLMs. We gathered hundreds of questions covering different aspects of security, such as vulnerabilities, penetration testing, threat intelligence, etc.
All the questions can be viewed at https://huggingface.co/datasets/c01dsnap/LLM-Sec-Evaluation.
Supported LLMs
- ChatGLM
- Baichuan
- Vicuna (GGML format)
Usage
Because different LLMs require different running environments, we highly recommend managing your virtual envs via Miniconda.
1. Install dependencies
pip install -r requirements.txt
# If you want to use GPU, please install llama-cpp-python with the following command
LLAMA_CUBLAS=1 CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir --force-reinstall --verbose
2. Clone this repo
git clone https://github.com/Coldwave96/LLM-Sec-Evaluation
cd LLM-Sec-Evaluation
3. Run bash scripts
# You might need to modify the interpreter path used to run evaluate.py
bash evaluate.sh
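evaluate.sh drives the Python evaluation over the question set. The actual script may differ, but the core loop can be sketched as below; the `ask_model` stub is a placeholder standing in for a real ChatGLM, Baichuan, or llama-cpp Vicuna call:

```python
import json

def ask_model(question: str) -> str:
    # Stub: replace with a real model call (ChatGLM, Baichuan,
    # or a GGML-format Vicuna via llama-cpp-python).
    return f"[answer to: {question}]"

def evaluate(questions: list[str]) -> list[dict]:
    """Ask the model each security question and collect the answers."""
    return [{"question": q, "answer": ask_model(q)} for q in questions]

if __name__ == "__main__":
    sample = ["What is SQL injection?", "How does a CSRF attack work?"]
    print(json.dumps(evaluate(sample), indent=2))
```

The collected question/answer pairs can then be reviewed or scored however your workflow requires.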
Changelog
- 2023.7.13 - Add support for ChatGLM & Baichuan
- 2023.7.17 - Add support for Vicuna