# h2oGPT Database Data Card

## Summary

H2O.ai's Chroma database files for h2oGPT's LangChain integration. Sources are generated and processed by get_db().

| File | Purpose | Source | License |
|------|---------|--------|---------|
| db_dir_DriverlessAI_docs.zip | DriverlessAI documentation Q/A | Source | CC-BY-NC |
| db_dir_UserData.zip | Example PDFs and text files Q/A | Source | ArXiv |
| db_dir_github_h2oGPT.zip | h2oGPT GitHub repo Q/A | Source | Apache V2 |
| db_dir_wiki.zip | Example subset of Wikipedia (from API) Q/A | Source | Wikipedia CC-BY-SA |
| db_dir_wiki_full.zip | All of Wikipedia as of 04/01/2023 (articles with >5k views) Q/A | Source | Wikipedia CC-BY-SA |
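Each archive unzips to a Chroma persistence directory that LangChain can open directly, with the index stored entirely inside the directory, so no rebuild is needed after unzipping. Below is a minimal sketch of loading one of them, assuming the langchain 0.0.x-era API; the embedding model name is an assumption here and must match whatever get_db() used when the database was built.

```python
# Minimal sketch: open an unzipped database, e.g. db_dir_UserData/, as a
# LangChain vector store and retrieve the chunks most similar to a query.
# NOTE: the embedding model below is an assumption; it must match the model
# h2oGPT's get_db() used when the database was built.
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")  # assumed
db = Chroma(persist_directory="db_dir_UserData", embedding_function=embeddings)

# Retrieve the four chunks most similar to the query
docs = db.similarity_search("What is Driverless AI?", k=4)
for doc in docs:
    print(doc.metadata.get("source"), doc.page_content[:100])
```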

UserData can be generated for any collection of private offline docs by running make_db.py. To quickly use a private document collection for Q/A, place documents (PDFs, text files, etc.) into a folder called user_path and run:

```bash
python make_db.py
```
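Conceptually, make_db.py follows the standard LangChain ingestion pattern: load documents from user_path, split them into chunks, embed them, and persist a Chroma database. The sketch below illustrates that pattern only; the real script supports many more file types and options, and the chunk size and embedding model shown are assumptions.

```python
# Illustrative sketch of the ingestion pattern behind make_db.py
# (not the script's actual code).
from pathlib import Path

from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Load PDFs and text files found under user_path/
docs = []
for path in Path("user_path").rglob("*"):
    if path.suffix.lower() == ".pdf":
        docs.extend(PyPDFLoader(str(path)).load())
    elif path.suffix.lower() == ".txt":
        docs.extend(TextLoader(str(path)).load())

# Split into overlapping chunks sized for retrieval (settings are assumptions)
chunks = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=50).split_documents(docs)

# Embed the chunks and persist a Chroma database to disk
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")  # assumed
db = Chroma.from_documents(chunks, embeddings, persist_directory="db_dir_UserData")
db.persist()
```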

To use the chatbot with such docs, run:

```bash
python generate.py --base_model=h2oai/h2ogpt-oig-oasst1-512-6.9b --langchain_mode=UserData
```

using h2oGPT. Any other instruct-tuned base model can be used, including non-h2oGPT ones, as long as the required GPU memory is available for the given model size. Alternatively, one can choose 8-bit generation, sketched below.
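For reference, the 8-bit path (what --load_8bit=True switches on) corresponds roughly to the following transformers + bitsandbytes usage; this is a sketch, not the exact code generate.py runs.

```python
# Sketch of 8-bit model loading via bitsandbytes (requires a CUDA GPU);
# weights are quantized to int8, roughly halving memory vs. fp16.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2ogpt-oig-oasst1-512-6.9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # int8 quantization via bitsandbytes
    device_map="auto",   # spread layers across available devices
)
```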

See also the LangChain usage example in test_langchain_simple.py.
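A minimal retrieval-augmented Q/A pipeline over one of these databases might look like the sketch below. It is illustrative only, not the exact contents of test_langchain_simple.py, and the embedding model is again an assumption.

```python
# Sketch: retrieval-augmented Q/A over an unzipped database with LangChain.
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.llms import HuggingFacePipeline
from langchain.vectorstores import Chroma

embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")  # assumed
db = Chroma(persist_directory="db_dir_UserData", embedding_function=embeddings)

# Wrap an h2oGPT model as a LangChain LLM via a transformers pipeline
llm = HuggingFacePipeline.from_model_id(
    model_id="h2oai/h2ogpt-oig-oasst1-512-6.9b",
    task="text-generation",
)

# "stuff" chain: retrieved chunks are stuffed into a single prompt
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
print(qa.run("What is Driverless AI?"))
```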

If one has obtained all databases (except wiki_full) and unzipped them into the current directory, then the h2oGPT chatbot can be run like:

```bash
python generate.py --base_model=h2oai/h2ogpt-oasst1-512-12b --load_8bit=True --langchain_mode=UserData --visible_langchain_modes="['UserData', 'wiki', 'MyData', 'github h2oGPT', 'DriverlessAI docs']"
```

which now uses the 12B model in 8-bit mode, fitting onto a single 24GB GPU.

If one has obtained all databases (including wiki_full) and unzipped them into the current directory, then the h2oGPT chatbot can be run like:

```bash
python generate.py --base_model=h2oai/h2ogpt-oasst1-512-12b --load_8bit=True --langchain_mode=wiki_full --visible_langchain_modes="['UserData', 'wiki_full', 'MyData', 'github h2oGPT', 'DriverlessAI docs']"
```

which defaults to wiki_full for Q/A against full Wikipedia.
