Dataset Querying support

#10
by ardrap - opened

Hi umarbutler (https://huggingface.co/umarbutler),

Thank you for the great work. It is really amazing.
I was querying the embeddings using the code you provided. First I got a "dataset not found" error, which I fixed by copying and pasting the dataset name. After that, I got the error below. Can you please tell me how to debug it? Could you also share how much time it took and which cloud provider you used to create the embeddings? I would also like to save these embeddings to a vector store such as Chroma or Qdrant.
JSONDecodeError: Extra data: line 2 column 1 (char 4704)
During handling of the above exception, another exception occurred:
ArrowInvalid Traceback (most recent call last)
Cell In[2], line 13
10 oale = load_dataset('umarbutler/open-australian-legal-embeddings', split='train', streaming=True) # Set streaming to False if you wish to load the entire dataset into memory (unadvised unless you have at least 64 GB of RAM).
12 # Sample the first 100,000 embeddings.
---> 13 sample = list(itertools.islice(oale, 100000))
15 # Embed a query.
16 query = model.encode(instruction + 'Who is the Governor-General of Australia?', normalize_embeddings=True)

File /usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py:1379, in IterableDataset.__iter__(self)
1376 yield formatter.format_row(pa_table)
1377 return
-> 1379 for key, example in ex_iterable:
1380 if self.features:
1381 # IterableDataset automatically fills missing columns with None.
1382 # This is done with _apply_feature_types_on_example.
1383 example = _apply_feature_types_on_example(
1384 example, self.features, token_per_repo_id=self._token_per_repo_id
1385 )

File /usr/local/lib/python3.10/dist-packages/datasets/iterable_dataset.py:281, in ArrowExamplesIterable.__iter__(self)
279 def __iter__(self):
280 formatter = PythonFormatter()
--> 281 for key, pa_table in self.generate_tables_fn(**self.kwargs):
282 for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER):
283 formatted_batch = formatter.format_batch(pa_subtable)

File /usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py:147, in Json._generate_tables(self, files)
145 except json.JSONDecodeError:
146 logger.error(f"Failed to read file '{file}' with error {type(e)}: {e}")
--> 147 raise e
148 # If possible, parse the file as a list of json objects and exit the loop
149 if isinstance(dataset, list): # list is the only sequence type supported in JSON

File /usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py:121, in Json._generate_tables(self, files)
119 while True:
120 try:
--> 121 pa_table = paj.read_json(
122 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
123 )
124 break
125 except (pa.ArrowInvalid, pa.ArrowNotImplementedError) as e:

File /usr/local/lib/python3.10/dist-packages/pyarrow/_json.pyx:308, in pyarrow._json.read_json()
File /usr/local/lib/python3.10/dist-packages/pyarrow/error.pxi:154, in pyarrow.lib.pyarrow_internal_check_status()
File /usr/local/lib/python3.10/dist-packages/pyarrow/error.pxi:91, in pyarrow.lib.check_status()
ArrowInvalid: JSON parse error: Column() changed from object to array in row 0

Seeing as the code works in my local repo but not remotely, it is likely an issue with streaming the embeddings. Instead, try setting streaming=False and see how that goes. I've also updated the README accordingly.
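
For anyone following along, a minimal sketch of that fix, using the same dataset and sample size as the snippet in the traceback above:

import itertools
from datasets import load_dataset

# Loading without streaming sidesteps the JSON parse error hit while streaming.
# Caution: this loads the entire dataset into memory, which the README advises
# against unless you have at least 64 GB of RAM.
oale = load_dataset('umarbutler/open-australian-legal-embeddings', split='train', streaming=False)

# Sample the first 100,000 embeddings, as in the original snippet.
sample = list(itertools.islice(oale, 100000))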

I don't recall how long it took to create the embeddings. I created them locally, not using a cloud provider.
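
On the vector-storage part of the question above, here is a minimal, hypothetical sketch of loading a sample into Chroma. The column names 'text' and 'embedding' are assumptions made for illustration; check the dataset card for the actual schema before running this.

import itertools
import chromadb
from datasets import load_dataset

# If the streaming error above recurs, load with streaming=False instead.
oale = load_dataset('umarbutler/open-australian-legal-embeddings', split='train', streaming=True)
sample = list(itertools.islice(oale, 1000))  # small sample for illustration

client = chromadb.PersistentClient(path='./oale_chroma')
# Cosine distance suits normalised embeddings better than Chroma's L2 default.
collection = client.get_or_create_collection('oale', metadata={'hnsw:space': 'cosine'})
collection.add(
    ids=[str(i) for i in range(len(sample))],          # synthetic ids for the sketch
    embeddings=[row['embedding'] for row in sample],   # assumed column name
    documents=[row['text'] for row in sample],         # assumed column name
)

# A normalised query vector (e.g. from the model.encode(...) call in the
# snippet above) can then be searched with:
# results = collection.query(query_embeddings=[query.tolist()], n_results=5)

Qdrant works analogously via qdrant_client: create a collection with the matching vector size and distance metric, then upsert a list of PointStruct points.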

umarbutler changed discussion status to closed
