There are four separate lexicons, each in its own CSV file. All of the files are delimited by carets (^). Two of the files, the BLB Hebrew and BLB Greek lexicons, also include internal delimiters; the affected columns appear in the lists below wrapped in ~ and %. Loading sketches follow the column lists.

Blue Letter Bible (BLB) Hebrew

LexicalID ^ Total Translated ^ Connection ^ Gender ^ POS ^ Pronunciation ^ Transliteration ^ TWOT Number ^ IsAramaic ^ IsRoot ^ ~Translation Counts% ^ ~Lexical Hierarchy%

BLB Greek

LexicalID ^ IsRoot ^ Total Translated ^ Connection ^ Gender ^ POS ^ Pronunciation ^ Transliteration ^ TWOT Number ^ ~Conjugated% ^ ~Translation Counts% ^ ~Extra TDNT Information% ^ ~Lexical Hierarchy%

STEPBible Hebrew

Line Number ^ Part Number ^ Parse ^ Strongs Number ^ Word Order ID ^ Translation Order ID ^ Translation ^ Translation Alternate ^ Translation Alternate 2 ^ Translation Semantic

NT Words

Gloss ^ Book Number ^ Chapter Number ^ Verse Number ^ Word ID ^ Word Function ^ Clause Level ^ Clause ^ Clause Function ^ Subclause ^ Subclause Function ^ Greek Clause Level ^ Greek ID ^ Strongs ^ Morphology ^ Greek Word ^ Word ID
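
Because the files use ^ rather than a comma as the field separator, a default CSV reader will misparse them. A minimal loading sketch with pandas; the file name is a placeholder for whichever lexicon file you are reading:

```python
import pandas as pd

# The lexicons are caret-delimited, so pandas' default comma separator
# fails. A single-character sep keeps the fast C engine. The file name
# is a placeholder for any of the four lexicon CSVs.
df = pd.read_csv("BLBGreekLexicon.csv", sep="^")

# If the carets turn out to be padded with spaces, as the column lists
# above might suggest, use a regex separator (requires the Python engine):
# df = pd.read_csv("BLBGreekLexicon.csv", sep=r"\s*\^\s*", engine="python")

print(df.columns.tolist())
```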

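The internally delimited columns then need a second parsing pass. The sketch below continues from the pandas example; the column name comes from the BLB headers above, but the "|" sub-delimiter is purely a hypothetical stand-in, since the actual internal separator is not documented here:

```python
# Assumption: "|" stands in for whatever internal delimiter the
# ~...% columns actually use -- replace it with the real character.
SUB_DELIM = "|"

# Split one internally delimited column into lists of values.
df["~Translation Counts%"] = (
    df["~Translation Counts%"]
    .astype(str)          # guard against missing values
    .str.split(SUB_DELIM)
)
```
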
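The same separator override works when loading through the Hugging Face datasets library, whose CSV builder forwards keyword arguments such as sep to pandas.read_csv. Again, the file name is a placeholder:

```python
from datasets import load_dataset

# The "csv" builder passes `sep` through to pandas.read_csv, so the
# caret delimiter can be declared at load time. Placeholder file name.
ds = load_dataset("csv", data_files="BLBGreekLexicon.csv", sep="^", split="train")
print(ds.column_names)
```

Declaring the same sep per config in the README's configs metadata should likewise let the hosted dataset viewer parse the files.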