
# Spongebob Transcripts Dataset 🧽

The Spongebob Transcripts Dataset is a collection of transcripts from the beloved animated television series SpongeBob SquarePants. Each row records one line of dialogue: the character's name (`Speaker`), the line itself (`Replica`), and the episode ID (`EP_ID`).

- Distinct characters (speakers): 84
- Total words: ~80,800
- Total rows: ~4,000
- Coverage: updated to include all of Season 1

## Dataset Overview 📊

| Column | Description |
|---|---|
| `Speaker` | The character speaking the dialogue. |
| `Replica` | The line of dialogue spoken. |
| `EP_ID` | The episode ID of the transcript. |
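
A minimal loading sketch follows. The repository id `krplt/spongebob_transcripts` comes from this card; the filename `dataset.csv` matches the pandas examples further down and is an assumption about the actual file in the repository.

```python
import pandas as pd
from datasets import load_dataset

# Option 1: load through the Hugging Face hub (repository id from this card).
# Assumes the hosted CSV parses cleanly with the default loader.
ds = load_dataset("krplt/spongebob_transcripts", split="train")

# Option 2: read a local copy directly with pandas. "dataset.csv" is the
# filename used in the filtering examples below and may differ in practice.
df = pd.read_csv("dataset.csv")
print(df.columns.tolist())  # expected: Speaker, Replica, EP_ID (case may vary)
```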

## System Replicas 🔍

System replicas describe the actions and events that occur in each episode. They use the special speaker name `{system}` and wrap the description in square brackets, as in the example below.

### Replica Format

```
{system} : [The episode opens with a bubble transition, and we see a coral reef under the sea. The camera zooms to initiate parallax scrolling, which reveals the city of Bikini Bottom. It continues zooming to show a brown rock, a Moai head, and a pineapple, which each contain inhabitants.]
```
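
A small illustration of pulling the action text out of a system replica. The `{system}` speaker value and the bracketed format are taken from the example above; the helper function itself is hypothetical.

```python
import re
from typing import Optional

def extract_action(replica: str) -> Optional[str]:
    """Return the bracketed action text of a system replica, or None.

    Hypothetical helper: assumes the action is wrapped in square
    brackets, as in the format example above.
    """
    match = re.search(r"\[(.*)\]", replica, flags=re.DOTALL)
    return match.group(1) if match else None

print(extract_action("[The episode opens with a bubble transition.]"))
# -> The episode opens with a bubble transition.
```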

## Sample Data 💬

| Speaker | Replica | EP_ID |
|---|---|---|
| Spongebob | I just met this girl. She wears a hat full of... air. | s1e3_22 |
| Patrick | Do you mean she puts on "airs"? | s1e3_23 |
| Spongebob | I guess so. | s1e3_24 |
| Patrick | That's just fancy talk. If you wanna be fancy, hold your pinky up like this. The higher you hold it, the fancier you are. | s1e3_25 |
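
The `EP_ID` values appear to encode the season, episode, and line number (`s1e3_22` would be season 1, episode 3, line 22). That reading is inferred from the sample above rather than documented, so treat this parser as a sketch:

```python
import re
from typing import Optional

def parse_ep_id(ep_id: str) -> Optional[dict]:
    """Split an EP_ID like 's1e3_22' into season, episode, and line number.

    Assumes the 's<season>e<episode>_<line>' pattern inferred from the
    sample data; returns None if a value does not match.
    """
    match = re.fullmatch(r"s(\d+)e(\d+)_(\d+)", ep_id)
    if not match:
        return None
    season, episode, line = map(int, match.groups())
    return {"season": season, "episode": episode, "line": line}

print(parse_ep_id("s1e3_22"))  # {'season': 1, 'episode': 3, 'line': 22}
```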

## 📊 Interactions with the Dataset

### Using pandas to filter rows

1. To find all rows with a specific `ep_id` prefix, you can use the following code:

```python
import pandas as pd

# Read the CSV file into a pandas DataFrame
df = pd.read_csv('dataset.csv')

# Define the ep_id prefix you want to filter by
ep_id = 's1e2'

# Filter the DataFrame to rows whose ep_id starts with the prefix.
# Adjust the column name ('ep_id' vs. 'EP_ID') to match the CSV header;
# appending '_' avoids also matching 's1e20', 's1e21', and so on.
filtered_df = df[df['ep_id'].str.startswith(ep_id + '_')]

# Print the filtered DataFrame
print(filtered_df)
```
2. To find rows where a specific character says a specific word or phrase, you can use the following code:

```python
# Filter the DataFrame (loaded above) to rows where a specific character
# says a specific word or phrase
speaker = 'SpongeBob'
word_or_phrase = 'jellyfish'
filtered_df = df[df['speaker'] == speaker]
# na=False treats rows with a missing replica as non-matches instead of errors
filtered_df = filtered_df[filtered_df['replica'].str.contains(word_or_phrase, na=False)]

# Print the filtered DataFrame
print(filtered_df)
```

You can replace `SpongeBob` and `jellyfish` with any other speaker and word/phrase you want to filter by. Note that the match is case-sensitive; the sample data above spells the speaker `Spongebob`.
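
Beyond filtering, simple aggregations work the same way. A quick sketch, again assuming the lowercase column names used above, that counts how many lines each character has:

```python
# Count lines per character and show the ten most frequent speakers
line_counts = df['speaker'].value_counts()
print(line_counts.head(10))
```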

## Data Sources 📝

The transcripts were sourced from Encyclopedia SpongeBobia.

## Potential Uses 🧐

This dataset could be used for a variety of natural language processing (NLP) tasks, including dialogue generation (see the sketch below). It could also be used for educational purposes, such as studying the language and communication styles of different characters.
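
For dialogue generation in particular, one plausible preprocessing step is to pair each line with the next line from the same episode. A minimal sketch, under the same column-name and `EP_ID`-format assumptions as above:

```python
# Pair each replica with the following replica of the same episode to
# form (context, response) examples for dialogue generation.
pairs = []
rows = df.to_dict('records')
for prev, curr in zip(rows, rows[1:]):
    # ep_id values like 's1e3_22' share the 's1e3' prefix within an episode
    if prev['ep_id'].rsplit('_', 1)[0] == curr['ep_id'].rsplit('_', 1)[0]:
        pairs.append((prev['replica'], curr['replica']))

print(pairs[0])  # first (context, response) pair
```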
