---
language:
  - lt
  - en
size_categories:
  - 1M<n<10M
dataset_info:
  features:
    - name: translation
      struct:
        - name: en
          dtype: string
        - name: lt
          dtype: string
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 945130215
      num_examples: 5422278
    - name: validation
      num_bytes: 9521400
      num_examples: 54771
  download_size: 719193731
  dataset_size: 954651615
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
license: cc-by-2.5
---

Scoris logo

This dataset is a merge of several other open English-Lithuanian datasets.

Basic clean-up and deduplication were applied when creating this set.
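
The exact merging and clean-up pipeline is not published in this repository. The snippet below is only a minimal sketch of that kind of processing, assuming the component corpora are already available as Hugging Face datasets sharing the same `translation` struct; the names `corpus_a` and `corpus_b` are hypothetical placeholders, not the actual sources.

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical component corpora -- placeholders, not the actual source datasets.
corpus_a = load_dataset("corpus_a", split="train")
corpus_b = load_dataset("corpus_b", split="train")

# Merge the corpora (they must share the same "translation" struct schema).
merged = concatenate_datasets([corpus_a, corpus_b])

# Basic clean-up: strip whitespace and drop empty pairs.
def clean(example):
    example["translation"] = {
        "en": example["translation"]["en"].strip(),
        "lt": example["translation"]["lt"].strip(),
    }
    return example

merged = merged.map(clean)
merged = merged.filter(lambda ex: ex["translation"]["en"] and ex["translation"]["lt"])

# Simple exact-match deduplication on the (en, lt) pair.
# Note: a stateful filter like this only works single-process (the default num_proc).
seen = set()

def is_new(example):
    key = (example["translation"]["en"], example["translation"]["lt"])
    if key in seen:
        return False
    seen.add(key)
    return True

deduplicated = merged.filter(is_new)
print(deduplicated)
```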

It can be used to train Lithuanian-English and English-Lithuanian MT Seq2Seq models.
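
For example, here is a minimal fine-tuning sketch using the Hugging Face `transformers` Seq2Seq API. The checkpoint name `Helsinki-NLP/opus-mt-tc-big-en-lt`, the 256-token limit, the output directory, and the hyperparameters are placeholder assumptions for illustration, not choices made by the dataset authors; substitute any pretrained English-to-Lithuanian Seq2Seq model.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Placeholder checkpoint -- replace with any pretrained en->lt Seq2Seq model.
model_name = "Helsinki-NLP/opus-mt-tc-big-en-lt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("scoris/en-lt-merged-data")

def preprocess(batch):
    # Each example stores the pair under a "translation" struct with "en"/"lt" keys.
    sources = [pair["en"] for pair in batch["translation"]]
    targets = [pair["lt"] for pair in batch["translation"]]
    model_inputs = tokenizer(sources, max_length=256, truncation=True)
    labels = tokenizer(text_target=targets, max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

# Placeholder hyperparameters and output path.
training_args = Seq2SeqTrainingArguments(
    output_dir="en-lt-finetuned",
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```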

Made by the Scoris team.

You can load and explore the dataset as follows:

```python
from datasets import load_dataset

dataset_name = "scoris/en-lt-merged-data"

# Load the dataset
dataset = load_dataset(dataset_name)

# Accessing data
# Display the first example from the training set
print("First training example:", dataset['train'][0])

# Display the first example from the validation set
print("First validation example:", dataset['validation'][0])

# Iterate through a few examples from the training set
for i, example in enumerate(dataset['train']):
    if i < 5:
        print(f"Training example {i}:", example)
    else:
        break

# If you want to use the dataset in a machine learning model, you can directly
# iterate over the dataset or convert it to a pandas DataFrame for analysis
import pandas as pd

# Convert the training set to a pandas DataFrame (note: ~5.4M rows)
train_df = pd.DataFrame(dataset['train'])
print(train_df.head())
```
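
Note that each record stores the sentence pair under a nested `translation` struct with `en` and `lt` keys (see the metadata above). A small sketch of pulling the two sides out and flattening them into plain columns:

```python
# Access the English and Lithuanian sides of the nested "translation" struct.
first = dataset['train'][0]
print(first['translation']['en'], "->", first['translation']['lt'])

# Flatten the struct into "translation.en" / "translation.lt" columns.
flat = dataset['train'].flatten()
print(flat.select(range(5)).to_pandas()[['translation.en', 'translation.lt']])
```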