---
language:
- lt
- en
size_categories:
- 1M<n<10M
dataset_info:
  features:
  - name: translation
    struct:
    - name: en
      dtype: string
    - name: lt
      dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 945130215
    num_examples: 5422278
  - name: validation
    num_bytes: 9521400
    num_examples: 54771
  download_size: 719193731
  dataset_size: 954651615
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: cc-by-2.5
---
![Scoris logo](https://scoris.lt/logo_smaller.png)

This dataset is a merge of several open datasets (a reproduction sketch follows the list):
- [wmt19](https://huggingface.co/datasets/wmt19) (lt-en)
- [opus100](https://huggingface.co/datasets/opus100) (en-lt)
- [sentence-transformers/parallel-sentences](https://huggingface.co/datasets/sentence-transformers/parallel-sentences)
- Europarl-en-lt-train.tsv.gz
- JW300-en-lt-train.tsv.gz
- OpenSubtitles-en-lt-train.tsv.gz
- Talks-en-lt-train.tsv.gz
- Tatoeba-en-lt-train.tsv.gz
- WikiMatrix-en-lt-train.tsv.gz
- A custom [Scoris](https://scoris.lt) dataset translated using DeepL.
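
A merge along these lines can be reproduced with the `datasets` library. Below is a minimal sketch combining two of the sources listed above; it is illustrative only, since the exact recipe and filtering used by the authors are not published in this card:

```python
from datasets import load_dataset, concatenate_datasets

# Load two of the source corpora; the remaining sources can be
# appended the same way.
wmt = load_dataset("wmt19", "lt-en", split="train")
opus = load_dataset("opus100", "en-lt", split="train")

def normalize(example):
    # Reduce each row to plain "en"/"lt" string columns so both
    # corpora share an identical schema before concatenation.
    return {"en": example["translation"]["en"],
            "lt": example["translation"]["lt"]}

wmt = wmt.map(normalize, remove_columns=wmt.column_names)
opus = opus.map(normalize, remove_columns=opus.column_names)
merged = concatenate_datasets([wmt, opus])
print(merged)
```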
Basic clean-up and deduplication were applied when creating this set.
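
The exact clean-up procedure is not documented here; as an illustration only, a minimal pair-level deduplication with pandas could look like this:

```python
import pandas as pd

# Illustrative only: drop rows with an empty side and exact duplicate
# (en, lt) pairs. The authors' actual procedure is not published.
df = pd.DataFrame({
    "en": ["Hello", "Hello", "Good morning", ""],
    "lt": ["Labas", "Labas", "Labas rytas", "Tuščia"],
})
df = df[(df["en"].str.strip() != "") & (df["lt"].str.strip() != "")]
df = df.drop_duplicates(subset=["en", "lt"]).reset_index(drop=True)
print(df)
```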
This set can be used to train Lithuanian-English-Lithuanian MT Seq2Seq models; a preprocessing sketch for such training follows the usage example below.

Made by the [Scoris](https://scoris.lt) team.

You can use the dataset in the following way:
```python
from datasets import load_dataset
import pandas as pd

dataset_name = "scoris/en-lt-merged-data"

# Load the dataset
dataset = load_dataset(dataset_name)

# Display the first example from the training set
print("First training example:", dataset['train'][0])

# Display the first example from the validation set
print("First validation example:", dataset['validation'][0])

# Iterate through a few examples from the training set
for i, example in enumerate(dataset['train']):
    if i < 5:
        print(f"Training example {i}:", example)
    else:
        break

# For analysis with pandas, convert the training split to a DataFrame
# (note: this loads all ~5.4M rows into memory)
train_df = dataset['train'].to_pandas()
print(train_df.head())
```
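
The card does not prescribe a training setup. As one possible starting point, the sketch below tokenizes the pairs for seq2seq fine-tuning with 🤗 Transformers (a reasonably recent version is assumed); the `Helsinki-NLP/opus-mt-tc-big-en-lt` checkpoint is an assumption here, so substitute any seq2seq model you prefer:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Streaming avoids downloading the full ~720 MB corpus up front:
# dataset = load_dataset("scoris/en-lt-merged-data", split="train", streaming=True)
dataset = load_dataset("scoris/en-lt-merged-data", split="train")

# Assumed checkpoint, for illustration only.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-en-lt")

def preprocess(batch):
    # English as source, Lithuanian as target; truncate long sentences.
    sources = [pair["en"] for pair in batch["translation"]]
    targets = [pair["lt"] for pair in batch["translation"]]
    model_inputs = tokenizer(sources, truncation=True, max_length=128)
    labels = tokenizer(text_target=targets, truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset.column_names)
print(tokenized)
```

The tokenized dataset can then be passed to `Seq2SeqTrainer` together with a `DataCollatorForSeq2Seq`.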