parquet-converter committed
Commit: 26d61bb
Parent: 9f00385

Update parquet files
.gitattributes CHANGED
@@ -14,3 +14,4 @@
 *.pb filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
+flax-sentence-embeddings--paws-jsonl/json-train.parquet filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,20 +0,0 @@
-# Introduction
-This dataset is a jsonl version of the PAWS dataset from https://github.com/google-research-datasets/paws. It contains only the `PAWS-Wiki Labeled (Final)` and
-`PAWS-Wiki Labeled (Swap-only)` training sections of the original PAWS dataset. Duplicate entries are removed.
-
-Each line contains a dict in one of the following formats:
-
-`{"guid": <id>, "texts": [anchor, positive]}` or
-
-`{"guid": <id>, "texts": [anchor, positive, negative]}`
-
-positives_negatives.jsonl.gz: 24,723
-
-positives_only.jsonl.gz: 13,487
-
-**Total**: 38,210
-
-## Dataset summary
-[**PAWS: Paraphrase Adversaries from Word Scrambling**](https://github.com/google-research-datasets/paws)
-
-This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that highlight the importance of modeling structure, context, and word order for paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other on the Quora Question Pairs (QQP) dataset.
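For reference, records in the two-element and three-element formats described by the deleted README can be parsed with a short sketch like the following. The sample lines below are hypothetical illustrations of that format, not actual dataset rows; the real files were gzipped jsonl, so in practice the lines would come from `gzip.open(..., "rt")`.

```python
import json

# Hypothetical sample lines in the format the README describes:
#   {"guid": <id>, "texts": [anchor, positive]}
#   {"guid": <id>, "texts": [anchor, positive, negative]}
sample_lines = [
    '{"guid": 0, "texts": ["anchor sentence", "paraphrase"]}',
    '{"guid": 1, "texts": ["anchor sentence", "paraphrase", "non-paraphrase"]}',
]

def parse_records(lines):
    """Yield (guid, anchor, positive, negative) tuples; negative is None
    for two-element "positives only" records."""
    for line in lines:
        record = json.loads(line)
        texts = record["texts"]
        anchor, positive = texts[0], texts[1]
        negative = texts[2] if len(texts) > 2 else None
        yield record["guid"], anchor, positive, negative

records = list(parse_records(sample_lines))
```

This mirrors the README's split into `positives_only` (anchor/positive pairs) and `positives_negatives` (triplets with a hard negative), which is the triplet layout commonly used for sentence-embedding training.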
flax-sentence-embeddings--paws-jsonl/json-train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42709a290d78b31435ada0e52615e654bde0c162dddb18977ebc4a4e5e710716
+size 3084288
positives_negatives.jsonl.gz DELETED
Binary file (677 kB)
 
positives_only.jsonl.gz DELETED
Binary file (1.23 MB)