parquet-converter committed on
Commit 6d01f21
1 Parent(s): 3587946

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,3 +0,0 @@
- # BBC News Topic Classification
-
- Dataset on [BBC News Topic Classification](https://www.kaggle.com/yufengdev/bbc-text-categorization/data): 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech.
 
SetFit--bbc-news/json-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:544128b846eec27f62bdcf29d48d42639c298a2bcb7dcf6f09807c64c48f796e
+ size 1365469
SetFit--bbc-news/json-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e87b3dace0e47a9ff50d13303bc93924ad7b3a3e4733cb05dc207aba4e31185
+ size 1727169
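
With the splits now stored as Parquet, the dataset can be read straight from the Hub. A minimal loading sketch, assuming the repository id SetFit/bbc-news and the Hugging Face datasets library (the id is inferred from the SetFit--bbc-news directory above and may need adjusting):

    # Load the converted splits; columns follow prepare.py: text, label, label_text.
    from datasets import load_dataset

    ds = load_dataset("SetFit/bbc-news")
    print(ds["train"][0]["label_text"], ds["train"][0]["text"][:80])
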
bbc-text.csv DELETED
The diff for this file is too large to render. See raw diff
 
prepare.py DELETED
@@ -1,36 +0,0 @@
- import pandas as pd
- from collections import Counter
- import json
- import random
-
-
- df = pd.read_csv("bbc-text.csv")
- df.fillna('', inplace=True)
- print(df)
-
- label2id = {label: idx for idx, label in enumerate(df['category'].unique())}
-
- rows = [{'text': row['text'].strip(),
-          'label': label2id[row['category']],
-          'label_text': row['category'],
-          } for idx, row in df.iterrows()]
-
- random.seed(42)
- random.shuffle(rows)
-
- num_test = 1000
- splits = {'test': rows[0:num_test], 'train': rows[num_test:]}
-
- print("Train:", len(splits['train']))
- print("Test:", len(splits['test']))
-
- num_labels = Counter()
-
- for row in splits['test']:
-     num_labels[row['label']] += 1
- print(num_labels)
-
- for split in ['train', 'test']:
-     with open(f'{split}.jsonl', 'w') as fOut:
-         for row in splits[split]:
-             fOut.write(json.dumps(row)+"\n")
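
The Parquet files added above replace the JSONL splits written by this script. A rough sketch of the equivalent conversion, assuming pandas with a Parquet engine such as pyarrow; the actual parquet-converter bot may proceed differently:

    # Convert each JSONL split produced by prepare.py into a Parquet file
    # matching the names added in this commit.
    import os
    import pandas as pd

    os.makedirs("SetFit--bbc-news", exist_ok=True)
    for split in ["train", "test"]:
        df = pd.read_json(f"{split}.jsonl", lines=True)
        df.to_parquet(f"SetFit--bbc-news/json-{split}.parquet", index=False)
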
 
test.jsonl DELETED
The diff for this file is too large to render. See raw diff
 
train.jsonl DELETED
The diff for this file is too large to render. See raw diff