Voice49 committed
Commit 5b55716 · verified · 1 Parent(s): a050925

Update README.md

Files changed (1)
  1. README.md +9 -23
README.md CHANGED
@@ -27,12 +27,7 @@ Each example includes: the NLQ, database identifier, a canonical dataset id, the
 
 ---
 
-## What’s inside
-
-### Labels
-- **4-class:** `Table`, `Column`, `Value`, `O`
-
-### Fields per example
+## Fields
 - `question_id` *(int)* — Example id
 - `db_id` *(str)* — Database identifier
 - `dber_id` *(str)* — Canonical id linking back to the source file/split (BIRD, SPIDER)
@@ -52,14 +47,6 @@ Each example includes: the NLQ, database identifier, a canonical dataset id, the
 
 ## Splits
 
-### Split groups
-
-- Human
-  - Human_train (`human_train`)
-  - Human_test (`human_test`)
-- Synthetic
-  - Synthetic_train (`synthetic_train`)
-
 **Entity token prevalence is consistent across splits: ~29% entity vs. ~71% `O`.**
 
 | Split | # Examples |
@@ -110,10 +97,16 @@ Each example includes: the NLQ, database identifier, a canonical dataset id, the
 }
 ```
 
----
+<!-- ---
 
 ## Usage
 
+### Load from Hub
+```python
+from datasets import load_dataset
+ds = load_dataset("Voice49/dber")
+```
+
 ### Load JSONL files
 ```python
 from datasets import load_dataset
@@ -125,14 +118,7 @@ data_files = {
 }
 ds = load_dataset("json", data_files=data_files)
 print(ds)
-print(ds["human_train"][0])
-```
-
-### Load from the Hub
-```python
-from datasets import load_dataset
-ds = load_dataset("Voice49/dber")
-```
+``` -->
 
 ---
 
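For reference, a minimal sketch of exercising the Usage snippets that this commit comments out. The repo id `Voice49/dber`, the split names (`human_train`, `human_test`, `synthetic_train`), and the three documented fields all come from the README text in this diff; whether the Hub repo resolves its data files into those named splits without an explicit `data_files` mapping is an assumption.

```python
from datasets import load_dataset

# Load from the Hub, as in the README's (commented-out) "Load from Hub" snippet.
# Assumption: the repo exposes the splits named in the README
# (human_train, human_test, synthetic_train).
ds = load_dataset("Voice49/dber")
print(ds)

# Inspect the fields documented in this diff on one example; any other fields
# belong to parts of the README that these hunks do not show.
example = ds["human_train"][0]
for field in ("question_id", "db_id", "dber_id"):
    print(field, "->", example.get(field))
```

If the Hub repo does not define named splits, the fallback is the "Load JSONL files" route shown in the README, passing an explicit `data_files` dict to `load_dataset("json", ...)`.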