Datasets: Bitext
committed on commit ef6cecb · Parent(s): 9bea5fa
Update README.md

README.md CHANGED
@@ -8,15 +8,15 @@ This dataset can be used to train chatbots on Large Language Models such as GPT,
 
 The dataset is parallel to our Evaluation dataset (see [Customer Service Tagged Evaluation Dataset for Intent Detection](https://github.com/bitext/customer-support-intent-detection-evaluation-dataset)). Both datasets can be used in conjunction to first train and then evaluate the accuracy provided by training. The main difference between the two datasets is the number of utterances:
 
-- The training dataset contains 4,
+- The training dataset contains 4,269 utterances (around 200 per intent)
 - The evaluation dataset contains around 270,000 utterances (around 10,000 per intent)
 
 Both datasets share the rest of the specifications, so they can be used in conjunction. The training dataset has the following specs, shared with the evaluation dataset:
 
 - Customer Service domain
--
--
--
+- 10 categories or intent groups
+- 20 intents assigned to one of the 10 categories
+- 6 entity/slot types
 
 Each utterance is tagged with entities/slots when applicable. Additionally, each utterance is enriched with tags that indicate the type of language variation that the utterance expresses. Examples include:
 
@@ -32,7 +32,7 @@ These intents have been selected from Bitext's collection of 20 domain-specific
 
 Utterances and Linguistic Tags
 ------------------------------------
-The dataset contains 4,
+The dataset contains 4,269 training utterances, with 200 utterances per intent. It has been split into training (80%), validation (10%) and testing (10%) sets, preserving the distribution of intents and linguistic phenomena.
 
 The dataset also reflects commonly occurring linguistic phenomena of real-life chatbots, such as spelling mistakes, run-on words, punctuation errors…
 
@@ -48,6 +48,7 @@ Each entry in the dataset contains the following four fields:
 - end_offset: the ending position of the entity
 - category: the high-level semantic category for the intent
 - tags: different tags that reflect the types of language variations expressed in the utterance
+- response_type: identifier for tracking the composition and version of chatbot responses
 - response: an example expected response from the chatbot
 
 The dataset contains tags that reflect different language phenomena like colloquial or offensive language. So if an utterance for intent “cancel_order” contains the “COLLOQUIAL” tag, the utterance will express an informal language variation like: “can u cancel my order”
 
@@ -110,4 +111,4 @@ The entities covered by the dataset are:
 
 - refund_amount
 - Intents: get_refund, track_refund
 
-(c) Bitext Innovations,
+(c) Bitext Innovations, 2023
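The offset-based entity annotation described in the README (an entity's position given by its starting and ending character offsets) can be sketched with a single hypothetical record. Only `end_offset`, `category`, `tags`, `response_type` and `response` are named in this README excerpt; the keys `utterance`, `intent`, `entities`, `value` and `start_offset`, and all the values below, are assumptions for illustration:

```python
# Hypothetical record loosely following the fields described in the README.
# "utterance", "intent", "entities", "value" and "start_offset" are assumed
# names for illustration and may not match the real column layout.
entry = {
    "utterance": "can u cancel my order 4587345",
    "intent": "cancel_order",
    "category": "ORDER",     # assumed high-level category label
    "tags": ["COLLOQUIAL"],  # informal variation, per the README's example
    "entities": [
        {"value": "4587345", "start_offset": 22, "end_offset": 29}
    ],
    "response": "Your order 4587345 has been cancelled.",
}

def entity_text(utterance: str, start: int, end: int) -> str:
    """Recover an entity's surface form from its character offsets."""
    return utterance[start:end]

for ent in entry["entities"]:
    span = entity_text(entry["utterance"], ent["start_offset"], ent["end_offset"])
    assert span == ent["value"]  # the offsets align with the stored value
```

Slicing the utterance with `[start_offset:end_offset]` (end exclusive) recovers the annotated span, which is a common convention for offset-tagged NLU data.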
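The README's 80%/10%/10% split that preserves the distribution of intents can be sketched as a per-intent shuffle-and-slice. This is a hedged illustration rather than the authors' actual tooling, and the `"intent"` key is an assumed record field:

```python
import random
from collections import defaultdict

def stratified_split(entries, ratios=(0.8, 0.1, 0.1), seed=0):
    """80/10/10 train/validation/test split performed per intent, so the
    intent distribution is preserved (as the README describes).
    Assumes each entry is a dict with an "intent" key."""
    by_intent = defaultdict(list)
    for e in entries:
        by_intent[e["intent"]].append(e)

    rng = random.Random(seed)  # fixed seed for a reproducible split
    train, val, test = [], [], []
    for group in by_intent.values():
        rng.shuffle(group)
        n_train = int(len(group) * ratios[0])
        n_val = int(len(group) * ratios[1])
        train.extend(group[:n_train])
        val.extend(group[n_train:n_train + n_val])
        test.extend(group[n_train + n_val:])
    return train, val, test

# With 20 intents of ~200 utterances each (as in the training dataset),
# every intent contributes roughly 160/20/20 utterances to the three sets.
entries = [{"intent": f"intent_{i}", "utterance": f"utt_{i}_{j}"}
           for i in range(20) for j in range(200)]
train, val, test = stratified_split(entries)
```

Splitting within each intent group, rather than over the shuffled whole, is what keeps the per-intent proportions identical across the three sets.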