---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: prompt_id
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 16496867
    num_examples: 9500
  - name: test
    num_bytes: 887460
    num_examples: 500
  download_size: 11045465
  dataset_size: 17384327
task_categories:
- text-generation
language:
- ar
pretty_name: لا روبوتات
license: cc-by-nc-4.0
---

### Dataset Card for "No Robots" 🙅‍♂️🤖

#### Summary

"No Robots" is a dataset consisting of 10,000 instructions and demonstrations, created by professional annotators. It was translated using the Google Cloud Platform Translation API. This dataset can be used to train language models to follow instructions more accurately (instruction-tuned fine-tuning - SFT). The "No Robots" dataset was created based on the dataset described in OpenAI's [InstructGPT](https://huggingface.co/papers/2203.02155) paper, and includes the following categories:

| Category          | Count |
|-------------------|------:|
| Creation          |  4560 |
| Open Questions    |  1240 |
| Brainstorming     |  1120 |
| Chatting          |   850 |
| Rewriting         |   660 |
| Summarization     |   420 |
| Programming       |   350 |
| Classification    |   350 |
| Closed Questions  |   260 |
| Extraction        |   190 |
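
The category distribution above can be reproduced with the `datasets` library. A minimal sketch follows, assuming the hypothetical repository id `2A2I/no_robots_arabic` (substitute this dataset's actual path on the Hub):

```python
from collections import Counter

from datasets import load_dataset

# NOTE: "2A2I/no_robots_arabic" is a hypothetical repository id;
# replace it with this dataset's actual path on the Hub.
dataset = load_dataset("2A2I/no_robots_arabic")

# Tally the (untranslated) category labels across both splits.
counts = Counter()
for split in ("train", "test"):
    counts.update(dataset[split]["category"])

for category, count in counts.most_common():
    print(f"{category}: {count}")
```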

#### Languages

This dataset is available in Arabic only. The original version in **English** can be found at [this link](https://huggingface.co/datasets/HuggingFaceH4/no_robots), and the **Turkish** version at [this link](https://huggingface.co/datasets/merve/tr-h4-norobots).

#### Data Fields

The columns are as follows:

* `prompt`: Specifies the instruction that the model should follow.
* `prompt_id`: A unique identifier.
* `messages`: A list of dictionaries, where each dictionary holds a message's text (key: `content`) and its sender (key: `role`).
* `category`: The task category. These labels were left untranslated from the original English dataset.
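
For illustration, a minimal sketch of inspecting these fields (again assuming the hypothetical repository id `2A2I/no_robots_arabic`):

```python
from datasets import load_dataset

# NOTE: "2A2I/no_robots_arabic" is a hypothetical repository id;
# replace it with this dataset's actual path on the Hub.
train = load_dataset("2A2I/no_robots_arabic", split="train")

example = train[0]
print(example["prompt_id"], "|", example["category"])

# `messages` is a list of {"role": ..., "content": ...} dictionaries,
# in the format expected by chat templates for SFT.
for message in example["messages"]:
    print(f'[{message["role"]}] {message["content"]}')
```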

#### Splits

|                  | train | test |
|------------------|------:|-----:|
| No Robots        |  9500 |  500 |

#### License

The dataset is available under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.

#### Citation Information

```
@misc{no_robots,
  author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
  title = {No Robots},
  year = {2023},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}
```