frascuchon (HF staff) committed
Commit e267818
1 parent: cd6761b

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+24, -132)
README.md CHANGED
@@ -4,136 +4,6 @@ tags:
 - rlfh
 - argilla
 - human-feedback
-dataset_info:
-  features:
-  - name: prompt
-    dtype: string
-    id: field
-  - name: response
-    dtype: string
-    id: field
-  - name: relevant
-    list:
-    - name: user_id
-      dtype: string
-      id: question
-    - name: value
-      dtype: string
-      id: suggestion
-    - name: status
-      dtype: string
-      id: question
-  - name: relevant-suggestion
-    dtype: string
-    id: suggestion
-  - name: relevant-suggestion-metadata
-    struct:
-    - name: type
-      dtype: string
-      id: suggestion-metadata
-    - name: score
-      dtype: float32
-      id: suggestion-metadata
-    - name: agent
-      dtype: string
-      id: suggestion-metadata
-  - name: content_class
-    list:
-    - name: user_id
-      dtype: string
-      id: question
-    - name: value
-      sequence: string
-      id: suggestion
-    - name: status
-      dtype: string
-      id: question
-  - name: content_class-suggestion
-    sequence: string
-    id: suggestion
-  - name: content_class-suggestion-metadata
-    struct:
-    - name: type
-      dtype: string
-      id: suggestion-metadata
-    - name: score
-      dtype: float32
-      id: suggestion-metadata
-    - name: agent
-      dtype: string
-      id: suggestion-metadata
-  - name: rating
-    list:
-    - name: user_id
-      dtype: string
-      id: question
-    - name: value
-      dtype: int32
-      id: suggestion
-    - name: status
-      dtype: string
-      id: question
-  - name: rating-suggestion
-    dtype: int32
-    id: suggestion
-  - name: rating-suggestion-metadata
-    struct:
-    - name: type
-      dtype: string
-      id: suggestion-metadata
-    - name: score
-      dtype: float32
-      id: suggestion-metadata
-    - name: agent
-      dtype: string
-      id: suggestion-metadata
-  - name: corrected-text
-    list:
-    - name: user_id
-      dtype: string
-      id: question
-    - name: value
-      dtype: string
-      id: suggestion
-    - name: status
-      dtype: string
-      id: question
-  - name: corrected-text-suggestion
-    dtype: string
-    id: suggestion
-  - name: corrected-text-suggestion-metadata
-    struct:
-    - name: type
-      dtype: string
-      id: suggestion-metadata
-    - name: score
-      dtype: float32
-      id: suggestion-metadata
-    - name: agent
-      dtype: string
-      id: suggestion-metadata
-  - name: external_id
-    dtype: string
-    id: external_id
-  - name: metadata
-    dtype: string
-    id: metadata
-  - name: vectors
-    struct:
-    - name: prompt
-      sequence: float32
-      id: vectors
-  splits:
-  - name: train
-    num_bytes: 6458850
-    num_examples: 5590
-  download_size: 3574600
-  dataset_size: 6458850
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
 ---
 
 # Dataset Card for oasst_response_quality
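The removed `dataset_info` block above describes the flattened export schema: each question contributes a list of per-user responses plus `*-suggestion` and `*-suggestion-metadata` columns, alongside the `prompt`/`response` fields and a `vectors` struct. As a rough illustration, a record shaped like that schema can be sketched in plain Python (all values below are hypothetical):

```python
# Hypothetical record following the flattened schema removed above:
# per-question response lists, suggestion columns, and a vectors struct.
record = {
    "prompt": "What is contrastive learning?",
    "response": "A way to pull similar pairs together and push dissimilar ones apart.",
    "relevant": [{"user_id": "u-1", "value": "Yes", "status": "submitted"}],
    "relevant-suggestion": "Yes",
    "relevant-suggestion-metadata": {"type": "model", "score": 0.9, "agent": "sample-agent"},
    "rating": [{"user_id": "u-1", "value": 7, "status": "submitted"}],
    "rating-suggestion": 7,
    "external_id": None,
    "metadata": "{}",
    "vectors": {"prompt": [0.1, 0.2]},
}

# Light checks mirroring the declared dtypes.
assert isinstance(record["relevant"], list)                            # list of per-user answers
assert isinstance(record["rating-suggestion"], int)                    # dtype: int32
assert all(isinstance(x, float) for x in record["vectors"]["prompt"])  # sequence: float32
```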
@@ -219,6 +89,13 @@ The **suggestions** are human or machine generated recommendations for each ques
 The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give annotators extra context, or to record details about the record itself, such as a link to the original source, the author, or the date. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
 
 
 | Metadata Name | Title | Type | Values | Visible for Annotators |
 | ------------- | ----- | ---- | ------ | ---------------------- |
@@ -240,7 +117,12 @@ An example of a dataset instance in Argilla looks as follows:
     "metadata": {},
     "responses": [],
     "suggestions": [],
-    "vectors": {}
 }
 ```
 
@@ -279,7 +161,13 @@ While the same record in HuggingFace `datasets` looks as follows:
         "score": null,
         "type": null
     },
-    "response": "Sure! Let\u0027s say you want to build a model which can distinguish between images of cats and dogs. You gather your dataset, consisting of many cat and dog pictures. Then you put them through a neural net of your choice, which produces some representation for each image, a sequence of numbers like [0.123, 0.045, 0.334, ...]. The problem is, if your model is unfamiliar with cat and dog images, these representations will be quite random. At one time a cat and a dog picture could have very similar representations (their numbers would be close to each other), while at others two cat images may be represented far apart. In simple terms, the model wouldn\u0027t be able to tell cats and dogs apart. This is where contrastive learning comes in.\n\nThe point of contrastive learning is to take pairs of samples (in this case images of cats and dogs), then train the model to \"pull\" representations of similar pairs (cat-cat or dog-dog) closer to each other and \"push\" representations of different pairs (cat-dog) apart. After doing this for a sufficient number of steps, your model will be able to produce unique, reliable representations for cats and dogs, in essence tell them apart.\n\nThis method is not limited to images, you can typically use it with any dataset that has similar and dissimilar data points."
 }
 ```
 
@@ -307,6 +195,10 @@ Among the dataset fields, we differentiate between the following:
 * (optional) **corrected-text-suggestion** is of type `QuestionTypes.text`.
 
 
 Additionally, there are two more optional fields:
 The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give annotators extra context, or to record details about the record itself, such as a link to the original source, the author, or the date. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
 
 
+**✨ NEW** The **vectors** are columns that each contain a 1-dimensional floating-point vector, whose length is constrained to the dimensions pre-defined in the **vectors_settings** when the vectors are configured within the dataset. The **vectors** are optional and are identified by the pre-defined vector name in the dataset configuration file in `argilla.yaml`.
+
+| Vector Name | Title | Dimensions |
+|-------------|-------|------------|
+| prompt | Prompt | [1, 2] |
+
 
 | Metadata Name | Title | Type | Values | Visible for Annotators |
 | ------------- | ----- | ---- | ------ | ---------------------- |
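The dimension constraint described above can be sketched as a simple length check. The `vectors_settings` mapping and the helper below are hypothetical, with an illustrative dimension of 2 for the `prompt` vector:

```python
# Hypothetical settings: vector name -> number of dimensions it must have.
vectors_settings = {"prompt": 2}

def validate_vector(name, values):
    """Reject a vector whose length does not match the configured dimensions."""
    expected = vectors_settings[name]
    if len(values) != expected:
        raise ValueError(f"vector {name!r} has {len(values)} dims, expected {expected}")
    return values

validate_vector("prompt", [1.0, 2.0])  # OK: matches the configured 2 dimensions
```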
 
     "metadata": {},
     "responses": [],
     "suggestions": [],
+    "vectors": {
+        "prompt": [
+            1,
+            2
+        ]
+    }
 }
 ```
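Since the record stores vectors as a plain name-to-list struct, reading them back out is ordinary dictionary access. A sketch on a trimmed-down version of the record above (values hypothetical):

```python
import json

# Trimmed record in the shape shown above (values hypothetical).
raw = '{"metadata": {}, "responses": [], "suggestions": [], "vectors": {"prompt": [1, 2]}}'
record = json.loads(raw)

# Absent vectors are simply missing keys, so collect whatever is present.
vectors = {name: [float(x) for x in vec] for name, vec in record.get("vectors", {}).items()}
assert vectors == {"prompt": [1.0, 2.0]}
```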
 
 
         "score": null,
         "type": null
     },
+    "response": "Sure! Let\u0027s say you want to build a model which can distinguish between images of cats and dogs. You gather your dataset, consisting of many cat and dog pictures. Then you put them through a neural net of your choice, which produces some representation for each image, a sequence of numbers like [0.123, 0.045, 0.334, ...]. The problem is, if your model is unfamiliar with cat and dog images, these representations will be quite random. At one time a cat and a dog picture could have very similar representations (their numbers would be close to each other), while at others two cat images may be represented far apart. In simple terms, the model wouldn\u0027t be able to tell cats and dogs apart. This is where contrastive learning comes in.\n\nThe point of contrastive learning is to take pairs of samples (in this case images of cats and dogs), then train the model to \"pull\" representations of similar pairs (cat-cat or dog-dog) closer to each other and \"push\" representations of different pairs (cat-dog) apart. After doing this for a sufficient number of steps, your model will be able to produce unique, reliable representations for cats and dogs, in essence tell them apart.\n\nThis method is not limited to images, you can typically use it with any dataset that has similar and dissimilar data points.",
+    "vectors": {
+        "prompt": [
+            1.0,
+            2.0
+        ]
+    }
 }
 ```
 
 
 * (optional) **corrected-text-suggestion** is of type `QuestionTypes.text`.
 
 
+* **✨ NEW** **Vectors**: As of Argilla 1.19.0, vectors are included to support similarity search, i.e. exploring records similar to a given record via vector search powered by the configured search engine. Vectors are optional, are not shown in the UI, and are uploaded and used internally; they must match the dimensions previously defined in their settings.
+
+* (optional) **prompt** is of type `float32` and has a dimension of (1, `2`).
 
 Additionally, there are two more optional fields:
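To make the similarity-search idea concrete, here is a pure-Python cosine-similarity ranking over per-record `prompt` vectors. This is only a sketch: in Argilla the search runs server-side in the configured search engine, and the record vectors below are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical `prompt` vectors for three records.
index = {
    "rec-1": [1.0, 0.0],
    "rec-2": [0.9, 0.1],
    "rec-3": [0.0, 1.0],
}

query = [1.0, 0.05]
ranked = sorted(index, key=lambda rid: cosine(index[rid], query), reverse=True)
# rec-1 and rec-2 point roughly the same way as the query; rec-3 is near-orthogonal.
```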