Commit 5efb24f (parent f01fb8e) by mdumandag: update readme

Files changed (1): README.md (+159, -0)

    path: "data/tr/*.parquet"
license: apache-2.0
---

# Wikipedia Embeddings with BGE-M3

This dataset contains embeddings from the
[June 2024 Wikipedia dump](https://dumps.wikimedia.org/wikidatawiki/20240601/)
for the 11 most popular languages.

The embeddings are generated with the multilingual
[BGE-M3](https://huggingface.co/BAAI/bge-m3) model.

The dataset consists of Wikipedia articles split into paragraphs,
and embedded with the aforementioned model.

To enhance search quality, the paragraphs are prefixed with their
respective article titles before embedding.

Additionally, paragraphs containing fewer than 100 characters,
which tend to have low information density, are excluded from the dataset.
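
As a rough illustration, the preprocessing described above could look like the
sketch below. This is not the exact script used to build the dataset: the
title/paragraph separator, the length check, and the use of normalized
embeddings are assumptions made for the example.

```python
from sentence_transformers import SentenceTransformer

# Illustrative sketch of the described preprocessing (assumptions noted above)
model = SentenceTransformer("BAAI/bge-m3")

title = "Alabama"
paragraphs = [
    # Long enough (>= 100 characters), so it would be kept
    "Alabama is a state in the Southeastern region of the United States. "
    "It is bordered by Tennessee, Georgia, Florida, the Gulf of Mexico, and Mississippi.",
    # Fewer than 100 characters, so it would be excluded
    "Alabama is a state.",
]

# Prefix each kept paragraph with its article title before embedding
texts = [f"{title}\n{paragraph}" for paragraph in paragraphs if len(paragraph) >= 100]

# BGE-M3 produces 1024-dimensional dense embeddings
embeddings = model.encode(texts, normalize_embeddings=True)
print(embeddings.shape)  # (1, 1024)
```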

The dataset contains approximately 144 million vector embeddings in total.

| Language   | Config Name | Embeddings  |
|------------|-------------|-------------|
| English    | en          | 47_018_430  |
| German     | de          | 20_213_669  |
| French     | fr          | 18_324_060  |
| Russian    | ru          | 13_618_886  |
| Spanish    | es          | 13_194_999  |
| Italian    | it          | 10_092_524  |
| Japanese   | ja          | 7_769_997   |
| Portuguese | pt          | 5_948_941   |
| Farsi      | fa          | 2_598_251   |
| Chinese    | zh          | 3_306_397   |
| Turkish    | tr          | 2_051_157   |
| **Total**  |             | 144_137_311 |

## Loading the Dataset

You can load the entire dataset for a language as follows.
Please note that for some languages, the download size may be quite large.

```python
from datasets import load_dataset

dataset = load_dataset("Upstash/wikipedia-2024-06-bge-m3", "en", split="train")
```

Alternatively, you can stream portions of the dataset as needed.

```python
from datasets import load_dataset

dataset = load_dataset("Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True)

for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]
    # Do some work
    break
```

## Using the Dataset

One potential use case for the dataset is enabling similarity search
by integrating it with a vector database.

In fact, we have developed a vector database that allows you to search
through the Wikipedia articles. Additionally, it includes a
[RAG (Retrieval-Augmented Generation)](https://github.com/upstash/rag-chat) chatbot,
so you can converse with an assistant that is grounded in the dataset.

For more details, see this [blog post](https://upstash.com/blog/indexing-wikipedia),
and be sure to check out the
[search engine and chatbot](https://wikipedia-semantic-search.vercel.app) yourself.

For reference, here is a rough sketch of how to implement semantic search
over this dataset with Upstash Vector.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from upstash_vector import Index

# Connect to an Upstash Vector index created with dimension 1024
# and dot product as the similarity function.
index = Index(
    url="https://upward-lion-77104-eu1-vector.upstash.io",
    token="ABUFMHVwd2FyZC1saW9uLTc3MTA0LWV1MWFkbWluWVdSaE5HRm1NREl0TWpObU15MDBZbUl6TFdKaVpUWXRNRGMwTWpVd01qQXpaR1Jq",
)

vectors = []
batch_size = 200

dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

for data in dataset:
    data_id = data["id"]
    url = data["url"]
    title = data["title"]
    text = data["text"]
    embedding = data["embedding"]

    metadata = {
        "url": url,
        "title": title,
    }

    vector = (
        data_id,    # Unique vector id
        embedding,  # Vector embedding
        metadata,   # Optional, JSON-like metadata
        text,       # Optional, unstructured text data
    )
    vectors.append(vector)

    if len(vectors) == batch_size:
        break

# Upsert the collected batch of embeddings into Upstash Vector
index.upsert(
    vectors=vectors,
    namespace="en",
)

# Create the query vector
transformer = SentenceTransformer(
    "BAAI/bge-m3",
    device="cuda",
    revision="babcf60cae0a1f438d7ade582983d4ba462303c2",
)

query = "Which state has the nickname Yellowhammer State?"
query_vector = transformer.encode(
    sentences=query,
    show_progress_bar=False,
    normalize_embeddings=True,
)

results = index.query(
    vector=query_vector,
    top_k=2,
    include_metadata=True,
    include_data=True,
    namespace="en",
)

# Query results are sorted in descending order of similarity
for result in results:
    print(result.id)        # Unique vector id
    print(result.score)     # Similarity score to the query vector
    print(result.metadata)  # Metadata associated with the vector
    print(result.data)      # Unstructured data associated with the vector
    print("---")
```
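
If you just want to experiment locally without a vector database, you can also
run a small, brute-force similarity search over a streamed sample. The sketch
below is not part of any official tooling; the sample size is arbitrary, and it
assumes the stored embeddings can be ranked with a dot product, matching the
similarity function used in the example above.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Stream a small sample of records; a vector database is the practical
# choice at the full ~144 million vector scale.
dataset = load_dataset(
    "Upstash/wikipedia-2024-06-bge-m3", "en", split="train", streaming=True
)

records = []
for data in dataset:
    records.append(data)
    if len(records) == 5000:  # arbitrary sample size for the sketch
        break

embeddings = np.array([record["embedding"] for record in records], dtype=np.float32)

# Embed the query with the same model used to build the dataset
transformer = SentenceTransformer("BAAI/bge-m3")
query = "Which state has the nickname Yellowhammer State?"
query_vector = transformer.encode(query, normalize_embeddings=True)

# Rank the sampled paragraphs by dot-product similarity to the query
scores = embeddings @ query_vector
for i in np.argsort(scores)[::-1][:2]:
    print(records[i]["title"], scores[i])
    print(records[i]["text"])
    print("---")
```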