tien314 committed on
Commit
99635b6
1 Parent(s): b744878

Update BM25S model

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ corpus.jsonl filter=lfs diff=lfs merge=lfs -text
+ corpus.mmindex.json filter=lfs diff=lfs merge=lfs -text
+ vocab.index.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,157 @@
- ---
- license: mit
- ---
+ ---
+ language: en
+ library_name: bm25s
+ tags:
+ - bm25
+ - bm25s
+ - retrieval
+ - search
+ - lexical
+ ---
+
+ # BM25S Index
+
+ This is a BM25S index created with the [`bm25s` library](https://github.com/xhluca/bm25s) (version `0.2.5`), an ultra-fast implementation of BM25. It can be used for lexical retrieval tasks.
+
+ BM25S Related Links:
+
+ * 🏠[Homepage](https://bm25s.github.io)
+ * 💻[GitHub Repository](https://github.com/xhluca/bm25s)
+ * 🤗[Blog Post](https://huggingface.co/blog/xhluca/bm25s)
+ * 📝[Technical Report](https://arxiv.org/abs/2407.03618)
+
+
+ ## Installation
+
+ You can install the `bm25s` library with `pip`:
+
+ ```bash
+ pip install "bm25s==0.2.5"
+
+ # Include extra dependencies, such as the stemmer
+ pip install "bm25s[full]==0.2.5"
+
+ # For Hugging Face Hub usage
+ pip install huggingface_hub
+ ```
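+
+ To sanity-check that the pinned version was installed, a minimal sketch using only the standard library:
+
+ ```python
+ from importlib.metadata import version
+
+ # Should print 0.2.5 if the pinned install above succeeded
+ print(version("bm25s"))
+ ```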
+
+ ## Loading a `bm25s` index
+
+ You can use this index for information retrieval tasks. Here is an example:
+
+ ```python
+ import bm25s
+ from bm25s.hf import BM25HF
+
+ # Load the index
+ retriever = BM25HF.load_from_hub("tien314/bm25s-version2")
+
+ # You can now run retrieval
+ query = "a cat is a feline"
+ results = retriever.retrieve(bm25s.tokenize(query), k=3)
+ ```
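+
+ The call above returns a results object that can be unpacked into two arrays of shape `(n_queries, k)`; note that the first array holds document ids unless the corpus was loaded (e.g. with `load_corpus=True`). A minimal sketch of reading the top results:
+
+ ```python
+ # Unpack into (documents or doc ids, scores), each of shape (n_queries, k)
+ docs, scores = retriever.retrieve(bm25s.tokenize(query), k=3)
+
+ for i in range(docs.shape[1]):
+     # docs[0, i] is the i-th ranked result, scores[0, i] its BM25 score
+     print(f"Rank {i + 1} (score: {scores[0, i]:.2f}): {docs[0, i]}")
+ ```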
+
+ ## Saving a `bm25s` index
+
+ You can save a `bm25s` index to the Hugging Face Hub. Here is an example:
+
+ ```python
+ import bm25s
+ from bm25s.hf import BM25HF
+
+ corpus = [
+     "a cat is a feline and likes to purr",
+     "a dog is the human's best friend and loves to play",
+     "a bird is a beautiful animal that can fly",
+     "a fish is a creature that lives in water and swims",
+ ]
+
+ retriever = BM25HF(corpus=corpus)
+ retriever.index(bm25s.tokenize(corpus))
+
+ token = None  # Replace with a Hugging Face access token from your account settings
+ retriever.save_to_hub("tien314/bm25s-version2", token=token)
+ ```
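+
+ Alternatively, you can authenticate once with `huggingface_hub` instead of passing a token explicitly; this is a sketch assuming the cached credentials are picked up when `token` is omitted, as is usual for Hub uploads:
+
+ ```python
+ from huggingface_hub import login
+
+ # Prompts for a Hugging Face access token and caches it locally
+ login()
+
+ # With cached credentials, the token argument can be omitted
+ retriever.save_to_hub("tien314/bm25s-version2")
+ ```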
+
+ ## Advanced usage
+
+ You can use more advanced features of the BM25S library when calling `load_from_hub`:
+
+ ```python
+ # Load the corpus and index as memory-mapped files (mmap=True) to reduce memory usage
+ retriever = BM25HF.load_from_hub("tien314/bm25s-version2", load_corpus=True, mmap=True)
+
+ # Load a different branch/revision
+ retriever = BM25HF.load_from_hub("tien314/bm25s-version2", revision="main")
+
+ # Change the directory where the local files are downloaded
+ retriever = BM25HF.load_from_hub("tien314/bm25s-version2", local_dir="/path/to/dir")
+
+ # Load a private repository with a token
+ retriever = BM25HF.load_from_hub("tien314/bm25s-version2", token=token)
+ ```
+
+ ## Tokenizer
+
+ If you have saved a `Tokenizer` object with the index using the following approach:
+
+ ```python
+ from bm25s.hf import TokenizerHF
+
+ token = "your_hugging_face_token"
+ tokenizer = TokenizerHF(corpus=corpus, stopwords="english")
+ tokenizer.save_to_hub("tien314/bm25s-version2", token=token)
+
+ # ... and the stopwords too
+ tokenizer.save_stopwords_to_hub("tien314/bm25s-version2", token=token)
+ ```
+
+ Then, you can load the tokenizer using the following code:
+
+ ```python
+ from bm25s.hf import TokenizerHF
+
+ tokenizer = TokenizerHF(corpus=corpus, stopwords=[])
+ tokenizer.load_vocab_from_hub("tien314/bm25s-version2", token=token)
+ tokenizer.load_stopwords_from_hub("tien314/bm25s-version2", token=token)
+ ```
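+
+ Once loaded, the tokenizer can be used in place of `bm25s.tokenize` when querying the index. A minimal sketch, assuming the loaded tokenizer and retriever share the vocabulary saved in this repository (`update_vocab=False` keeps query tokens mapped onto that saved vocabulary):
+
+ ```python
+ # Tokenize the query against the saved vocabulary without adding new terms
+ query_tokens = tokenizer.tokenize(["a cat is a feline"], update_vocab=False)
+
+ # Retrieve the top-3 results from the loaded index
+ results = retriever.retrieve(query_tokens, k=3)
+ ```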
+
+
+ ## Stats
+
+ This index was built from a corpus with the following statistics:
+
+ | Statistic | Value |
+ | --- | --- |
+ | Number of documents | 9826150 |
+ | Number of tokens | 90771053 |
+ | Average tokens per document | 9.24 |
+
+ ## Parameters
+
+ The index was created with the following parameters:
+
+ | Parameter | Value |
+ | --- | --- |
+ | k1 | `1.5` |
+ | b | `0.75` |
+ | delta | `0.5` |
+ | method | `lucene` |
+ | IDF method | `lucene` |
+
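+ For reference, these parameters plug into the Lucene-style BM25 scoring function (a standard formulation; see the technical report for the exact variant implemented by `bm25s`, and note that `delta` only applies to the BM25L/BM25+ variants):
+
+ $$
+ \mathrm{score}(q, d) = \sum_{t \in q} \ln\!\left(1 + \frac{N - \mathrm{df}(t) + 0.5}{\mathrm{df}(t) + 0.5}\right) \cdot \frac{\mathrm{tf}(t, d)}{\mathrm{tf}(t, d) + k_1 \left(1 - b + b \cdot \frac{|d|}{\mathrm{avgdl}}\right)}
+ $$
+
+ where $N$ is the number of documents, $\mathrm{df}(t)$ the document frequency of term $t$, $\mathrm{tf}(t, d)$ its frequency in document $d$, $|d|$ the document length, and $\mathrm{avgdl}$ the average document length.
+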
+ ## Citation
+
+ To cite `bm25s`, please use the following BibTeX:
+
+ ```bibtex
+ @misc{lu_2024_bm25s,
+     title={BM25S: Orders of magnitude faster lexical search via eager sparse scoring},
+     author={Xing Han Lù},
+     year={2024},
+     eprint={2407.03618},
+     archivePrefix={arXiv},
+     primaryClass={cs.IR},
+     url={https://arxiv.org/abs/2407.03618},
+ }
+ ```
+
corpus.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc67e52d86fb0491bc0bbe3779c98a4f7ed410969e18ef8f3e6ecb925be2d1f1
+ size 904224061
corpus.mmindex.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70a643508f5acf2f522626054bf56b1c2de09f1667402c9d566fff72ecbe0393
+ size 97039423
data.csc.index.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c9a5b996421a169d4f2c4699a8736411d2aa66d5214b8e0c50dc45c79e208e6
+ size 363084340
indices.csc.index.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77212125429072a15bb878f8f3a85e1711ef394da51aae760e0744e40e6a08b8
+ size 363084340
indptr.csc.index.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0244fd6aed85ad7725b845e3c2d14193aa722822fc39091131dd82ba17af10cf
+ size 9116092
params.index.json ADDED
@@ -0,0 +1,12 @@
+ {
+     "k1": 1.5,
+     "b": 0.75,
+     "delta": 0.5,
+     "method": "lucene",
+     "idf_method": "lucene",
+     "dtype": "float32",
+     "int_dtype": "int32",
+     "num_docs": 9826150,
+     "version": "0.2.5",
+     "backend": "numpy"
+ }
vocab.index.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63b81555f7457d5a9310be43a7fce6818caaf2a0b261d0646aee1cdfaeb5e4b5
+ size 45333270