aritrasen committed
Commit e84ed2e
1 Parent(s): f37fd1f

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 768,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
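The pooling config above selects CLS-token pooling. As a rough sketch of what that means in practice (not part of the commit; the base-model name is taken from the model card below and the sentence is only illustrative), the sentence embedding is the hidden state of the first `[CLS]` token, which the separate `Normalize` module then L2-normalizes:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative only: the base encoder named in the model card below.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")

batch = tokenizer(["An example sentence."], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state   # shape: [batch, seq_len, 768]

# pooling_mode_cls_token: keep only the first ([CLS]) token's hidden state.
embedding = hidden[:, 0]                          # shape: [batch, 768]
# The Normalize() module then L2-normalizes the sentence embedding.
embedding = torch.nn.functional.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```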
README.md ADDED
@@ -0,0 +1,439 @@
+ ---
+ base_model: BAAI/bge-base-en-v1.5
+ datasets: []
+ language: []
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:21
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: '| Config | Model |
+ Epochs | Max seq length | Micro batch size | Machine | Training runtime | Cost
+ | Peak memory | Validation loss | Validation perplexity | Multitask score (MMLU)
+ |
+
+ | --------------------------------- | ---------------------- | ------ | --------------
+ | ---------------- | ------- | ---------------- | ---- | ----------- | ---------------
+ | --------------------- | --------------- |
+
+ | falcon-7b/lora.yaml | falcon-7b | 4 | 512 |
+ 1 | 1xA10G | 24.84 min | $0.7 | 16.69 GB | 0.945 |
+ 2.573 | 26.2% |
+
+ | falcon-7b/lora.yaml | falcon-7b | 4 | 512 |
+ 1 | 4xA10G | 24.94 min | $2.0 | 16.69 GB | 0.945 |
+ 2.573 | 26.4% |
+
+ | falcon-7b/qlora.yaml | falcon-7b | 4 | 512 |
+ 1 | 1xA10G | 50.85 min | $1.5 | 9.44 GB | 0.993 |
+ 2.699 | 26.3% |
+
+ | falcon-7b/qlora.yaml | falcon-7b | 4 | 512 |
+ 1 | 4xA10G | 50.88 min | $4.1 | 9.44 GB | 0.993 |
+ 2.699 | 26.3% |
+
+ | | | | | | | | | | | | |
+
+ | gemma-2b/full.yaml | gemma-2b | 1 | 512 |
+ 1 | 4xA10G | 14.06 min | $1.1 | 17.43 GB | 1.021 |
+ 2.777 | 32.4% |
+
+ | gemma-2b/lora.yaml | gemma-2b | 2 | 512 |
+ 2 | 1xA10G | 9.41 min | $0.3 | 12.62 GB | 0.981 |
+ 2.666 | 34.4% |'
+ sentences:
+ - 'What is the command to download the pretrained model weights for the Llama-2-7b-hf
+ model?
+
+ '
+ - 'What is the version of nvfuser\_cu121 used?
+
+ '
+ - 'What is the training runtime for the gemma-2b model with the lora configuration?
+
+ '
+ - source_sentence: "# Serve and Deploy LLMs\n\nThis document shows how you can serve\
+ \ a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates\
+ \ how we can set up an inference server for a phi-2 LLM using `litgpt serve` that\
+ \ is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference\
+ \ server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your\
+ \ own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start\
+ \ the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\
+ \n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including\
+ \ the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2:\
+ \ Query the inference server\n\nYou can now send requests to the inference server\
+ \ you started in step 2. For example, in a new Python session, we can send requests\
+ \ to the inference server as follows:\n\n\n```python\nimport requests, json\n\n\
+ response = requests.post(\n \"http://127.0.0.1:8000/predict\", \n json={\"\
+ prompt\": \"Fix typos in the following sentence: Exampel input\"}\n)\n\nprint(response.json()[\"\
+ output\"])\n```\n\nExecuting the code above prints the following output:\n\n```\n\
+ Instruct: Fix typos in the following sentence: Exampel input\nOutput: Example\
+ \ input.\n```"
+ sentences:
+ - 'What command do I use to convert the finetuned model to a HF transformer model?
+
+ '
+ - 'How do you merge LoRA weights into the original model''s checkpoint?
+
+ '
+ - 'How can I start an inference server for a phi-2 LLM using litgpt serve?
+
+ '
+ ---
+
+ # SentenceTransformer based on BAAI/bge-base-en-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
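A quick way to confirm this architecture after loading the published checkpoint (a minimal sketch; the repo id is the one used in the Usage section below):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")
print(model)                                     # Transformer -> Pooling -> Normalize
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
```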
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")
+ # Run inference
+ sentences = [
+ '# Serve and Deploy LLMs\n\nThis document shows how you can serve a LitGPT for deployment. \n\n&nbsp;\n## Serve an LLM\n\nThis section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.\n\n\n&nbsp;\n## Step 1: Start the inference server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.\n\n\n&nbsp;\n## Step 2: Query the inference server\n\nYou can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:\n\n\n```python\nimport requests, json\n\nresponse = requests.post(\n "http://127.0.0.1:8000/predict", \n json={"prompt": "Fix typos in the following sentence: Exampel input"}\n)\n\nprint(response.json()["output"])\n```\n\nExecuting the code above prints the following output:\n\n```\nInstruct: Fix typos in the following sentence: Exampel input\nOutput: Example input.\n```',
+ 'How can I start an inference server for a phi-2 LLM using litgpt serve?\n',
+ 'What command do I use to convert the finetuned model to a HF transformer model?\n',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
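Since the card lists semantic search among the intended uses, here is a small retrieval sketch built on the same API (the documents and query below are made up purely for illustration):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")

documents = [
    "litgpt serve starts a minimal, scalable inference server for a downloaded checkpoint.",
    "LoRA finetuning updates a small set of low-rank adapter weights instead of the full model.",
    "The LM Evaluation Harness needs the tokenizer files copied next to the converted checkpoint.",
]
query = "How do I spin up an inference server?"

doc_embeddings = model.encode(documents)
query_embedding = model.encode(query)

# Rank documents by cosine similarity to the query (embeddings are already L2-normalized).
scores = model.similarity(query_embedding, doc_embeddings)  # shape: [1, 3]
best = scores.argmax().item()
print(scores)
print(documents[best])
```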
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+
+ * Size: 21 training samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | anchor | positive |
+ |:--------|:--------|:---------|
+ | type | string | string |
+ | details | <ul><li>min: 51 tokens</li><li>mean: 424.62 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 17.19 tokens</li><li>max: 26 tokens</li></ul> |
+ * Samples:
+ | anchor | positive |
+ |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+ | <code>| 7 B | Llama 2 | bnb.nf4 | 1 | 4,194,304 | 14.14 GB | 3.68 min |<br>| 7 B | Llama 2 | bnb.nf4-dq | 1 | 4,194,304 | 13.84 GB | 3.83 min |<br>| 7 B | Llama 2 | None | 2 | 4,194,304 | 29.07 GB | 2.52 min |<br>| 7 B | Llama 2 | None | 4 | 4,194,304 | OOM | - |<br>| | | | | | | |<br>| 13 B | Llama 2 | None | 1 | 6,553,600 | 38.12 GB | 3.19 min |<br>| 13 B | Llama 2 | bnb.nf4 | 1 | 6,553,600 | 23.14 GB | 6.38 min |<br>| 13 B | Llama 2 | bnb.nf4-dq | 1 | 6,553,600 | 22.55 GB | 6.55 min |<br>| 13 B | Llama 2 | None | 2 | 6,553,600 | OOM | - |<br>| 13 B | Llama 2 | None | 4 | 6,553,600 | OOM | - |<br>| | | | | | | |<br>| 40 B | Falcon | None | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4 | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4-dq | 1 | 12,042,240 | OOM | - |</code> | <code>What is the memory usage of Llama 2 with 7B when using bnb.nf4-dq?<br></code> |
+ | <code>1. Follow the instructions above to load the model into a Hugging Face transformers model.<br><br>2. Create a `model.safetensor` file:<br><br>```python<br>model.save_pretrained("out/hf-tinyllama/converted/")<br>```<br><br>3. Copy the tokenizer files into the model-containing directory:<br><br>```bash<br>cp checkpoints/$repo_id/tokenizer* out/hf-tinyllama/converted<br>```<br><br>4. Run the evaluation harness, for example:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained=out/hf-tinyllama/converted \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```</code> | <code>What is the command to run the evaluation harness?<br></code> |
+ | <code>The LM Evaluation Harness requires a tokenizer to be present in the model checkpoint folder, which we can copy from the original download checkpoint:<br><br>```bash<br># Copy the tokenizer needed by the Eval Harness<br>cp checkpoints/microsoft/phi-2/tokenizer*<br>out/converted_model<br>```<br><br>Then, we can run the Evaluation Harness as follows:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained="out/converted_model" \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```<br><br>&nbsp;<br><br>> [!TIP]<br>> The Evaluation Harness tasks above are those used in Open LLM Leaderboard. You can find a list all supported tasks [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/docs/task_table.md).<br><br><br><br>&nbsp;<br>**More information and additional resources**<br><br>- [tutorials/convert_lit_models](./convert_lit_models.md): Tutorial on converting LitGPT weights<br><br><br><br>&nbsp;<br><br>## Get involved!<br><br>We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the [GitHub Issue](https://github.com/Lightning-AI/litgpt/issues) tracker.<br><br>We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.<br><br>&nbsp;<br><br>> [!TIP]<br>> Unsure about contributing? Check out our [How to Contribute to LitGPT](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/) guide.<br><br>&nbsp;<br><br>If you have general questions about building with LitGPT, please [join our Discord](https://discord.gg/VptPCZkGNa).</code> | <code>What is the command to run the Evaluation Harness?<br></code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
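For readers unfamiliar with this loss, a compact sketch of the idea behind `MultipleNegativesRankingLoss` with the `scale` and `cos_sim` settings above (an illustration of the mechanism, not the library's implementation): every other positive in the batch serves as an in-batch negative, and each anchor is trained to rank its own positive highest.

```python
import torch
import torch.nn.functional as F

def in_batch_ranking_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Cosine similarity of every anchor against every positive in the batch.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    scores = a @ p.T * scale                # shape: [batch, batch]
    # The true pair sits on the diagonal; all other columns act as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy usage with random stand-in embeddings:
loss = in_batch_ranking_loss(torch.randn(5, 768), torch.randn(5, 768))
print(loss)
```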
+
+ ### Evaluation Dataset
+
+ #### Unnamed Dataset
+
+
+ * Size: 10 evaluation samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | anchor | positive |
+ |:--------|:--------|:---------|
+ | type | string | string |
+ | details | <ul><li>min: 273 tokens</li><li>mean: 460.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.1 tokens</li><li>max: 34 tokens</li></ul> |
+ * Samples:
+ | anchor | positive |
+ |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
+ | <code>(this table was sourced from the author's [README](https://github.com/jzhang38/TinyLlama/))<br><br>&nbsp;<br>## Download datasets<br><br>You can download the data using git lfs:<br><br>```bash<br># Make sure you have git-lfs installed (https://git-lfs.com):<br>sudo apt install git-lfs<br>```<br><br>```bash<br>git clone https://huggingface.co/datasets/cerebras/slimpajama-627b data/slimpajama-raw<br>git clone https://huggingface.co/datasets/bigcode/starcoderdata data/starcoderdata-raw<br>```<br><br>Around 1.2 TB of disk space is required to store both datasets.<br><br>&nbsp;<br>## Prepare the datasets for training<br><br>In order to start pretraining litgpt on it, you need to read, tokenize, and write the data in binary chunks. This will leverage the `litdata` optimization pipeline and streaming dataset.<br><br>First, install additional dependencies for preprocessing:<br><br>```bash<br>pip install '.[all]'<br>```<br><br>You will need to have the tokenizer config available:<br><br>```bash<br>litgpt download \<br> --repo_id meta-llama/Llama-2-7b-hf \<br> --access_token your_hf_token \<br> --tokenizer_only true<br>```<br><br>Then, run the preprocessing script for each dataset and split.<br>You will require **1.1 TB** of disk space for Starcoder and **2.5** TB of space for the SlimPajama dataset.<br><br>**Starcoder:**<br><br>```bash<br>python litgpt/data/prepare_starcoder.py \<br> --input_dir data/starcoderdata-raw \<br> --output_dir data/starcoder \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```<br><br>**SlimPajama:**<br><br>```bash<br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/validation \<br> --output_dir data/slimpajama/val \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/test \<br> --output_dir data/slimpajama/test \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/train \<br> --output_dir data/slimpajama/train \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```</code> | <code>How much disk space is required to store the SlimPajama dataset?<br></code> |
+ | <code># Serve and Deploy LLMs<br><br>This document shows how you can serve a LitGPT for deployment. <br><br>&nbsp;<br>## Serve an LLM<br><br>This section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.<br><br><br>&nbsp;<br>## Step 1: Start the inference server<br><br><br>```bash<br># 1) Download a pretrained model (alternatively, use your own finetuned model)<br>litgpt download --repo_id microsoft/phi-2<br><br># 2) Start the server<br>litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2<br>```<br><br>> [!TIP]<br>> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.<br><br><br>&nbsp;<br>## Step 2: Query the inference server<br><br>You can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:<br><br><br>```python<br>import requests, json<br><br>response = requests.post(<br> "http://127.0.0.1:8000/predict", <br> json={"prompt": "Fix typos in the following sentence: Exampel input"}<br>)<br><br>print(response.json()["output"])<br>```<br><br>Executing the code above prints the following output:<br><br>```<br>Instruct: Fix typos in the following sentence: Exampel input<br>Output: Example input.<br>```</code> | <code>How can I start an inference server for a phi-2 LLM using litgpt serve?<br></code> |
+ | <code># TPU support<br><br>This project utilizes [`Fabric`](https://lightning.ai/docs/fabric/stable), which supports TPUs via [PyTorch XLA](https://github.com/pytorch/xla).<br><br>> [!NOTE]<br>> This guide assumes that you have already set-up your [Google Cloud environment](https://cloud.google.com/run/docs/setup).<br><br>To set up a Google Cloud instance with a TPU v4 VM, run the following commands:<br><br>```shell<br>gcloud compute tpus tpu-vm create litgpt --version=tpu-vm-v4-base --accelerator-type=v4-8 --zone=us-central2-b<br>gcloud compute tpus tpu-vm ssh litgpt --zone=us-central2-b<br>```<br><br>You can also choose a different TPU type. To do so, change the `version`, `accelerator-type`, and `zone` arguments. Find all regions and zones [here](https://cloud.google.com/tpu/docs/regions-zones).<br><br><details><br><summary>Multihost caveats</summary><br><br>TPU v4-8 uses a single host. SSH'ing into the machine and running commands manually will only work when using a single host (1 slice in the TPU pod).<br>In multi-host environments, such as larger TPU pod slices, it's necessary to launch all commands on all hosts simultaneously to avoid hangs.<br>For local development, it is advisable to upload a zip file containing all your current changes and execute it inside the VM from your personal computer:<br><br>```shell<br># Zip the local directory, excluding large directories from the zip. You may want to keep them.<br>zip -r local_changes.zip . -x ".git/*" "checkpoints/*" "data/*" "out/*"<br># Copy the .zip file to the TPU VM<br>gcloud compute tpus tpu-vm scp --worker=all local_changes.zip "litgpt:~"<br># Unzip on each host<br>gcloud compute tpus tpu-vm ssh litgpt --worker=all --command="cd ~; unzip -q -o local_changes.zip"<br><br># Example of a typical workflow<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash install_dependencies.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash prepare_checkpoints.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash run_desired_script.sh"</code> | <code>How does this project support TPUs?<br></code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 5
+ - `per_device_eval_batch_size`: 5
+ - `num_train_epochs`: 5
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `batch_sampler`: no_duplicates
+
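A hedged sketch of how a run with the non-default values above could be reproduced with the Sentence Transformers v3 trainer (the dataset rows and output directory are placeholders; the actual 21 training and 10 evaluation pairs are not published with this card):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholder (anchor, positive) pairs standing in for the real training/eval samples.
train_dataset = Dataset.from_dict({
    "anchor": [
        "How can I start an inference server for a phi-2 LLM using litgpt serve?",
        "What is the command to run the evaluation harness?",
    ],
    "positive": [
        "Run `litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2`.",
        "Use `lm_eval --model hf --model_args pretrained=out/converted_model ...`.",
    ],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["How much disk space is required to store the SlimPajama dataset?"],
    "positive": ["You will require 2.5 TB of space for the SlimPajama dataset."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="out/bge-base-en-v1.5-ft",   # placeholder path
    num_train_epochs=5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    eval_steps=2,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```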
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 5
+ - `per_device_eval_batch_size`: 5
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 5
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:-----:|:----:|:-------------:|:---------------:|
+ | 0.4 | 2 | 0.6407 | 0.4190 |
+ | 0.8 | 4 | 0.7873 | 0.2789 |
+ | 1.2 | 6 | 0.1871 | 0.2089 |
+ | 1.6 | 8 | 0.2125 | 0.1718 |
+ | 2.0 | 10 | 0.0374 | 0.1648 |
+ | 2.4 | 12 | 0.1923 | 0.1695 |
+ | 2.8 | 14 | 0.0183 | 0.1723 |
+ | 3.2 | 16 | 0.1582 | 0.1770 |
+ | 3.6 | 18 | 0.0032 | 0.1824 |
+ | 4.0 | 20 | 0.0015 | 0.1870 |
+ | 4.4 | 22 | 0.1399 | 0.1901 |
+ | 4.8 | 24 | 0.002 | 0.1914 |
+
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.3.0+cu121
+ - Accelerate: 0.27.0
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+ "_name_or_path": "BAAI/bge-base-en-v1.5",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.41.2",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.0.1",
+ "transformers": "4.41.2",
+ "pytorch": "2.3.0+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9641dc993bedcc45e9d4c766e4aa7563062190e97e7e0770cbcab9c5b1314394
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
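The three entries above are the modules that `SentenceTransformer` chains together at load time. A minimal sketch of building the equivalent pipeline by hand (the base-model id is illustrative; normally you would simply load the published checkpoint directly):

```python
from sentence_transformers import SentenceTransformer, models

# 0: BERT encoder, 1: CLS pooling, 2: L2 normalization — mirroring modules.json above.
word_embedding = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])
print(model.encode(["hello world"]).shape)  # (1, 768)
```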
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
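The special-token ids registered above ([PAD]=0, [UNK]=100, [CLS]=101, [SEP]=102, [MASK]=103) are the standard BERT WordPiece ids. A quick sketch of how they appear during encoding (repo id as in the card's usage example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aritrasen/bge-base-en-v1.5-ft")

ids = tokenizer("An example sentence.")["input_ids"]
print(ids)                                   # starts with 101 ([CLS]) and ends with 102 ([SEP])
print(tokenizer.convert_ids_to_tokens(ids))  # lowercased WordPiece tokens, per do_lower_case
```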
vocab.txt ADDED
The diff for this file is too large to render. See raw diff