trapoom555 committed
Commit 91d8b3e · 1 Parent(s): 5ab02d7

modify readme

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -14,7 +14,9 @@ pipeline_tag: sentence-similarity
 
 ## Description
 
-This is a fine-tuned version of [MiniCPM-2B-dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16) to perform Text Embedding tasks. The model is fine-tuned using the Contrastive Fine-tuning and LoRA technique on NLI datasets. ⚠️ **The training process ignores hard-negative samples and treat other in-bash samples + their entailments as in-batch negatives**. If you want to see the version utilizing hard-negative examples in training, please refer [here](https://huggingface.co/trapoom555/MiniCPM-2B-Text-Embedding-cft)
+This is a fine-tuned version of [MiniCPM-2B-dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16) to perform Text Embedding tasks. The model is fine-tuned using the Contrastive Fine-tuning and LoRA technique on NLI datasets.
+
+⚠️ The training process ignores hard-negative samples and treat other in-batch samples + their entailments as in-batch negatives. ⚠️ If you want to see the version utilizing hard-negative examples in the training process, please refer [here](https://huggingface.co/trapoom555/MiniCPM-2B-Text-Embedding-cft)
 
 ## Base Model
 
@@ -88,7 +90,7 @@ print(encoded_sentences)
 
 ## Training Details
 
-⚠️ **The training process ignores hard-negative samples and treat other in-bash samples + their entailments as in-batch negatives**.
+⚠️ The training process ignores hard-negative samples and treat other in-batch samples + their entailments as in-batch negatives. ⚠️
 
 | **Training Details** | **Value** |
 |-------------------------|-------------------|
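
For readers unfamiliar with the in-batch-negative setup referenced in the warning above, the following is a minimal sketch (not taken from this repository) of an InfoNCE-style contrastive loss in PyTorch where, for each premise, its entailment is the positive and every other premise and entailment in the batch serves as a negative; no hard negatives are used. The function name, temperature value, and embedding dimension are illustrative assumptions, not the repository's actual training code.

```python
import torch
import torch.nn.functional as F


def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              positive_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss using only in-batch negatives (no hard negatives).

    anchor_emb:   (B, D) embeddings of the premises in the batch.
    positive_emb: (B, D) embeddings of their entailments; positive_emb[i]
                  is the positive for anchor_emb[i], and every other row
                  (other premises and their entailments) acts as a negative.
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)

    # Candidate pool = all premises + all entailments in the batch.
    candidates = torch.cat([anchor, positive], dim=0)   # (2B, D)
    logits = anchor @ candidates.T / temperature         # (B, 2B)

    batch_size = anchor.size(0)
    # Mask each anchor's similarity to itself so it is not scored as a negative.
    self_mask = torch.eye(batch_size, dtype=torch.bool, device=logits.device)
    logits[:, :batch_size] = logits[:, :batch_size].masked_fill(self_mask, float("-inf"))

    # The positive for anchor i sits at column B + i of the candidate pool.
    targets = torch.arange(batch_size, device=logits.device) + batch_size
    return F.cross_entropy(logits, targets)


# Illustrative usage with random tensors standing in for model outputs.
if __name__ == "__main__":
    premises = torch.randn(8, 2304)      # hypothetical batch of premise embeddings
    entailments = torch.randn(8, 2304)   # matching entailment embeddings
    print(in_batch_contrastive_loss(premises, entailments))
```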