Commit 00224fc by multi-train: Update README.md
Parent: aba2bca
Files changed (1): README.md (+12 −2)
README.md CHANGED
@@ -10,10 +10,10 @@ tags:
 ---
 
 # hkunlp/instructor-large
-This is a general embedding model: It maps **any** piece of text (e.g., a title, a sentence, a document, etc.) to a fixed-length vector at test time **without further training**. With instructions, the embeddings are **domain-specific** (e.g., specialized for science, finance, etc.) and **task-aware** (e.g., customized for classification, information retrieval, etc.)
-
+We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***simply by providing the task instruction, without any finetuning***. **Instructor** achieves state-of-the-art performance on 70 diverse embedding tasks!
 The model is easy to use with the `sentence-transformers` library.
 
+# Quick start
 ## Installation
 ```bash
 git clone https://github.com/HKUNLP/instructor-embedding
@@ -32,6 +32,16 @@ embeddings = model.encode([[instruction,sentence,0]])
 print(embeddings)
 ```
 
+# Use cases
+We provide a few specific use cases below. For more examples and applications, refer to [our paper](https://arxiv.org/abs/2212.09741).
+## Calculate embeddings for your customized texts
+If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:
+
+    Represent the `domain` `text_type` for `task_objective`; Input:
+* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
+* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
+* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
+
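As a sketch of how the template in the added section composes into a concrete instruction string, the helper below fills in the required `text_type` slot and the optional `domain` and `task_objective` slots. Note that `build_instruction` is a hypothetical illustration, not a function from the instructor-embedding repository:

```python
def build_instruction(text_type, domain=None, task_objective=None):
    """Compose an instruction from the unified template:
    'Represent the `domain` `text_type` for `task_objective`; Input:'

    `text_type` (the encoding unit) is required;
    `domain` and `task_objective` are optional.
    This helper is an illustration, not part of the official package.
    """
    parts = ["Represent the"]
    if domain:
        parts.append(domain)
    parts.append(text_type)
    if task_objective:
        parts.append("for " + task_objective)
    return " ".join(parts) + "; Input:"

# All slots filled: domain, text type, and task objective.
print(build_instruction("document", domain="science", task_objective="retrieval"))
# -> Represent the science document for retrieval; Input:

# Minimal form: only the required encoding unit.
print(build_instruction("sentence"))
# -> Represent the sentence; Input:
```

The resulting string would then be paired with the text to encode, e.g. `model.encode([[instruction, sentence, 0]])` as in the quick-start snippet above.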
 ## Calculate Sentence similarities
 You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
 ```python