---
license: apache-2.0
---

<br><br>

<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>


<p align="center">
<b>Task-oriented finetuning for better embeddings on neural search</b>
</p>

The text embedding suite trained by [Jina AI](https://github.com/jina-ai)'s [Finetuner team](https://github.com/jina-ai/finetuner).

## Intended Usage & Model Info

`jina-embedding-s-en-v1` is a language model trained on Jina AI's Linnaeus-Clean dataset.
This dataset consists of 380 million sentence pairs, including query-document pairs,
drawn from a variety of domains and selected through a thorough cleaning process.
The Linnaeus-Full dataset, from which Linnaeus-Clean is derived, originally contained 1.6 billion sentence pairs.

The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.
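For example, an information-retrieval use case boils down to embedding a query and a set of candidate documents, then ranking the documents by cosine similarity. The sketch below illustrates this, assuming the checkpoint is published on the Hugging Face Hub as `jinaai/jina-embedding-s-en-v1` and loads with the `sentence-transformers` library:

```python
# Illustrative retrieval sketch; the Hub ID below is an assumption, not confirmed here.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("jinaai/jina-embedding-s-en-v1")

query = "how to install python packages"
docs = [
    "Use pip to install packages from the Python Package Index.",
    "The weather in Berlin is mild in spring.",
    "Conda is an alternative package and environment manager.",
]

# Embed the query and the candidate documents into the same vector space.
query_emb = model.encode(query)
doc_embs = model.encode(docs)

# Rank documents by cosine similarity to the query (higher = more relevant).
scores = cos_sim(query_emb, doc_embs)[0]
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```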

With a compact size of just 35 million parameters,
the model enables lightning-fast inference while still delivering impressive performance.
Additionally, we provide the following larger models:

- jina-embedding-b-en-v1: 110 million parameters.
- jina-embedding-l-en-v1: 800 million parameters.
- jina-embedding-xl-en-v1: 3 billion parameters.
- jina-embedding-xxl-en-v1: 11 billion parameters.

## Data & Parameters

More details will be released together with the technical report.

## Metrics

## Usage
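
A minimal usage sketch, assuming the checkpoint is available on the Hugging Face Hub under `jinaai/jina-embedding-s-en-v1` and is compatible with the `sentence-transformers` library:

```python
# Minimal usage sketch; the model ID and sentence-transformers compatibility are assumptions.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("jinaai/jina-embedding-s-en-v1")

sentences = [
    "How is the weather today?",
    "What is the current weather like today?",
]
embeddings = model.encode(sentences)

# Cosine similarity between the two sentence embeddings (closer to 1.0 = more similar).
print(cos_sim(embeddings[0], embeddings[1]))
```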