---
license: apache-2.0
language:
  - en
pipeline_tag: feature-extraction
tags:
  - code
---



[Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications.]

The text embedding set trained by Jina AI, Finetuner team.

Intended Usage & Model Info

jina-embeddings-v2-base-code is a multilingual embedding model that supports English and 30 widely used programming languages. Like the other models in the jina-embeddings-v2 series, it supports a sequence length of 8192 tokens.

jina-embeddings-v2-base-code is based on a BERT architecture (JinaBert) that supports the symmetric bidirectional variant of ALiBi to allow longer sequence lengths. The backbone jina-bert-v2-base-code is pretrained on the github-code dataset. The model is further trained on Jina AI's collection of more than 150 million coding question-answer and docstring/source-code pairs. These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.

The embedding model was trained with a sequence length of 512, but extrapolates to a sequence length of 8k (or even longer) thanks to ALiBi. This makes our model useful for a range of use cases, especially when processing long documents is needed, including technical question answering and code search.
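For intuition, the sketch below shows how a symmetric (bidirectional) ALiBi bias matrix can be built. It is illustrative only, based on the slope schedule from the ALiBi paper, and is not the exact implementation used inside JinaBert:

import torch

def alibi_bias(seq_len, num_heads):
    # Head-specific slopes: a geometric sequence starting at 2^(-8/num_heads),
    # following the ALiBi paper (assumes num_heads is a power of two).
    start = 2 ** (-8.0 / num_heads)
    slopes = torch.tensor([start ** (i + 1) for i in range(num_heads)])
    # Symmetric (bidirectional) distance |i - j| between all position pairs.
    positions = torch.arange(seq_len)
    distances = (positions[None, :] - positions[:, None]).abs()
    # One (seq_len, seq_len) penalty matrix per head, added to the attention logits.
    return -slopes[:, None, None] * distances[None, :, :]

# The penalty depends only on relative distance, so the same formula applies at any
# sequence length, which is why a model trained at 512 tokens can run at 8k tokens.
print(alibi_bias(seq_len=4, num_heads=8).shape)  # torch.Size([8, 4, 4])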

This model has 137 million parameters, which enables fast and memory-efficient inference while delivering impressive performance. Additionally, we provide the following embedding models:

  • V1 (Based on T5, 512 Seq)
  • V2 (Based on JinaBert, 8k Seq)

Supported (Programming) Languages

  • English
  • Assembly
  • Batchfile
  • C
  • C#
  • C++
  • CMake
  • CSS
  • Dockerfile
  • FORTRAN
  • GO
  • Haskell
  • HTML
  • Java
  • JavaScript
  • Julia
  • Lua
  • Makefile
  • Markdown
  • PHP
  • Perl
  • PowerShell
  • Python
  • Ruby
  • Rust
  • SQL
  • Scala
  • Shell
  • TypeScript
  • TeX
  • Visual Basic

Data & Parameters

For details on the training data and parameters, please refer to the Jina Embeddings V2 technical report.

Usage

Please apply mean pooling when integrating the model.

Why mean pooling?

Mean pooling takes all token embeddings from the model output and averages them at the sentence/paragraph level. It has proven to be the most effective way to produce high-quality sentence embeddings. We offer an encode function to handle this.

However, if you would like to do it without using the default encode function:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding tokens via the attention mask.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = [
    'Save model to a pickle located at `path`',
    'def save_act(self, path=None): if path is None: path = os.path.join(logger.get_dir(), "model.pkl") with tempfile.TemporaryDirectory() as td: save_variables(os.path.join(td, "model")) arc_name = os.path.join(td, "packed.zip") with zipfile.ZipFile(arc_name, "w") as zipf: for root, dirs, files in os.walk(td): for fname in files: file_path = os.path.join(root, fname) if file_path != arc_name: zipf.write(file_path, os.path.relpath(file_path, td)) with open(arc_name, "rb") as f: model_data = f.read() with open(path, "wb") as f: cloudpickle.dump((model_data, self._act_params), f)',
]

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-code')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-code', trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
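Since F.normalize makes each embedding unit length, the cosine similarity between the docstring and the code snippet is simply their dot product:

# Cosine similarity between the two normalized embeddings (docstring vs. code snippet).
print(embeddings[0] @ embeddings[1])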

You can also use Jina embedding models directly from the transformers package:

!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-code', trust_remote_code=True)
embeddings = model.encode(
    [
        'Save model to a pickle located at `path`',
        'def save_act(self, path=None): if path is None: path = os.path.join(logger.get_dir(), "model.pkl") with tempfile.TemporaryDirectory() as td: save_variables(os.path.join(td, "model")) arc_name = os.path.join(td, "packed.zip") with zipfile.ZipFile(arc_name, "w") as zipf: for root, dirs, files in os.walk(td): for fname in files: file_path = os.path.join(root, fname) if file_path != arc_name: zipf.write(file_path, os.path.relpath(file_path, td)) with open(arc_name, "rb") as f: model_data = f.read() with open(path, "wb") as f: cloudpickle.dump((model_data, self._act_params), f)',
    ]
)
print(cos_sim(embeddings[0], embeddings[1]))

If you only need to handle shorter sequences, such as 2k tokens, pass the max_length parameter to the encode function:

embeddings = model.encode(
    ['Very long ... code'],
    max_length=2048
)

Fully-managed Embeddings Service

Alternatively, you can use Jina AI's Embeddings platform for fully-managed access to Jina Embeddings models.
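A minimal sketch of calling the hosted service is shown below; the endpoint URL, request fields, and model identifier are assumptions and should be checked against the Jina AI Embeddings documentation:

import requests

# Hypothetical request: endpoint, payload schema, and model name are assumptions;
# consult the Jina AI Embeddings documentation for the authoritative reference.
response = requests.post(
    "https://api.jina.ai/v1/embeddings",              # assumed endpoint
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    json={
        "model": "jina-embeddings-v2-base-code",      # assumed model identifier
        "input": ["Save model to a pickle located at `path`"],
    },
)
print(response.json())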

Plans

The development of new bilingual models is currently underway. We will be targeting mainly the German and Spanish languages. The upcoming models will be called jina-embeddings-v2-small-de/es.

Contact

Join our Discord community and chat with other community members about ideas.