Upload folder using huggingface_hub

Files changed:

- LICENSE +21 -0
- README.md +98 -0
- config.json +22 -0
- manifest.json +17 -0
- model_quantized.onnx +3 -0
- special_tokens_map.json +51 -0
- tokenizer.json +0 -0
- tokenizer_config.json +58 -0
LICENSE
ADDED
@@ -0,0 +1,21 @@

MIT License

Copyright (c) Microsoft Corporation.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md
ADDED
@@ -0,0 +1,98 @@

---
license: mit
tags:
- onnx
- int8
- quantized
- code
- embeddings
- justembed
base_model: microsoft/codebert-base
library_name: onnxruntime
pipeline_tag: feature-extraction
---

# CodeBERT INT8 — ONNX Quantized

ONNX INT8 quantized version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) for efficient code and natural language embeddings.

## Model Details

| Property | Value |
|----------|-------|
| Base Model | [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) |
| Format | ONNX |
| Quantization | INT8 (dynamic quantization) |
| Embedding Dimension | 768 |
| Quantized by | [JustEmbed](https://pypi.org/project/justembed/) |

## What is this?

This is a quantized ONNX export of CodeBERT, a bimodal pre-trained model for programming and natural language by Microsoft Research. The INT8 quantization reduces model size and improves inference speed while maintaining high accuracy for code-related embeddings.

CodeBERT is trained on both natural language and programming language data (Python, Java, JavaScript, PHP, Ruby, Go).

## Use Cases

- Code search and retrieval
- Code documentation matching
- Programming language embeddings
- Code similarity detection
- Natural language to code matching
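Use cases like code search and similarity detection typically compare embedding vectors by cosine similarity. A minimal sketch with NumPy, using random 768-dim vectors as stand-ins for real CodeBERT embeddings (the vector values here are illustrative, not model output):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 768-dim vectors standing in for real embeddings.
rng = np.random.default_rng(0)
query = rng.normal(size=768)
near = query + 0.1 * rng.normal(size=768)  # small perturbation -> high similarity
far = rng.normal(size=768)                 # unrelated vector -> low similarity

assert cosine_similarity(query, near) > cosine_similarity(query, far)
```

For retrieval, you would rank candidate snippets by their cosine similarity to the query embedding.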
## Files

- `model_quantized.onnx` — INT8 quantized ONNX model
- `tokenizer.json` — Fast tokenizer
- `config.json` — Model configuration

## Usage with JustEmbed

```python
from justembed import Embedder

embedder = Embedder("codebert-int8")
vectors = embedder.embed(["def sort_list(arr): return sorted(arr)"])
```

## Usage with ONNX Runtime

```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")
session = ort.InferenceSession("model_quantized.onnx")

inputs = tokenizer("def sort_list(arr): return sorted(arr)", return_tensors="np")
outputs = session.run(None, dict(inputs))
```
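The first output tensor from the session holds per-token hidden states of shape `(batch, seq, 768)`; a common way to get one fixed-size embedding per input is mean pooling over the attention mask. A minimal sketch with dummy arrays in place of the real model output (the shapes and the masked-averaging logic are the point, not the values):

```python
import numpy as np

def mean_pool(last_hidden_state: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token vectors, excluding padding positions."""
    mask = attention_mask[..., None].astype(last_hidden_state.dtype)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(axis=1)
    counts = mask.sum(axis=1)
    return summed / np.clip(counts, 1e-9, None)

# Stand-in for outputs[0]: batch=1, seq=4, hidden=768.
hidden = np.ones((1, 4, 768), dtype=np.float32)
hidden[:, 2:, :] = 5.0               # padding positions carry junk values
mask = np.array([[1, 1, 0, 0]])      # only the first two tokens are real

embedding = mean_pool(hidden, mask)
assert embedding.shape == (1, 768)
assert np.allclose(embedding, 1.0)   # junk at padded positions was excluded
```

With the real session above, the same function would be called as `mean_pool(outputs[0], inputs["attention_mask"])`.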
## Quantization Details

- Method: Dynamic INT8 quantization via ONNX Runtime
- Source: Original PyTorch weights converted to ONNX, then quantized
- Speed: ~2-3x faster inference than FP32
- Size: ~4x smaller than FP32
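A quantization like this can be reproduced with ONNX Runtime's `quantize_dynamic` helper. A sketch, assuming an FP32 ONNX export already exists at `model.onnx` (the path names are illustrative; this is not the exact script used by JustEmbed):

```python
from pathlib import Path

from onnxruntime.quantization import QuantType, quantize_dynamic

src = Path("model.onnx")            # assumed FP32 ONNX export of codebert-base
dst = Path("model_quantized.onnx")

if src.exists():
    # Dynamic quantization: weights stored as INT8, activations
    # quantized on the fly at inference time.
    quantize_dynamic(str(src), str(dst), weight_type=QuantType.QInt8)
```

Dynamic quantization needs no calibration data, which is why it is a common default for transformer encoders.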
## License

This model is a derivative work of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base).

The original model is licensed under **MIT License**. This quantized version is distributed under the same license. See the [LICENSE](LICENSE) file for the full text.

## Citation

```bibtex
@inproceedings{feng2020codebert,
  title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
  author={Feng, Zhangyin and Guo, Daya and Tang, Duyu and Duan, Nan and Feng, Xiaocheng and Gong, Ming and Shou, Linjun and Qin, Bing and Liu, Ting and Jiang, Daxin and Zhou, Ming},
  booktitle={Findings of EMNLP},
  year={2020}
}
```

## Acknowledgments

- Original model by [Microsoft Research](https://github.com/microsoft/CodeBERT)
- Quantization and packaging by [JustEmbed](https://pypi.org/project/justembed/)
config.json
ADDED
@@ -0,0 +1,22 @@

{
  "architectures": [
    "RobertaModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 1,
  "type_vocab_size": 1,
  "vocab_size": 50265
}
manifest.json
ADDED
@@ -0,0 +1,17 @@

{
  "schema_version": "1.0",
  "name": "codebert-int8",
  "version": "1.0.1",
  "type": "single",
  "model_class": "BERTEmbedder",
  "embedding_dim": 768,
  "min_justembed_version": "0.1.1a9",
  "description": "Code embeddings for programming and software documentation (INT8 quantized)",
  "files": null,
  "metadata": null,
  "stages": null,
  "max_sequence_length": 512,
  "license": "MIT",
  "source": "microsoft/codebert-base",
  "quantization": "int8"
}
model_quantized.onnx
ADDED
@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:149eb7ed0e3b5b1cf6a7e0194c56a1b83ba104baecb746b40a6b6c0c4ba1b061
size 126315271
special_tokens_map.json
ADDED
@@ -0,0 +1,51 @@

{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render; see the raw file.
tokenizer_config.json
ADDED
@@ -0,0 +1,58 @@

{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50264": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}