LoneStriker committed
Commit: edac347
1 Parent(s): 0db4352

Upload folder using huggingface_hub

Files changed:
- .gitattributes +8 -35
- CodeLlama-70b-Python-hf-Q2_K.gguf +3 -0
- CodeLlama-70b-Python-hf-Q3_K_L.gguf +3 -0
- CodeLlama-70b-Python-hf-Q3_K_M.gguf +3 -0
- CodeLlama-70b-Python-hf-Q3_K_S.gguf +3 -0
- CodeLlama-70b-Python-hf-Q4_K_M.gguf +3 -0
- CodeLlama-70b-Python-hf-Q4_K_S.gguf +3 -0
- CodeLlama-70b-Python-hf-Q5_K_M.gguf +3 -0
- CodeLlama-70b-Python-hf-Q5_K_S.gguf +3 -0
- README.md +84 -0
- huggingface-metadata.txt +63 -0
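
The commit message says the folder was uploaded with `huggingface_hub`; a plausible sketch of that step (the local folder path and target repo id are assumptions, not recorded in the commit):

```python
# Hypothetical reconstruction of the upload step named in the commit message.
from huggingface_hub import HfApi

api = HfApi()  # assumes an access token from `huggingface-cli login` or HF_TOKEN
api.upload_folder(
    folder_path="CodeLlama-70b-Python-hf-GGUF",          # assumed local folder
    repo_id="LoneStriker/CodeLlama-70b-Python-hf-GGUF",  # assumed target repo
    repo_type="model",
)
```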
.gitattributes CHANGED
@@ -1,35 +1,8 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+CodeLlama-70b-Python-hf-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
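
Each `.gguf` entry below is stored in Git as an LFS pointer rather than as the weights themselves: a three-line stub giving the pointer spec version, the SHA-256 object id, and the size in bytes. A minimal sketch of reading such a stub (the local filename is an assumption):

```python
# Parse a git-lfs pointer stub: each line is "<key> <value>".
def parse_lfs_pointer(path: str) -> dict[str, str]:
    fields: dict[str, str] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = parse_lfs_pointer("CodeLlama-70b-Python-hf-Q2_K.gguf")
# -> {'version': 'https://git-lfs.github.com/spec/v1',
#     'oid': 'sha256:d683913e...', 'size': '25462587840'}
print(int(ptr["size"]) / 1e9, "GB")  # ~25.5 GB for the Q2_K build
```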
CodeLlama-70b-Python-hf-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d683913eb8999f18feedb95b74ba660475cf0bec508056b1f402da1772e504d1
size 25462587840

CodeLlama-70b-Python-hf-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b3a2a00d40ab99493957007a72aea7a8dc3bc40f29b5c1ea10fe27129744895
size 36148000192

CodeLlama-70b-Python-hf-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:411eb201d003f7d3e1d194ee6056972e79948eb973ff748d263818dce1971d6b
size 33274901952

CodeLlama-70b-Python-hf-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9dad34d14fe34c90fe701464b3c4b1cf1654f01c323bb94c93d020c0a65de3da
size 29919458752

CodeLlama-70b-Python-hf-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9baea162791432e7d7f6ff8cec9ba2be92239ec706536286530bd4b740d446ba
size 41423092160

CodeLlama-70b-Python-hf-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:783a53a1c86f4bd440714449378463a0502f9a2ec01819d0eb7458e12ce68c2c
size 39249918400

CodeLlama-70b-Python-hf-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf7eb333636d19ec46968d8060d18c7dad872cfe63bdedd5b5af54e56bb3494b
size 48753965504

CodeLlama-70b-Python-hf-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20057e0b2bb40ee616f93c1e39f51ece7ae8f6b33b4608d3976d42dcbc820d71
size 47461595584
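
The eight pointers above resolve to k-quant GGUF builds for llama.cpp, ranging from roughly 25 GB (Q2_K) to 49 GB (Q5_K_M). A minimal loading sketch using the `llama-cpp-python` bindings (the file choice, context size, and GPU offload settings are assumptions about your hardware):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="CodeLlama-70b-Python-hf-Q4_K_M.gguf",  # ~41 GB per the pointer above
    n_ctx=4096,        # the model card notes fine-tuning with up to 16k tokens
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

# Python specialist: plain completion, no chat template.
out = llm("def quicksort(arr):", max_tokens=128, temperature=0.1)
print(out["choices"][0]["text"])
```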
README.md ADDED
@@ -0,0 +1,84 @@
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

|     | Base Model | Python | Instruct |
| --- | --- | --- | --- |
| 7B  | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
| 70B | [codellama/CodeLlama-70b-hf](https://huggingface.co/codellama/CodeLlama-70b-hf) | [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf) | [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) |

## Model Use

To use this model, please make sure to install `transformers`.

```bash
pip install transformers accelerate
```

Model capabilities:

- [x] Code completion.
- [ ] Infilling.
- [ ] Instructions / chat.
- [x] Python specialist.
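
The card stops at the install step; a minimal completion sketch for the Transformers-format weights it describes (generation parameters are illustrative assumptions, and the 70B model needs several high-memory GPUs or quantization):

```python
from transformers import pipeline
import torch

# device_map="auto" shards the 70B weights across available devices.
generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-70b-Python-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Python specialist: plain code completion, no instruction template.
result = generator(
    "def fibonacci(n: int) -> int:\n    ",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.1,
    top_p=0.95,
)
print(result[0]["generated_text"])
```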
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).

**Model Developers** Meta

**Variations** Code Llama comes in four model sizes and three variants:

* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B, 34B, and 70B parameters.

**This repository contains the Python version of the 70B-parameter model.**

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. It was fine-tuned with up to 16k tokens. This variant **does not** support long context of up to 100k tokens.

**Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).

## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program.
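
As a back-of-the-envelope check on those figures, assuming the upper 400 W TDP bound as sustained draw:

```python
# Sanity-check the stated carbon figures (assumes 400 W sustained per GPU).
gpu_hours = 1_400_000                # "1400K GPU hours"
energy_mwh = gpu_hours * 400 / 1e6   # watts x hours -> megawatt-hours
intensity = 228.55 / energy_mwh      # tCO2eq per MWh
print(f"{energy_mwh:.0f} MWh, {intensity:.2f} tCO2eq/MWh")  # 560 MWh, 0.41
```

The implied grid intensity (~0.41 kgCO2eq/kWh) is roughly in line with typical regional grids, so the stated numbers are mutually consistent.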
## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
huggingface-metadata.txt ADDED
@@ -0,0 +1,63 @@
url: https://huggingface.co/codellama/CodeLlama-70b-Python-hf
branch: main
download date: 2024-01-29 12:59:13
sha256sum:
0dc39667e9e9bb99b37e164514e2517c859a21b69083d1e391c95454538b8fc0 model-00001-of-00029.safetensors
93cb9a3be35a63540022de4c9e0d19f48310ea491ebd11f077575c0907b523d7 model-00002-of-00029.safetensors
804d76bf0780390092e0ad15338b40b82221f5a89df019e58c7ef71ddf953433 model-00003-of-00029.safetensors
15e3751d89cdda1ed46ca2b27e66ef527fa2937b69ffe8f52cf0a30a0ad13038 model-00004-of-00029.safetensors
d857c396d675c945cfc4c6149269f475dc60f1ff38810f446be373e5ad0ca402 model-00005-of-00029.safetensors
5f325631196d211f66f4b614d7ef7733869da094044732b48399da5f14dc6992 model-00006-of-00029.safetensors
49e617ff57141a9cd574fb68738d8cc7700d8732f3f793b6e6b5ffd99967f856 model-00007-of-00029.safetensors
61089d0168ff2c89407d8d8aec42572ed542ad1c0d8c16a1c0199f56bcb8f886 model-00008-of-00029.safetensors
10339c063a9708cadde61d3a690d1a2b65cbdb66a3af02f762fa6220379e9494 model-00009-of-00029.safetensors
44cf1bc764a3183a1a30478399dd14810da918dc861f1f3f8ca5722e3051aa5c model-00010-of-00029.safetensors
850147b678c3950665c18042ed185022d8c18b9e7706a08f5b56a3e165ea1d08 model-00011-of-00029.safetensors
054899a12a20c0014e304b1f69260676d4ee7b6437789d3a06eabd48588ffe0b model-00012-of-00029.safetensors
ebf60aa2ee9a142b54d164c207340aa6fc2633e9986d54fe820b3fff7abfc75a model-00013-of-00029.safetensors
99d9ec085297c0e9771261376e128487b7e7d8cb8ff48ca68cd1004d2f99ceba model-00014-of-00029.safetensors
665b12bb855f6a1afce80e1255fbceebf683923498d048ebaaced0dade3882e6 model-00015-of-00029.safetensors
07ec206dd2ad1cc5c5c939eff06ea6858c30a51b6efda3390da5ef49ddf981e4 model-00016-of-00029.safetensors
bbdbc85b01918662674b27345b64439a494781f1b892f245db57f9162cb1f887 model-00017-of-00029.safetensors
a5309faee1ab036dfbe8103f4aa8c2dba77988e9a2c5c92fd5f00580f650f486 model-00018-of-00029.safetensors
140288ba636559cef56179e649253997f8fa72437717b21bc060262bc3416468 model-00019-of-00029.safetensors
a7466b2e9e116445857d71019cef746dbc63176b5568ad600da48a08294cb63e model-00020-of-00029.safetensors
d28bcf68247f93c5b70626b917aa0349103feaedd267714b6e4e9bc5ac65b689 model-00021-of-00029.safetensors
3d9419b0dbd0fd6a8784180d8aca472fc922cecefb03083b9f8371a86c708097 model-00022-of-00029.safetensors
4b0b88e28c69c3f2245d5b68f11832e2161f578881ed7f6a8dfe5f01d53051d3 model-00023-of-00029.safetensors
ed64d23c1c38899c2ff00e1d3bf1f73239ca06b7d691878668dbe31cbee115a8 model-00024-of-00029.safetensors
b93038be5716e03c6304a7574dd0cb90f65ebb54fb519669c5046ae1249060b5 model-00025-of-00029.safetensors
cf37ac601e61c9d476cd2edda9b2f95d7a3c89e9d90a6226f455758b891c8dd1 model-00026-of-00029.safetensors
f561fea5218a2cb49f210748bf766dfe63603912e8950f0a7f5d4ddaa5d0512a model-00027-of-00029.safetensors
1001859efc68cdefe27f45f9910af9ff8ea7c864124270e2fcfc6038c22b531b model-00028-of-00029.safetensors
3129bb872e06ec49f86d54fafeabb402787835540aee51300bb9586c79df2c07 model-00029-of-00029.safetensors
e4b2b2319420e9b426c45d40c188d438758c0dbe29e878107966a81209667df5 pytorch_model-00001-of-00029.bin
dee546284bc9c43eae1a45a83f08f3c832ae2ee1ade02e2ad0833cced9c76bc7 pytorch_model-00002-of-00029.bin
01285a3719d7bacd443e6ea35158fc4a877a73e65005624f6b513b3fd162139c pytorch_model-00003-of-00029.bin
584d439f395de303d39931b1e88ff5f4871d06f8a80c8fe29866727a1f0a4390 pytorch_model-00004-of-00029.bin
169fe45301b4e0e33d272c99faa9d666f7defc7c09a00d904932f2021874cce0 pytorch_model-00005-of-00029.bin
fe97b617d794ed1300ebdbe0bc9a9b2b0d4f7056ed130b757b394b8dd3002e0d pytorch_model-00006-of-00029.bin
018be562bf5e81e6b6d7894a12c1d0319a24599cfe860c78127730740425cc46 pytorch_model-00007-of-00029.bin
1d2820934d03c59678c2bc5d084f7ac278d49ba9803276d021d221d809c1caef pytorch_model-00008-of-00029.bin
4cf080d1d082f8a5c5ba9ecc6b132e9ebac2d6b666f611a7b501375cf5b852b9 pytorch_model-00009-of-00029.bin
b932c647486d35f9504e536cc4adddf4a0024349a2761bc5513e7ecfa7b75108 pytorch_model-00010-of-00029.bin
7be53afe83fac9fdb36d7374192d65eaadcea237a829a3a53a1e74e7a07025c9 pytorch_model-00011-of-00029.bin
fe76a0b9432fe65344dae2b81b7817a6b57b10162f6506b2e5182c4cc706f442 pytorch_model-00012-of-00029.bin
cc1b1db8614091dc9810138422f7c188707e4f0b50f0d790a6073c4a7b80b4a3 pytorch_model-00013-of-00029.bin
c032cd8c883adb0226a8d4248090f90ea9b4ed56f02047636c52608048c87086 pytorch_model-00014-of-00029.bin
e92e860739c83d248e238f8d9638734cd7f80e36191c3fac38e32eb4492241ed pytorch_model-00015-of-00029.bin
4c2936268379bae42fd78909c4890d659cea83ac3e29c33eb8373c62af645437 pytorch_model-00016-of-00029.bin
b4169c126e2eb74c0ac9bb936a9cd405823421b1142cda905d3ba6abdf8b1340 pytorch_model-00017-of-00029.bin
8ae9442dd4860ffe158f456980899ea9179b27702a8f5617a6e710553f411f56 pytorch_model-00018-of-00029.bin
e2c2aa4d2e647cb2f3b27892f77e719eb6baea5ee066c74cc845628c44fb2ae2 pytorch_model-00019-of-00029.bin
ac999fe79be43a9fe427dca59c2e3771043556295664ef660c6d91a4be4b8100 pytorch_model-00020-of-00029.bin
bf75ade8110925283fb8afc308c56e9ad07a9f2e71df22ea67f0d949ae4f7648 pytorch_model-00021-of-00029.bin
ea1f1ffcaacf693ad615767d74adee279ce8129eb9be90318cf989ce0984ad29 pytorch_model-00022-of-00029.bin
39498459a6d625c97e22cbec87eb28326352507b88ed1f47f0d1e693ce678c05 pytorch_model-00023-of-00029.bin
89f885f70b1332aafff9a6ccd1ad584f8f445ffa94204db62c06d559c42d5dfa pytorch_model-00024-of-00029.bin
82dbc3092b63b3d3bde2a693315d88a38d9f612f9ddde8a2e968124bd6056325 pytorch_model-00025-of-00029.bin
67fa5742c14edf1e7bbf9340616f2097a9bc7b842dc8fd25e99e9b61a255f2f0 pytorch_model-00026-of-00029.bin
9195cf483fc8149013687528c052c9ba1b9407927e1cf90839991d66794d5292 pytorch_model-00027-of-00029.bin
c32d578a160f6ae791f22f1eab5dbc4a99516e707b92d0f36ca5075bbccc6158 pytorch_model-00028-of-00029.bin
32048101b4b3e5b01aacdf418a49b645ea7bb98915c77873fa945726e78a2903 pytorch_model-00029-of-00029.bin
99049b351301fb75b3b0587a484b675cbfd51abe27d2b92eabd385e4c41f97e9 tokenizer.model
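
The checksums above cover the source repository's shards and tokenizer; a minimal verification sketch (local filenames are assumed to match the listing):

```python
# Verify a downloaded file against its sha256 from huggingface-metadata.txt.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

expected = "99049b351301fb75b3b0587a484b675cbfd51abe27d2b92eabd385e4c41f97e9"
assert sha256_of("tokenizer.model") == expected, "checksum mismatch"
```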