Upload folder using huggingface_hub (#2)
- fc3f33f33f0f4ae2f1ae622ab5a4d4c1166e720852c96462db2dce7787c0fd26 (6adcac14f27494be185f489fa686bd764d610f8d)
- 98952edc8cf30f9c96d381b5b61f24fb9190e0f47c244bb4e96d19598a1593ab (18cd3fa5f45575e7593ef235efa716d69411042f)
- 45eeb925d078937066917344591444d4fb0dcde346ad6d0be4f1e34accb7ff17 (508e946e36fcc3b3bbbdc4b71c56c89a00d2e19d)
- dec405f932eec7e6036e57f041b4a88ba3cda0244e754dca51fdcb2255dc87c2 (8d715e2171b9a34ca1da27f525b1f3fb11147319)
- 06fe651bd435bf521fa9618051d54e6e29bd8a36f74c0fd951a68dc2fd6d959e (4416c9a53548f8765c6c49f9bd7f2938e4d46553)
- 6b06906991d1dce99c0708ec68cb94f6eec8e7346da89c2d2f858eef730a00c6 (9e6b9b337e228cd7fd33276fdd8c6f8794b925c0)
- 7dca25b6922f057efd86eabd5148e99220d454aab70b94f4fc492255110d0f65 (622207d02cfe7dcf3f582171620a1f0246dfabf9)
- 240e00fba4fd8880658b0e7e2b0857580907d4a9b5033b4e658e4734e728f00b (f84746946edea75a73f60efce29641ccf231fdb5)
- ef2e6451544498cc8029638c407b0bef6c2c3b7827ff99e4ff8724f42d29982c (1c17239f03c204e8d9720dd80de2e1a804653413)
- af901daa122c8438a4cd71b59b127d9b5a008ef03ca9d08d1b0bf1059ba78e10 (31d4812feade68c1c06a16fcee87444e5a06a5d3)
- 99ebe2f9eca6273423bc41eed75122b4460746d2bea32e0746ec0ec7a6d5d274 (b62d445225d012d46054297f99b16cc65f43fa27)
- a512dee76dfea181109a8bf63c51ac83f0ed6ce2de419345c2cfc7c89369f7bc (511e7e5808bb48b1e87d37c3be9f4f672bd043de)
- Delete .ipynb_checkpoints (61bd86eb52a998563dce9bbceeff6dc3d9d6b7ba)
- 1.1 examples (880c754c89c92751a9642394a14ddb5e5a425970)
Co-authored-by: scott <sequelbox@users.noreply.huggingface.co>
- README.md +18 -11
- config.json +1 -1
- generation_config.json +1 -1
- model-00001-of-00011.safetensors +1 -1
- model-00002-of-00011.safetensors +1 -1
- model-00003-of-00011.safetensors +1 -1
- model-00004-of-00011.safetensors +1 -1
- model-00005-of-00011.safetensors +1 -1
- model-00006-of-00011.safetensors +1 -1
- model-00007-of-00011.safetensors +1 -1
- model-00008-of-00011.safetensors +1 -1
- model-00009-of-00011.safetensors +1 -1
- model-00010-of-00011.safetensors +1 -1
- model-00011-of-00011.safetensors +1 -1
- tokenizer_config.json +1 -1
README.md
@@ -23,17 +23,20 @@ license: apache-2.0
 Fireplace-13b is a function calling model built on the Llama 2 architecture.
 
 - Built on llama-2-13b architecture, using [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) as the base model.
 - Emphasizes function calling and code-instruct as skills.
+- Version 1.1 improves output structure for a superior user experience.
 
 (If you're looking for a friendly general-purpose chat model, try ours: [llama-13b](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [70b](https://huggingface.co/ValiantLabs/ShiningValiant))
 
 ## Version
 
-This is Version **1.
+This is Version **1.1** of Fireplace-13b.
 
 The current version of Fireplace-13b uses [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) trained on [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).
 
 Fireplace is the first release in our Build Tools campaign, to deliver helpful open source capabilities for users and creators.
 
+**The next release in our Build Tools series will be coming soon, with an initial release at 70b parameters** - we're very excited to bring this to everyone!
+
 We're also working to bring Fireplace to larger model architectures, to maximize baseline model capability and function-calling performance.
@@ -42,25 +45,29 @@ Fireplace-13b specializes in function calling and code instruct/chat.
 
 See [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for code capabilities of the base model.
 
-For function calling in this version of the model, the recommended format is
+For function calling in this version of the model, the recommended format is to deliver the function(s) in a system message and then proceed with chat:
+
+SYSTEM: You are Fireplace, an expert code assistant with access to the following functions. Use them if required -
+{
+    "name": "function_name",
+}
+
+USER: Can you (do thing from function)?
+
+ASSISTANT:
+
+Assistant will deliver function call responses between \<functioncall> and <|endoftext|>:
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/rpfkQKAS0E3483Qxn1HIF.png)
+
+(Please note that <|endoftext|> is not an EOS/EOT token, it is used to indicate the end of function call responses specifically.)
 
 For handling of function call responses, append "FUNCTION RESPONSE: " to the existing chat history:
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/6D0KnhAZPDUOZOJM_btTn.png)
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/2bKX9Zsk6pHJxKYqEprcq.png)
 
 Fireplace is optimized for function/code capabilities and not general chat, but it has also been trained to utilize general instruct-chat capabilities:
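The function-calling format described in the card above can be sketched as plain prompt assembly. Only the SYSTEM/USER/ASSISTANT layout, the `<functioncall>`/`<|endoftext|>` delimiters, and the "FUNCTION RESPONSE: " convention come from the model card; the function schema and helper names here are illustrative:

```python
# Sketch of assembling a Fireplace-13b function-calling prompt per the card.
# The get_weather schema and the helper names are illustrative assumptions.

FUNCTION_SPEC = """{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}
}"""

def build_prompt(user_message: str, history: str = "") -> str:
    """Build the SYSTEM -> USER -> ASSISTANT prompt described in the card."""
    system = (
        "SYSTEM: You are Fireplace, an expert code assistant with access "
        "to the following functions. Use them if required -\n" + FUNCTION_SPEC
    )
    return f"{system}\n{history}USER: {user_message}\nASSISTANT:"

def extract_function_call(completion: str):
    """Pull the payload the model emits between <functioncall> and <|endoftext|>."""
    start = completion.find("<functioncall>")
    if start == -1:
        return None  # model answered in plain chat, no function call
    end = completion.find("<|endoftext|>", start)
    payload = completion[start + len("<functioncall>"):end if end != -1 else None]
    return payload.strip()

def append_function_response(prompt: str, completion: str, response_json: str) -> str:
    """Feed the tool output back using the FUNCTION RESPONSE convention."""
    return f"{prompt}{completion}\nFUNCTION RESPONSE: {response_json}\nASSISTANT:"
```

Note that, per the card, `<|endoftext|>` only terminates the function-call payload; generation should still stop on the model's regular EOS token.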
config.json
@@ -22,7 +22,7 @@
 "rope_theta": 1000000,
 "tie_word_embeddings": false,
 "torch_dtype": "float32",
-"transformers_version": "4.
+"transformers_version": "4.37.2",
 "use_cache": true,
 "vocab_size": 32016
 }
generation_config.json
@@ -2,5 +2,5 @@
 "_from_model_config": true,
 "bos_token_id": 1,
 "eos_token_id": 2,
-"transformers_version": "4.
+"transformers_version": "4.37.2"
 }
model-00001-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:9261ecc9bf641f8194d9dfd707a1c627e1646d086b54bf1e5368cc8c860b3812
 size 4881575536
model-00002-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0df40517f03d482f5a2d6473c6af305003672cea8feaae5d1288d84d93a25ac7
 size 4970418112
model-00003-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:559377e2227c601d98c925cdfa40cfe812ec8ce3e23fada0a95fcc469f4947a3
 size 4970418120
model-00004-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:37a4ec442035d5413c07b6c6119c9cb4c778134f3c72c98f49f9a4d2006116b4
 size 4970418144
model-00005-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fa47ed211ebb221bd983c5b1b94880016733081095dfece70f27bbfad24a4d23
 size 4970418144
model-00006-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:1e99052645564812a6f1b0b83aab13dee1b69055fd595b0fd08b4daec507059d
 size 4792119040
model-00007-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3e7b3d7c1b78eb105bf733b5aa01f08e266ff238ed8e1571a8341ddc1a7c4d9b
 size 4792160232
model-00008-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d9f3e00e10671dd8992efc35587e10d400e27659cc05db98b75e3483c47b7dd6
 size 4792160224
model-00009-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8152175b1f95a51b56ac83961600a07a67bd1a527bb4c44b7b12842efe4c4947
 size 4970418144
model-00010-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a9e752b04a0bf5606dd72922f197610ff1af8bef01a1f6726062d1a5df49c8fc
 size 4970418144
model-00011-of-00011.safetensors
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6de442a4ca387c6cd1efa09072a849a4fc841d05b06d26d62fca4fbaa90699f3
 size 2983630864
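Each of the `.safetensors` diffs above fills in the `oid sha256:` line of a Git LFS pointer file: a spec-version line, the SHA-256 digest of the real file, and its byte size. A minimal sketch of producing such a pointer locally (the helper name is illustrative; the three-line layout follows the LFS pointer spec):

```python
# Build a Git LFS pointer (spec v1) for a local file, streaming the hash
# so multi-gigabyte shards don't need to fit in memory.
import hashlib
import os

def make_lfs_pointer(path: str) -> str:
    """Return the three-line LFS pointer text for the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{digest.hexdigest()}\n"
        f"size {os.path.getsize(path)}\n"
    )
```

Comparing a freshly computed pointer against the one stored in the repo is a quick way to confirm a shard uploaded intact.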
tokenizer_config.json
@@ -77,7 +77,7 @@
 "prefix_token": "▁<PRE>",
 "sp_model_kwargs": {},
 "suffix_token": "▁<SUF>",
-"tokenizer_class": "
+"tokenizer_class": "CodeLlamaTokenizer",
 "unk_token": "<unk>",
 "use_default_system_prompt": false
 }
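The config diffs in this commit all fill in values (`transformers_version`, `tokenizer_class`, the LFS `oid` lines) that were previously empty or truncated. A quick local sanity check along these lines (the checker itself is illustrative, not part of this repo; file names match the files-changed list above) can catch an incomplete checkpoint before upload:

```python
# Verify that a local checkpoint directory has complete, parseable config files
# with the keys this commit repairs. Purely illustrative helper.
import json
from pathlib import Path

REQUIRED_KEYS = {
    "config.json": ["transformers_version", "vocab_size"],
    "generation_config.json": ["transformers_version"],
    "tokenizer_config.json": ["tokenizer_class"],
}

def check_checkpoint(repo_dir: str) -> list:
    """Return a list of problems found: missing files, bad JSON, empty keys."""
    problems = []
    for name, keys in REQUIRED_KEYS.items():
        path = Path(repo_dir) / name
        if not path.is_file():
            problems.append(f"{name}: missing")
            continue
        try:
            data = json.loads(path.read_text())
        except json.JSONDecodeError:
            problems.append(f"{name}: invalid JSON")
            continue
        for key in keys:
            if not data.get(key):  # absent, empty string, or null
                problems.append(f"{name}: {key} missing or empty")
    return problems
```

An empty returned list means every required key is present and non-empty.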