---
language:
- en
pipeline_tag: text-generation
tags:
- fireplace
- function-calling
- code
- code-instruct
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 13b
model_type: llama
license: apache-2.0
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/qg49GOlx8zogDOrMTnb89.jpeg)

Fireplace-13b is a function-calling model built on the Llama 2 architecture.
- Built on the llama-2-13b architecture, using [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) as the base model.
- Emphasizes function calling and code-instruct as skills.

(If you're looking for a friendly general-purpose chat model, try ours: [llama-13b](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [70b](https://huggingface.co/ValiantLabs/ShiningValiant).)

## Version

This is Version **1.0** of Fireplace-13b.

The current version of Fireplace-13b uses [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) trained on [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).

Fireplace is the first release in our Build Tools campaign, which aims to deliver helpful open-source capabilities for users and creators. We're excitedly working on our next open-source models now!

We're also working to bring Fireplace to larger model architectures, to maximize baseline model capability and function-calling performance.

## Prompting Guide

Fireplace-13b specializes in function calling and code instruct/chat. See [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for the code capabilities of the base model.

For function calling in this version of the model, the recommended format follows [the training dataset](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2). Function calls made by the assistant are given between `<functioncall>` and `<|endoftext|>` (a rough usage sketch is included at the end of this card):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/WTX8qlnXXw2PQ9t6OI1VB.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/oJGb5hpUTy1KPxIdaIG5F.png)

(Please note that `<|endoftext|>` is not an EOS/EOT token; it is used to indicate the end of an 'ASSISTANT: ' response in the training data. Post-processing should be used to extract the function call from the model's response, as required by the user.)

For handling of function call responses, append "FUNCTION RESPONSE: " to the existing chat history:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/6D0KnhAZPDUOZOJM_btTn.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/Ruwdx_hxGmFdedzQ7d-ZJ.png)

Fireplace is optimized for function-calling and code capabilities rather than general chat, but it has also been trained to use general instruct-chat capabilities:

SYSTEM: You are a helpful assistant.

USER: user chat input

ASSISTANT:

The model may be subject to errors and limitations, including those of the base model and dataset. We offer Fireplace-13b as open source for all to use. The user is responsible for all outputs.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)

Fireplace is created by [Valiant Labs.](http://valiantlabs.ca/)

Try our flagship chat model, [Shining Valiant!](https://huggingface.co/ValiantLabs/ShiningValiant)

[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)

We care about open source. For everyone to use.
We encourage others to finetune further from our models.
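
To make the Prompting Guide above concrete, here is a minimal sketch of the full function-calling loop with the `transformers` library: build a glaive-style prompt, generate, extract the call given between `<functioncall>` and `<|endoftext|>` in post-processing, then append a "FUNCTION RESPONSE: " turn and generate again. The repository id, system prompt wording, `get_current_weather` schema, generation settings, and parsing regex are assumptions made for illustration rather than part of this card; consult the training dataset for the exact formatting.

```python
# Minimal sketch of a function-calling round trip with Fireplace-13b.
# Assumptions (not taken from this card): the exact system prompt wording,
# the get_current_weather schema, the generation settings, and the parsing
# logic. See glaive-function-calling-v2 for the exact training format.
import json
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ValiantLabs/Fireplace-13b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Function definition provided to the model in the SYSTEM turn (glaive-style).
weather_fn = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

prompt = (
    "SYSTEM: You are a helpful assistant with access to the following functions. "
    f"Use them if required - {json.dumps(weather_fn)}\n"
    "USER: What's the weather in Toronto right now?\n"
    "ASSISTANT: "
)


def generate(text: str) -> str:
    """Generate a continuation and return only the newly generated text."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=False)


reply = generate(prompt)

# Post-processing: pull out the call given between <functioncall> and <|endoftext|>.
match = re.search(r"<functioncall>\s*(.*?)\s*<\|endoftext\|>", reply, re.DOTALL)
if match is None:
    print(reply)  # ordinary chat answer, no function call was made
else:
    call_text = match.group(1)  # JSON-like payload; parse per your own conventions
    print("Model requested:", call_text)

    # Run the real function yourself; a canned result stands in here.
    result = {"temperature": "22C", "conditions": "sunny"}

    # Append the assistant turn and the function result, then continue the chat.
    prompt += reply.split("<|endoftext|>")[0] + "<|endoftext|>\n"
    prompt += f"FUNCTION RESPONSE: {json.dumps(result)}\n"
    prompt += "ASSISTANT: "
    print(generate(prompt))
```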
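
For general instruct-chat use without function definitions, the same helper can be pointed at the plain SYSTEM/USER/ASSISTANT format shown in the guide; the question below is only a placeholder.

```python
# Sketch of plain instruct-chat use, reusing generate() from the example above.
chat_prompt = (
    "SYSTEM: You are a helpful assistant.\n"
    "USER: Write a Python function that reverses a string.\n"
    "ASSISTANT: "
)
answer = generate(chat_prompt)
print(answer.split("<|endoftext|>")[0])
```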