---
license: llama2
tags:
- code llama
base_model: BallisticAI/Ballistic-CodeLlama-34B-v1
inference: false
model_creator: BallisticAI
model_type: llama
prompt_template: |
  ### System Prompt
  {system_message}

  ### User Message
  {prompt}

  ### Assistant
quantized_by: BallisticAI
model-index:
- name: Ballistic-CodeLlama-34B-v1
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: n/a
      value: n/a
      name: n/a
      verified: false
---

# CodeLlama 34B v1

- Model creator: [BallisticAI](https://huggingface.co/BallisticAI)
- Based on: [CodeLlama 34B hf](https://huggingface.co/codellama/CodeLlama-34b-hf)
- Merged with: [Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) and [speechless-codellama-34b-v2](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0)
- Additional training with: [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2)

## Description

This repo contains the model weights for [Ballistic-CodeLlama-34B-v1](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1).

## Repositories available

* [AWQ model for GPU inference](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1-AWQ)
* [GGUF model for CPU inference](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1-GGUF)

## How to Prompt the Model

This model accepts the Alpaca/Vicuna instruction format. For example:

```
### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
...
```

## Bias, Risks, and Limitations

This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployment.

## Thanks

Thanks to:

- The original Llama team
- [Phind](https://huggingface.co/phind)
- [uukuguy](https://huggingface.co/uukuguy)
- [jondurbin](https://huggingface.co/jondurbin)
- Everyone else involved in the open-source AI/ML community.
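The prompt template shown under "How to Prompt the Model" can be assembled programmatically before being passed to an inference client. A minimal Python sketch (the `build_prompt` helper is hypothetical, not part of this repo):

```python
def build_prompt(system_message: str, prompt: str) -> str:
    """Assemble a prompt in this model's expected format.

    Hypothetical helper; the section headers mirror the
    prompt template in the model card above.
    """
    return (
        f"### System Prompt\n{system_message}\n\n"
        f"### User Message\n{prompt}\n\n"
        "### Assistant\n"
    )


text = build_prompt(
    "You are an intelligent programming assistant.",
    "Implement a linked list in C++",
)
```

The resulting string ends with the `### Assistant` header so the model continues generation from the assistant's turn.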