---
license: apache-2.0
pipeline_tag: text-generation
---

# 🥷 Safurai-Csharp-34B

📝 [Article](https://www.safurai.com/blog/introducing-safurai-csharp) 📄 [Paper](https://www.safurai.com/)
This is a [`codellama/CodeLlama-34b-hf`](https://huggingface.co/codellama/CodeLlama-34b-hf) model fine-tuned using QLoRA (4-bit precision) on 13B tokens of evolved C# Q&A data. We obtained state-of-the-art performance on the C# split of the MultiPL-E code LLM benchmark, reaching 56% pass@1 with n=5.

## 🔧 Training

It was trained on 2 x NVIDIA A100 PCIe 80GB GPUs in 7h 40m with the following configuration file:

```yaml
base_model: codellama/CodeLlama-34b-hf
base_model_config: codellama/CodeLlama-34b-hf
model_type: LlamaForCausalLM
tokenizer_type: CodeLlamaTokenizer
is_llama_derived_model: true
hub_model_id: "Safurai/Evol-csharp-v1"

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: Safurai/EvolInstruct-csharp-16k-13B-Alpaca
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./qlora-out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: codellama-csharp
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0003

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 40
eval_steps: 40
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

## 📉 Training loss curve

## 📊 Dataset composition

## 💻 Usage

```python
# pip install transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "Safurai/Evol-csharp-full"
prompt = "User: \n {your question} \n Assistant: "

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    prompt,  # replace {your question} with your actual question
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=1024,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
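Since the training data (`Safurai/EvolInstruct-csharp-16k-13B-Alpaca`) is declared as `type: alpaca` in the config above, an Alpaca-style prompt may also be worth trying alongside the `User:`/`Assistant:` format shown in the usage example. A minimal sketch, assuming the standard Alpaca instruction template (the exact wording seen during training is not confirmed by this card):

```python
# Standard Alpaca instruction template; that the model saw exactly this
# wording during training is an assumption based on `type: alpaca` above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw question in the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Write a C# method that reverses a string."))
```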
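At float16, the 34B weights alone take roughly 68 GB, so the pipeline above needs multiple GPUs or heavy offloading. A minimal sketch of loading the model in 4-bit with bitsandbytes instead, mirroring the 4-bit precision used for fine-tuning; the specific `BitsAndBytesConfig` values are illustrative assumptions, not settings taken from this card:

```python
# pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Safurai/Evol-csharp-full"

# Illustrative 4-bit quantization settings (assumed, not from the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer(
    "User: \n Write a C# method that reverses a string. \n Assistant: ",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_k=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

This cuts the weight footprint to roughly 17-20 GB, enough for a single 80 GB A100 with plenty of headroom, or a 24 GB consumer GPU with some room left for activations.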