---
language:
- ko
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **PlatYi-34B-LoRA**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
PlatYi-34B-LoRA is an auto-regressive language model based on the Yi-34B transformer architecture.

**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]

**Base Model**
[01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B)

**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)

**Notice**
The model was fine-tuned with LoRA, using `lora_r = 16`.

# **Model Benchmark**

## Open LLM Leaderboard

Scores follow the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-Q | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| **PlatYi-34B-LoRA** | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |

# Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/PlatYi-34B-LoRA"

# Load the model in fp16 and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
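As a side note on the `lora_r = 16` setting mentioned above, the parameter savings of a rank-16 adapter can be sketched with plain NumPy. The hidden sizes below are illustrative assumptions, not the actual shapes of Yi-34B's weight matrices:

```python
import numpy as np

# LoRA replaces a full weight update dW of shape (d, k) with a low-rank
# product B @ A, where B is (d, r) and A is (r, k), r << min(d, k).
d, k, r = 7168, 7168, 16  # illustrative dimensions; lora_r = 16 as in the card

full_update_params = d * k          # params to fine-tune the full matrix
lora_params = d * r + r * k         # params in the two LoRA factors

# The two factors still reconstruct a full-shape (d, k) update.
A = np.random.randn(r, k)
B = np.zeros((d, r))                # B starts at zero, so the initial update is zero
delta_w = B @ A
assert delta_w.shape == (d, k)

print(f"full fine-tune params per matrix: {full_update_params:,}")
print(f"LoRA (r={r}) params per matrix:   {lora_params:,}")
print(f"reduction: {full_update_params / lora_params:.0f}x")
```

For these (hypothetical) square 7168-dimensional matrices, rank-16 LoRA trains 224x fewer parameters per adapted matrix than full fine-tuning, which is what makes adapting a 34B-parameter base model tractable.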