---
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
datasets:
- jan-hq/bagel_sft_binarized
- jan-hq/dolphin_binarized
- jan-hq/openhermes_binarized
- jan-hq/bagel_dpo_binarized
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
    max_new_tokens: 40
widget:
- messages:
  - role: user
    content: Tell me about NVIDIA in 20 words
---
# Model description

- Fine-tuned [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) to handle simple tasks with acceptable conversational quality
- Trained on high-quality open-source datasets
- Can be run with [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) on consumer devices
- Fits on laptop dGPUs with as little as 6 GB of VRAM

# Prompt template

ChatML:

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

A runnable usage sketch appears at the end of this card.

# Run this model

You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.

Jan is an open-source ChatGPT alternative that is:

- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints (client sketch at the end of this card)
- 🌍 **Open Source & Free**: We build in public; check out our [GitHub](https://github.com/janhq)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png)

# About Jan

Jan believes in the need for an open-source AI ecosystem and is building the infrastructure and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.

Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.

# LlamaCorn-1.1B-Chat

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` mapping is sketched at the end of this card):
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.9958        | 0.03  | 100   | 1.0003          | -0.0002        | -0.0002          | 0.4930             | -0.0001         | -180.9232      | -195.6078    | -2.6876         | -2.6924       |
| 0.9299        | 1.02  | 3500  | 0.9439          | -0.1570        | -0.2195          | 0.5770             | 0.0625          | -183.1160      | -197.1755    | -2.6612         | -2.6663       |
| 0.9328        | 2.01  | 6900  | 0.9313          | -0.2127        | -0.2924          | 0.5884             | 0.0798          | -183.8456      | -197.7321    | -2.6296         | -2.6352       |
| 0.9321        | 2.98  | 10200 | 0.9305          | -0.2149        | -0.2955          | 0.5824             | 0.0805          | -183.8759      | -197.7545    | -2.6439         | -2.6493       |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__LlamaCorn-1.1B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 36.94 |
| AI2 Reasoning Challenge (25-Shot) | 34.13 |
| HellaSwag (10-Shot)               | 59.33 |
| MMLU (5-Shot)                     | 29.01 |
| TruthfulQA (0-shot)               | 36.78 |
| Winogrande (5-shot)               | 61.96 |
| GSM8k (5-shot)                    |  0.45 |
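As a reference for the ChatML template above, here is a minimal inference sketch with 🤗 Transformers. The repo id `jan-hq/LlamaCorn-1.1B` is inferred from the leaderboard link on this card, and the tokenizer is assumed to ship the ChatML chat template shown in the Prompt template section; adjust both if your copy differs.

```python
# Minimal sketch: chat with LlamaCorn via 🤗 Transformers.
# Assumptions: the repo id matches the leaderboard link above, and the
# tokenizer's chat template renders the ChatML layout shown on this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jan-hq/LlamaCorn-1.1B"  # assumed from the leaderboard link
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about NVIDIA in 20 words"},
]
# apply_chat_template renders the <|im_start|>...<|im_end|> ChatML structure
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation settings mirror the widget defaults in this card's metadata
output = model.generate(input_ids, max_new_tokens=40, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```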
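Because Jan exposes an OpenAI-compatible local server on port `1337`, the model can also be queried with the official `openai` Python client. This is a hedged sketch: the `/v1` base path follows the usual OpenAI-compatible convention, and the model id is hypothetical, so check the model list in your Jan install for the exact name.

```python
# Sketch: query Jan's OpenAI-compatible local server on port 1337.
# The /v1 base path and the model id are assumptions; verify them in Jan.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",
    api_key="not-needed",  # a local server typically ignores the key
)

response = client.chat.completions.create(
    model="llamacorn-1.1b",  # hypothetical id; use the name shown in Jan's model list
    messages=[{"role": "user", "content": "Tell me about NVIDIA in 20 words"}],
    temperature=0.7,
    max_tokens=40,
)
print(response.choices[0].message.content)
```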
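Finally, a sketch of how the reported training hyperparameters map onto `transformers.TrainingArguments`. The numbers are consistent: 2 samples per device × 16 accumulation steps × 2 GPUs gives the listed total train batch size of 64. The surrounding trainer wiring (TRL's `DPOTrainer`, suggested by the reward columns in the results table and the card's tags) is assumed rather than confirmed, and the output directory is hypothetical.

```python
# Sketch: the reported hyperparameters expressed as TrainingArguments.
# The trainer itself (likely TRL's DPOTrainer, per the reward/logps columns
# in the training results) is assumed and not shown here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llamacorn-1.1b-chat",   # hypothetical path
    learning_rate=5e-7,
    per_device_train_batch_size=2,      # "train_batch_size: 2"
    per_device_eval_batch_size=4,       # "eval_batch_size: 4"
    gradient_accumulation_steps=16,     # 2 per device x 16 steps x 2 GPUs = 64 total
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas/epsilon below match the values listed in the report
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```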