---
library_name: transformers
license: apache-2.0
datasets:
- pythainlp/han-instruct-dataset-v2.0
language:
- th
pipeline_tag: text-generation
---

# Model Card for Han LLM 7B v1

Han LLM 7B v1 is a model fine-tuned on the [Han Instruct Dataset v2.0](https://huggingface.co/datasets/pythainlp/han-instruct-dataset-v2.0). The model works with Thai.

Base model: [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)

## Model Details

### Model Description

The model was fine-tuned with LoRA.

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Wannaphong Phatthiyaphaibun
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** Thai
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

Fine-tuned with LoRA:

- r: 48
- lora_alpha
- 1 epoch
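
As a reference for the setup above, here is a minimal PEFT `LoraConfig` sketch. Only the rank (r = 48) and the base model (scb10x/typhoon-7b) come from this card; `lora_alpha`, `lora_dropout`, and `target_modules` are not specified here, so the values below are placeholder assumptions.

```python
# Minimal sketch of the LoRA fine-tuning setup described above, using PEFT.
# Only r=48 and the base model come from this card; the other hyperparameters
# are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "scb10x/typhoon-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=48,                                  # rank stated in the Training Procedure section
    lora_alpha=16,                         # assumption: not specified in this card
    lora_dropout=0.05,                     # assumption
    target_modules=["q_proj", "v_proj"],   # assumption: typical attention projections
    task_type="CAUSAL_LM",
)

# Wrap the base model with the LoRA adapters and report trainable parameter count.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```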