---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: huyhuyvu01/VietLlama2_Law_Pretrain_7B
model-index:
- name: VinaLlamaLawBaseFinetune
results: []
---
# VinaLlamaLawBaseFinetune
This model is a fine-tuned version of [huyhuyvu01/VietLlama2_Law_Pretrain_7B](https://huggingface.co/huyhuyvu01/VietLlama2_Law_Pretrain_7B) on a private law_finetune dataset.
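
A minimal loading sketch is shown below, assuming the LoRA adapter is published under a repository id matching this card's name (`huyhuyvu01/VinaLlamaLawBaseFinetune` is a guess; substitute the actual adapter path) and that the base model is the one linked above. The prompt is only a placeholder, as the exact prompt template used during fine-tuning is not documented here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model referenced in this card.
base_id = "huyhuyvu01/VietLlama2_Law_Pretrain_7B"
# Assumed adapter repo id -- replace with the actual path of this repository.
adapter_id = "huyhuyvu01/VinaLlamaLawBaseFinetune"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Placeholder prompt; adjust to the template used at fine-tuning time.
prompt = "Câu hỏi: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```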
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
- mixed_precision_training: Native AMP
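
For reference, the hyperparameters above map roughly onto Hugging Face `TrainingArguments` as sketched below. This is an illustrative reconstruction, not the LLaMA-Factory configuration actually used; the `output_dir` value is assumed.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the values listed above.
training_args = TrainingArguments(
    output_dir="VinaLlamaLawBaseFinetune",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,          # effective train batch size 4 x 4 = 16
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    seed=42,
    fp16=True,                              # mixed precision (native AMP)
    optim="adamw_torch",                    # Adam with betas=(0.9, 0.999), eps=1e-8
)
```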
Usage and other considerations: please refer to the [Llama 2](https://github.com/facebookresearch/llama) repository.
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- PyTorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Disclaimer
This project is built upon vilm/vinallama-7b-chat, which is built upon Meta's Llama-2 model. It is essential to strictly adhere to the open-source license agreement of Llama-2 when using this model. If you incorporate third-party code, please ensure compliance with the relevant open-source license agreements.
Note that the content generated by the model may be affected by various factors, such as decoding settings, random sampling, and potential inaccuracies introduced by quantization. Consequently, this project offers no guarantee regarding the accuracy of the model's outputs and disclaims any responsibility for consequences resulting from the use of the model's resources and outputs.
Developers who use the models from this project for commercial purposes must adhere to local laws and regulations to ensure the compliance of the model's output content. This project is not accountable for any products or services derived from such usage.
### Contact
huyhuyvu01@gmail.com (personal email)
https://github.com/huyhuyvu01 (GitHub)