---
base_model: None
tags:
- generated_from_trainer
model-index:
- name: checkpoints-mistral-0.3b
  results: []
license: apache-2.0
---

# checkpoints-mistral-300M

This model was trained without a specified base model (`base_model: None`) on an unspecified dataset.
It reaches a training loss of 2.205 and an evaluation loss of 2.4 (perplexity ≈ 11.02).

## Model description

More information needed

## Training and evaluation results

| Metric     | Train | Eval    |
|------------|-------|---------|
| epoch      | 13.91 | 13.91   |
| loss       | 2.205 | 2.4     |
| perplexity | –     | 11.0228 |

The reported perplexity is the exponential of the evaluation loss: exp(2.4) ≈ 11.02.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch mirroring them is given at the end of this card):

- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 192
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9, 0.95) and epsilon=0.0001
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 4
- num_epochs: 6
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.35.2
- PyTorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1

## Usage

The snippet below loads the tokenizer and model explicitly and generates a continuation for a Japanese prompt; the high-level `pipeline` interface is also shown as a one-line alternative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_NAME = "ayousanz/japanese-mistral-0.3b-base"

# One-line alternative: the text-generation pipeline
pipe = pipeline("text-generation", model=MODEL_NAME)

# Explicit tokenizer/model loading
torch.set_float32_matmul_precision("high")
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
print(DEVICE)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    trust_remote_code=True,
).to(DEVICE)

prompt = "大規模言語モデルとは、"  # "A large language model is, ..."
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        inputs["input_ids"],
        max_new_tokens=256,
        do_sample=True,
        early_stopping=False,
        top_p=0.95,
        top_k=50,
        temperature=0.9,
        no_repeat_ngram_size=2,
        num_beams=3,
    )

outputs_txt = tokenizer.decode(outputs[0])
print(outputs_txt)
```
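
## Training configuration sketch

The sketch below shows one way the hyperparameters listed above could be expressed with `transformers.TrainingArguments`. It is a reconstruction, not the actual training script: the dataset, data collator, and model initialization are not published with this card, `Trainer` uses AdamW rather than plain Adam by default, and "Native AMP" is mapped to `fp16=True` as an assumption.

```python
# Hedged reconstruction of the training setup from the hyperparameter list above.
# Names such as output_dir, train_dataset, and eval_dataset are illustrative only.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="checkpoints-mistral-0.3b",   # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=16,          # 6 per device x 2 GPUs x 16 = 192 effective
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=4,
    num_train_epochs=6,
    fp16=True,                               # "Native AMP"; bf16 would also fit
)

# The Trainer wiring below assumes hypothetical train_dataset / eval_dataset objects:
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
# )
# trainer.train()
```

Launching on two GPUs would typically go through `torchrun --nproc_per_node=2` or `accelerate launch`, which is where the `multi-GPU` / `num_devices: 2` entries above come from.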