---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - sft
base_model: mpasila/JP-EN-Translator-2K-steps-7B
datasets:
  - NilanE/ParallelFiction-Ja_En-100k
  - mpasila/ParallelFiction-Ja_En-100k-alpaca
---

This is a 4bpw ExLlamaV2 quantization of [mpasila/JP-EN-Translator-2K-steps-7B](https://huggingface.co/mpasila/JP-EN-Translator-2K-steps-7B), made using the default calibration dataset.
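If you want to run the quant directly with the exllamav2 Python library rather than through a frontend, a minimal sketch looks like the following. The local directory path and the sampling settings are assumptions, not part of this repo:

```python
# Minimal ExLlamaV2 inference sketch (paths and settings are assumptions).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./JP-EN-Translator-2K-steps-7B-exl2-4bpw"  # hypothetical local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # assumed value, tune to taste

prompt = "..."  # fill in using the Alpaca template shown below
print(generator.generate_simple(prompt, settings, num_tokens=300))
```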

# Original Model card

This is an experimental model and may not perform that well. The dataset used for training is a modified version of NilanE/ParallelFiction-Ja_En-100k.

After training with an 8k context length, performance did not appear to improve much at all. I'm not sure whether I should keep training it (which is costly), fix some issues with the dataset first (like entries starting with "Ch" or "Chapter"), or go back to finetuning Finnish models.

Prompt format: Alpaca

```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
```
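The template slots in straightforwardly with Python's `str.format`; the instruction wording below is an assumption based on typical Alpaca-style translation finetunes, not taken from this repo:

```python
ALPACA_TEMPLATE = (
    "Below is a translation task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n"
    "### Input:\n{}\n\n"
    "### Response:\n{}"
)

# Hypothetical instruction wording; check the training dataset for the exact phrasing.
prompt = ALPACA_TEMPLATE.format(
    "Translate this Japanese text to English.",
    "吾輩は猫である。名前はまだ無い。",
    "",  # left empty so the model generates the translation
)
print(prompt)
```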

# Uploaded model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** augmxnt/shisa-base-7b-v1

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
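For further finetuning or fast inference on the unquantized weights (not this EXL2 quant), the model can presumably be loaded with Unsloth as well; a minimal sketch, where the sequence length and 4-bit flag are assumptions:

```python
# Minimal Unsloth loading sketch (settings are assumptions, not from this repo).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mpasila/JP-EN-Translator-2K-steps-7B",  # unquantized base of this quant
    max_seq_length=8192,  # matches the 8k training context mentioned above
    load_in_4bit=True,    # bitsandbytes 4-bit, unrelated to the EXL2 quant here
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```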