This is an ExLlamaV2 quantized model (4 bpw) of mpasila/JP-EN-Translator-1K-steps-7B-merged, made using the default calibration dataset.

Original Model card

This is an experimental model and may not perform well. The dataset used is a modified version of NilanE/ParallelFiction-Ja_En-100k.

The next version should be better (I'll use a GPU with more memory, since the dataset uses fairly long samples).

Prompt format: Alpaca

Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
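For convenience, the Alpaca template above can be filled in programmatically. Below is a minimal sketch (the helper name and argument names are illustrative, not part of the model's API); the instruction and input slots take the translation directive and the Japanese source text, and the response slot is left empty for the model to complete.

```python
def build_prompt(instruction: str, source_text: str) -> str:
    """Fill the Alpaca template used by this model.

    `instruction` goes in the ### Instruction: slot, `source_text`
    in the ### Input: slot; the ### Response: slot is left empty
    for the model to complete.
    """
    return (
        "Below is a translation task, paired with an input that provides "
        "further context. Write a response that appropriately completes "
        "the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{source_text}\n\n"
        "### Response:\n"
    )

# Example usage with a short Japanese sentence:
prompt = build_prompt(
    "Translate this Japanese text to English.",
    "吾輩は猫である。",
)
print(prompt)
```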

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: augmxnt/shisa-base-7b-v1

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

