---
license: apache-2.0
tags:
  - code
  - mistral
---

# Mistral-7B-codealpaca

We are excited to introduce the Mistral-7B-codealpaca model. This variant is fine-tuned on a code-instruction dataset and shows promise as a coding assistant for developers. We welcome testers and enthusiasts to help evaluate its performance.

## Training Details

The model was trained on 3× RTX 3090 GPUs in a homelab setup and built with Axolotl.

## Quantised Model Links

## Dataset

theblackcat102/evol-codealpaca-v1
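For a quick look at the training data, the dataset named in the configuration below can be loaded straight from the Hugging Face Hub. A minimal sketch using the `datasets` library (the `train` split name is an assumption worth checking against the dataset card):

```python
from datasets import load_dataset

# Load the instruction-tuning dataset referenced in the Axolotl config below.
ds = load_dataset("theblackcat102/evol-codealpaca-v1", split="train")

# Inspect one instruction/response example.
print(ds[0])
```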

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
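To use the template at inference time, fill `{prompt}` with your instruction and generate as usual. A minimal sketch with the `transformers` library, assuming the merged weights are published under a repo id like `Nondzu/Mistral-7B-codealpaca` (hypothetical; substitute the actual checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nondzu/Mistral-7B-codealpaca"  # hypothetical repo id; replace with the published checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the Alpaca template with a coding instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```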

## Performance (evalplus)


The results are better than we expected:

- Base: {'pass@1': 0.47560975609756095}
- Base + Extra: {'pass@1': 0.4329268292682927}

For reference, we've included the performance of the original Mistral-7B-Instruct model alongside the Mistral-7B-code-16k-qlora model.

**Nondzu/Mistral-7B-code-16k-qlora**:

- Base: {'pass@1': 0.3353658536585366}
- Base + Extra: {'pass@1': 0.2804878048780488}

**mistralai/Mistral-7B-Instruct-v0.1**:

- Base: {'pass@1': 0.2926829268292683}
- Base + Extra: {'pass@1': 0.24390243902439024}
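For context on the numbers: evalplus scores each problem against the original HumanEval tests ("Base") and against its augmented test suite ("Base + Extra"), and with one greedy completion per problem pass@1 is simply the fraction of problems whose completion passes all tests. An illustrative snippet (the outcome list is hypothetical, chosen to reproduce the Base score above, 78/164):

```python
# Illustrative only: pass@1 with a single completion per problem is the
# fraction of problems whose generated solution passes every test.
def pass_at_1(passed: list[bool]) -> float:
    return sum(passed) / len(passed)

# Hypothetical outcomes for the 164 HumanEval problems.
base_results = [True] * 78 + [False] * 86
print(pass_at_1(base_results))  # 0.4756..., matching the Base score above
```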

## Model Configuration

The following is the Axolotl configuration used to train Mistral-7B-codealpaca-lora:

```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.1
base_model_config: mistralai/Mistral-7B-Instruct-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
  - path: theblackcat102/evol-codealpaca-v1
    type: oasst
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./nondzu/Mistral-7B-codealpaca-test14
adapter: lora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
```
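Since the adapter is a LoRA trained on top of `mistralai/Mistral-7B-Instruct-v0.1` (`adapter: lora`, `load_in_8bit: true` above), it can also be attached to the base model with `peft`. A minimal sketch, assuming the adapter weights are published under a repo id like `Nondzu/Mistral-7B-codealpaca-lora` (hypothetical; replace with the actual adapter repo):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "Nondzu/Mistral-7B-codealpaca-lora"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base model in 8-bit, mirroring the training config.
base_model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# Attach the LoRA adapter on top of the 8-bit base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```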


## Additional Projects

For other related projects, check out: