---
license: apache-2.0
inference: false
datasets:
  - PengQu/langchain-MRKL-finetune
  - fnlp/moss-003-sft-data
  - anon8231489123/ShareGPT_Vicuna_unfiltered
---

**NOTE:** This is a "delta model" and cannot be used directly. Users must apply it on top of the original LLaMA weights to obtain the actual Vicuna weights. See https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL#model-weights for instructions.
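For reference, a Vicuna-style delta merge usually amounts to adding the base LLaMA parameters to the delta parameters tensor by tensor. The sketch below is a minimal illustration of that idea, not the supported procedure: the paths are placeholders, and the per-tensor addition assumes matching shapes (resized embeddings would need extra handling as described in the linked repository).

```python
# Minimal sketch of a Vicuna-style delta merge (merged = base + delta).
# Paths are placeholders; the linked repository documents the supported procedure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b", torch_dtype=torch.float16
)
delta = AutoModelForCausalLM.from_pretrained(
    "PengQu/vicuna-13b-finetuned-langchain-MRKL", torch_dtype=torch.float16
)

# Add the base weights onto the delta weights tensor by tensor
# (assumes identical parameter names and shapes).
base_state = base.state_dict()
for name, param in delta.state_dict().items():
    param.data += base_state[name]

# The delta model now holds the merged weights; save it as the usable checkpoint.
delta.save_pretrained("path/to/merged-vicuna-13b-finetuned-langchain-MRKL")
AutoTokenizer.from_pretrained(
    "PengQu/vicuna-13b-finetuned-langchain-MRKL"
).save_pretrained("path/to/merged-vicuna-13b-finetuned-langchain-MRKL")
```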

# vicuna-13b-finetuned-langchain-MRKL

## Model details

**Model type:** vicuna-13b-finetuned-langchain-MRKL is an open-source chatbot trained by fine-tuning vicuna-13b on 15 examples in the langchain-MRKL format.

**Where to send questions or comments about the model:** https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL/issues

## Training dataset

Trained for a single epoch on mixed data (ShareGPT + 32 × my.json + moss-003-sft-data).

## Evaluation

## Major Improvement

- Supports langchain-MRKL (agent="zero-shot-react-description"); see the usage sketch below.
- Very fast because of its strict output format (it doesn't generate redundant tokens).
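A hedged usage sketch of the zero-shot ReAct (MRKL) setup mentioned above: the merged model is wrapped in a transformers pipeline and handed to a LangChain agent. The merged-model path, tool selection, and generation settings are illustrative assumptions, not part of the model card.

```python
# Sketch: drive a LangChain zero-shot-react-description (MRKL) agent with the
# merged model. Paths, tools, and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.agents import initialize_agent, load_tools

model_path = "path/to/merged-vicuna-13b-finetuned-langchain-MRKL"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Wrap the model as a LangChain LLM via a text-generation pipeline.
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256
)
llm = HuggingFacePipeline(pipeline=pipe)

# Build a zero-shot ReAct agent with an example tool and run a query.
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True
)
agent.run("What is 2 raised to the 10th power?")
```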