---
license: mit
datasets:
  - timdettmers/openassistant-guanaco
language:
  - en
pipeline_tag: text-generation
---

# Falcon-7b_guanaco

lgaalves/falcon-7b_guanaco is an instruction fine-tuned model based on the Falcon 7B transformer architecture.

## Benchmark Metrics

| Metric              | lgaalves/falcon-7b_guanaco | tiiuae/falcon-7b (base) |
|---------------------|----------------------------|-------------------------|
| Avg.                | 56.33                      | 53.42                   |
| ARC (25-shot)       | 50.0                       | 47.87                   |
| HellaSwag (10-shot) | 78.54                      | 78.13                   |
| TruthfulQA (0-shot) | 40.45                      | 34.26                   |

We used the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, with the same version used for the Hugging Face Open LLM Leaderboard. A sketch of how to reproduce the benchmark results is given below.
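A minimal reproduction sketch, assuming the current `lm-eval` Python API (`pip install lm-eval`); the exact task identifiers and harness version used by the leaderboard may differ:

```python
import lm_eval

# Example: ARC (25-shot). Repeat with tasks=["hellaswag"], num_fewshot=10 and
# tasks=["truthfulqa_mc2"], num_fewshot=0 for the other rows of the table.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lgaalves/falcon-7b_guanaco",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```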

## Model Details

* **Trained by:** Luiz G A Alves
* **Model type:** falcon-7b_guanaco is an auto-regressive language model based on the Falcon 7B transformer architecture.
* **Language(s):** English

### How to use

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/falcon-7b_guanaco")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

Or you can load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/falcon-7b_guanaco")
model = AutoModelForCausalLM.from_pretrained("lgaalves/falcon-7b_guanaco")
```
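Once the model and tokenizer are loaded, generation works as in any causal LM; a minimal sketch (the prompt and decoding parameters are illustrative, not the settings used during evaluation):

```python
# Generate a completion with the model and tokenizer loaded above
import torch

prompt = "What is a large language model?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```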

## Training Dataset

`lgaalves/falcon-7b_guanaco` was trained using the following dataset: [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
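To inspect the training data, the dataset can be loaded with the `datasets` library; a minimal sketch, assuming the dataset exposes its standard single `text` column:

```python
from datasets import load_dataset

# Guanaco subset of the OpenAssistant conversations dataset
dataset = load_dataset("timdettmers/openassistant-guanaco")
print(dataset["train"][0]["text"])  # assumes a single "text" column per example
```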

## Training Procedure

`lgaalves/falcon-7b_guanaco` was instruction fine-tuned using LoRA on a single Tesla V100-SXM2-16GB GPU; training took about 3.5 hours.
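The exact training hyperparameters are not published here; the sketch below shows a typical LoRA setup for Falcon-7B with the `peft` library, with assumed rank, alpha, and dropout values:

```python
# Illustrative LoRA configuration with PEFT; hyperparameters are assumptions, not the exact ones used
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")

lora_config = LoraConfig(
    r=16,                                # LoRA rank (assumed)
    lora_alpha=32,                       # scaling factor (assumed)
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```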

## Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it on a downstream task. The model was not extensively tested and may produce false information. It was trained largely on unfiltered content from the internet, so its outputs are far from neutral.