---
base_model: MerlynMind/merlyn-education-corpus-qa-v2
inference: false
license: apache-2.0
model_creator: Merlyn Mind
model_name: Merlyn Education Corpus QA v2
model_type: llama
prompt_template: |
  Instruction:\t{system_message}
  Conversation:
  'user1':\tuser message to analyse
  'user2':\tuser message to analyse
  Response:
quantized_by: TheBloke
tags:
  - MerlynMind
  - education
---

TheBloke's LLM work is generously supported by a grant from andreessen horowitz (a16z)


Merlyn Education Corpus QA v2 - AWQ

Description

This repo contains AWQ model files for Merlyn Mind's Merlyn Education Corpus QA v2.

These files were quantised using hardware kindly provided by Massed Compute.

About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

It is supported by:

  • text-generation-webui, using Loader: AutoAWQ
  • vLLM, version 0.2 and later
  • Hugging Face Text Generation Inference (TGI), version 1.1.0 and later
  • Transformers, version 4.35.0 and later
  • AutoAWQ, for use from Python code

Repositories available

  • AWQ model for GPU inference: TheBloke/merlyn-education-corpus-qa-v2-AWQ (this repository)
  • Merlyn Mind's original unquantised model: MerlynMind/merlyn-education-corpus-qa-v2

Prompt template: Merlyn-Education

Instruction:\t{system_message}
Conversation:
'user1':\tuser message to analyse
'user2':\tuser message to analyse
Response:
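
Filled in programmatically, the template might look like the following minimal Python sketch (the instruction and user message are illustrative placeholders, not part of the template itself):

# Minimal sketch of building a Merlyn-Education prompt; the values are illustrative
system_message = "You are to try to answer the following question using only the pieces of information given."
user_message = "How old is the Solar System?"

prompt = (
    f"Instruction:\t{system_message}\n"
    "Conversation:\n"
    f"'user1':\t{user_message}\n"
    "Response:"
)
print(prompt)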

Licensing

The creator of the source model has listed its license as apache-2.0, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: Merlyn Mind's Merlyn Education Corpus QA v2.

Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS  | AWQ Dataset | Seq Len | Size    |
| ------ | ---- | --- | ----------- | ------- | ------- |
| main   | 4    | 128 | wikitext    | 4096    | 7.25 GB |
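
If you prefer to fetch the files outside of a UI, the huggingface_hub client can download the whole repository; a minimal sketch (the local directory path is just an example):

from huggingface_hub import snapshot_download

# Download the sharded safetensors files from the main branch (local_dir is an example path)
snapshot_download(
    repo_id="TheBloke/merlyn-education-corpus-qa-v2-AWQ",
    local_dir="merlyn-education-corpus-qa-v2-AWQ",
)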

How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

  1. Click the Model tab.
  2. Under Download custom model or LoRA, enter TheBloke/merlyn-education-corpus-qa-v2-AWQ.
  3. Click Download.
  4. The model will start downloading. Once it's finished it will say "Done".
  5. In the top left, click the refresh icon next to Model.
  6. In the Model dropdown, choose the model you just downloaded: merlyn-education-corpus-qa-v2-AWQ
  7. Select Loader: AutoAWQ.
  8. Click Load, and the model will load and be ready for use.
  9. If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
  10. Once you're ready, click the Text Generation tab and enter a prompt to get started!

Multi-user inference server: vLLM

Documentation on installing and using vLLM can be found in the vLLM project documentation.

  • Please ensure you are using vLLM version 0.2 or later.
  • When using vLLM as a server, pass the --quantization awq parameter.

For example:

python3 -m vllm.entrypoints.api_server --model TheBloke/merlyn-education-corpus-qa-v2-AWQ --quantization awq --dtype auto
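
Once the server is running you can query it over HTTP. A minimal sketch, assuming the demo api_server's /generate endpoint on the default port 8000 (check the vLLM documentation for the exact request schema):

import requests

# The prompt should already be formatted with the Merlyn-Education template
payload = {
    "prompt": "Instruction:\tYou are a helpful assistant.\nConversation:\n'user1':\tTell me about AI\nResponse:",
    "max_tokens": 128,
    "temperature": 0.7,
}
response = requests.post("http://localhost:8000/generate", json=payload)
print(response.json())
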
  • When using vLLM from Python code, again set quantization=awq.

For example:

from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# system_message is not defined in the original snippet; this instruction is an illustrative example
system_message = "You are to try to answer the following question using only the pieces of information given."

# Each user prompt is slotted into the 'user1' turn of the Merlyn-Education template
prompt_template = f'''Instruction:\t{system_message}
Conversation:
'user1':\t{{prompt}}
Response:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/merlyn-education-corpus-qa-v2-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: ghcr.io/huggingface/text-generation-inference:1.1.0

Example Docker parameters:

--model-id TheBloke/merlyn-education-corpus-qa-v2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
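
Put together as a full command, it might look like this (a sketch: adjust GPU flags, shared-memory size, volume mounts and port mapping to your environment):

docker run --gpus all --shm-size 1g -p 3000:3000 \
  ghcr.io/huggingface/text-generation-inference:1.1.0 \
  --model-id TheBloke/merlyn-education-corpus-qa-v2-AWQ \
  --port 3000 --quantize awq \
  --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096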

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

pip3 install "huggingface-hub>=0.17.0"

from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
# system_message is not defined in the original snippet; this instruction is an illustrative example
system_message = "You are to try to answer the following question using only the pieces of information given."

# The user prompt is slotted into the 'user1' turn of the Merlyn-Education template
prompt_template = f'''Instruction:\t{system_message}
Conversation:
'user1':\t{prompt}
Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: ", response)

Inference from Python code using Transformers

Install the necessary packages

pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl

If you have problems installing AutoAWQ using the pre-built wheels, install it from source instead:

pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .

Transformers example code (requires Transformers 4.35.0 and later)

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/merlyn-education-corpus-qa-v2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
# system_message is not defined in the original snippet; this instruction is an illustrative example
system_message = "You are to try to answer the following question using only the pieces of information given."

# The user prompt is slotted into the 'user1' turn of the Merlyn-Education template
prompt_template = f'''Instruction:\t{system_message}
Conversation:
'user1':\t{prompt}
Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)

Compatibility

The files provided are tested to work with:

  • text-generation-webui, using Loader: AutoAWQ
  • vLLM, version 0.2 and later
  • Hugging Face Text Generation Inference (TGI), version 1.1.0 and later
  • Transformers, version 4.35.0 and later
  • AutoAWQ, version 0.1.6 and later, for use from Python code

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute

Thanks to the chirper.ai team!

Thanks to Clay from gpus.llm-utils.org!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

Original model card: Merlyn Mind's Merlyn Education Corpus QA v2

Merlyn-Education Corpus QA

merlyn-education-corpus-qa-v2 is a 13b parameter decoder-style transformer model for the education domain. It is fine-tuned from a llama2-13b base model.

This model was trained by Merlyn Mind.

The model provides an answer to a question based on the given context.

Model Date

August 21, 2023

Model License

Apache-2.0

Usage

Loading model and tokenizer:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "MerlynMind/merlyn-education-corpus-qa-v2"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
model.to(device) # move to device

Prompt example:

info = '''Information:\tThe Solar System is about 4.6 billion years old. The Sun formed by gravity in a large molecular cloud. It is mainly hydrogen, which it converts into helium.
Information:\tThe formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud.
Information:\tAstronomers are now more or less certain that the order of the planets was not always as it is today. Knowing what we know today, we can see the Solar System is strange. All other planetary systems we are able to study have their largest planet close to their star. Also we have noticed other oddities in the Solar System. Mars is smaller than it ought to be, and the asteroid belt has been disturbed.
Information:\tFor thousands of years, people had no need for a name for the "Solar System". They thought the Earth stayed still at the center of everything (geocentrism). The Greek philosopher Aristarchus of Samos suggested that there was a special order in the sky. Nicolaus Copernicus was the first to develop a mathematical system that described what we now call the "Solar System". This was called a "new system of the world". In the 17th century, Galileo Galilei, Johannes Kepler and Isaac Newton began to understand physics more clearly. People began to accept the idea that the Earth is a planet that moves around the Sun, and that the planets are worlds, and that all worlds are governed by the same physical laws. More recently, telescopes and space probes sometimes let us see details directly. All inner planets have surface features. The gas giants (as the name suggests) have surfaces whose make-up is gradually being discovered.
Information:\tThere are eight planets in the Solar System. From closest to farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. The first four planets are called terrestrial planets. They are mostly made of rock and metal, and they are mostly solid. The last four planets are called gas giants. This is because they are much larger than other planets and are mostly made of gas.
'''
qs = "Question:\tHow old is the Solar System?"

prompt = tokenizer.bos_token
prompt += '''Instruction:\tYou are to try to answer the following question using only the pieces of information given.
Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property.
Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string.
Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property.
''' + info + qs + " Response:"

We recommend using the newline character as a stopping criterion, as follows:

from transformers import StoppingCriteria, StoppingCriteriaList

eos_tokens = [tokenizer.eos_token,'\n']
eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens]

class MultipleEOSTokensStoppingCriteria(StoppingCriteria):
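    """Stop generation when the most recently generated token is one of the given EOS token ids."""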
    def __init__(self, eos_token_ids):
        self.eos_token_ids = set(eos_token_ids)
    def __call__(self, input_ids, scores) -> bool:
        if input_ids.shape[-1] <= 1:
            return False
        for eos_token_id in self.eos_token_ids:
            if eos_token_id == input_ids[0, -1].item():
                return True
        return False

# Define stopping criteria
multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids)
stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor])

Inference:

inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(device)
generate_ids = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.0,
    num_beams=2,
    top_p=1,
    stopping_criteria=stopping_criteria
)
response = tokenizer.decode(generate_ids[0],
                      skip_special_tokens=True,
                      clean_up_tokenization_spaces=True)

Example output (after response processing):

[{"answerable": "true", "answer": "4.6 billion years"}]

Evaluation

This model is trained on a larger dataset than the Pythia-based v1 model, yielding better correctness and fewer hallucinations when evaluated on a larger, more diverse benchmarking dataset.