
LLaMa Lite: Reduced-Scale, Experimental Versions of LLaMA and LLaMa 2

In this series of repos, we present an open-source reproduction of Meta AI's LLaMA and LLaMa 2 large language models, but at significantly reduced model sizes: the experimental llama1_s has 1.8B parameters and the experimental llama2_xs has 460M parameters ('s' stands for small, 'xs' for extra small).
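As a quick sanity check, the parameter counts can be read directly from the loaded checkpoints. The snippet below is a minimal sketch using the Transformers num_parameters() helper; loading the models in full is covered in the section further down.

from transformers import AutoModelForCausalLM

# Minimal sketch: print the approximate parameter count of each experimental checkpoint.
for name in ['ahxt/llama1_s_1.8B_experimental', 'ahxt/llama2_xs_460M_experimental']:
    model = AutoModelForCausalLM.from_pretrained(name)
    # num_parameters() counts all parameters; expect roughly 1.8B and 460M respectively
    print(name, f'{model.num_parameters() / 1e6:.0f}M parameters')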

Dataset and Tokenization

We train our models on part of the RedPajama dataset and use the GPT2Tokenizer to tokenize the text.
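For reference, the snippet below is a minimal sketch of the tokenization step. It assumes the stock GPT-2 tokenizer vocabulary is used; the tokenizer files bundled with the experimental checkpoints can be loaded the same way via AutoTokenizer.

from transformers import GPT2Tokenizer

# Assumption: the tokenizer is GPT-2 compatible, so the stock 'gpt2' tokenizer
# produces the same token ids as the one shipped with the checkpoints.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
ids = tokenizer('The quick brown fox jumps over the lazy dog.')['input_ids']
print(len(ids), ids[:5])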

Using with Hugging Face Transformers

The experimental checkpoints can be loaded directly with the Transformers library. The following code snippet shows how to load our experimental model and generate text with it.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick one of the experimental checkpoints.
# model_path = 'ahxt/llama2_xs_460M_experimental'
model_path = 'ahxt/llama1_s_1.8B_experimental'

# Load the model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

# Tokenize the prompt and generate a short completion.
prompt = 'Q: What is the largest bird?\nA:'
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
tokens = model.generate(input_ids, max_length=20)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
# Sample output (generation may vary):
# Q: What is the largest bird?\nA: The largest bird is the bald eagle.
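The same checkpoints also work with the high-level text-generation pipeline. The snippet below is a small sketch; the generation settings shown are illustrative defaults, not values used or recommended by the authors.

from transformers import pipeline

# Minimal sketch: generation via the high-level pipeline API.
generator = pipeline('text-generation', model='ahxt/llama1_s_1.8B_experimental')
# max_new_tokens and do_sample are illustrative, adjust as needed
out = generator('Q: What is the largest bird?\nA:', max_new_tokens=20, do_sample=False)
print(out[0]['generated_text'])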

Contact

These experimental versions were developed by Xiaotian Han from Texas A&M University and are intended for research use only.
