---
license: mit
language:
- en
library_name: adapter-transformers
---

# alpaca_orca_open_llama: An Open_LLaMA-3B model trained on an Alpaca dataset using Orca Research Paper approaches

# Dataset and Training

We trained the OpenLLaMA-3B model to become more steerable by fine-tuning it on a custom Alpaca dataset created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707). Please pay attention to how the **System** prompt is added before each *instruction*.

The training configuration is provided in the table below. Training ran on 4x A600 (50 GB) GPUs on [Lambda Labs](https://lambdalabs.com) and took around 20 hours, at a cost of $66. We used DeepSpeed with ZeRO-3 for parallel multi-GPU training; an illustrative ZeRO-3 configuration sketch is included at the end of this card.

|Parameter|Value|
|:-------------:|:-------------:|
|*batch size*|16|
|*train_micro_batch_size_per_gpu*|2|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Epochs*|3|
|*Max length*|1024|

# Example Usage

Below is an example of how to use alpaca_orca_open_llama_3b:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# change model_path between 3b, 7b or 13b
model_path = 'psmathur/alpaca_orca_open_llama_3b'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# check more details here https://github.com/openlm-research/open_llama
tokenizer.bos_token_id, tokenizer.eos_token_id = 1, 2

# same system prompt as provided by the Orca Research Paper
system = 'You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.'
instruction = 'Use the given data to calculate the median.'
input = '[7, 3, 8, 2, 10]'

prompt_input = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
# for instructions without an input field, use:
# prompt_no_input = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Response:\n"

tokens = tokenizer.encode(prompt_input)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
length = tokens.shape[1]  # number of prompt tokens, used below to strip the prompt from the output

instance = {'input_ids': tokens, 'top_k': 50, 'top_p': 1.0, 'generate_len': 1024}
# instance = {'input_ids': tokens, 'top_k': 50, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024}

with torch.no_grad():
    rest = model.generate(
        input_ids=tokens,
        max_length=length + instance['generate_len'],
        use_cache=True,
        do_sample=True,
        top_p=instance['top_p'],
        top_k=instance['top_k'],
        # temperature=instance['temperature'],
    )

output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
print(f'[!] Response: {string}')
```
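The example above draws samples with top-k/top-p. For questions with a single correct answer, such as the median calculation, you may prefer greedy decoding. Below is a minimal variant of the `generate()` call, reusing the same `model`, `tokenizer`, `tokens`, and `length` as above; the `max_new_tokens` value is chosen purely for illustration.

```python
# Greedy decoding variant of the generate() call above.
with torch.no_grad():
    rest = model.generate(
        input_ids=tokens,
        max_new_tokens=256,   # cap on newly generated tokens, chosen for illustration
        use_cache=True,
        do_sample=False,      # disable sampling for deterministic, greedy output
    )
print(tokenizer.decode(rest[0][length:], skip_special_tokens=True))
```

# Reference: illustrative DeepSpeed ZeRO-3 configuration

Training used DeepSpeed with ZeRO-3, but the exact configuration file is not published in this card. The sketch below is only an illustration consistent with the hyperparameter table (2 micro-batch x 2 gradient-accumulation steps x 4 GPUs = global batch size 16); the optimizer and precision settings are assumptions, not values taken from this card.

```python
# Illustrative DeepSpeed ZeRO-3 config as a Python dict (could equally be saved as ds_config.json).
# NOT the exact file used for training: batch sizes come from the table above,
# while the optimizer type and fp16 setting are assumptions.
ds_config = {
    "train_batch_size": 16,                  # 2 micro-batch x 2 grad-accum x 4 GPUs
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 2,
    "zero_optimization": {"stage": 3},       # ZeRO stage 3: shard params, gradients, optimizer states
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},  # assumed optimizer; lr taken from the table
    "fp16": {"enabled": True},               # assumed mixed precision; not stated in the card
}
```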