---
license: apache-2.0
language:
- en
tags:
- InstructGPT
- hf
- palmyra
datasets:
- Writer/palmyra-data-index
---

# InstructPalmyra-20b

- **Developed by:** [https://writer.com/](https://writer.com/);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** Apache 2.0;
- **Finetuned from model:** [Palmyra-20B](https://huggingface.co/Writer/palmyra-large).

## Model Description

Introducing InstructPalmyra-20b, a state-of-the-art instruction-following 20b language model designed to deliver exceptional performance and versatility. Derived from the foundational architecture of [Palmyra-20b](https://huggingface.co/Writer/palmyra-large), InstructPalmyra-20b is specifically tailored to address the growing demand for advanced natural language processing and comprehension capabilities.

The InstructPalmyra-20b model was trained on an extensive dataset of approximately 70,000 instruction-response records. These records were generated by our dedicated Writer Linguist team, who possess considerable expertise in language modeling and fine-tuning techniques. By leveraging their skills and knowledge, the InstructPalmyra-20b model offers strong proficiency in understanding and executing language-based instructions.

One of the key differentiators of InstructPalmyra-20b lies in its ability to process complex instructions and generate accurate, contextually appropriate responses. This makes it a good fit for a wide range of applications, including virtual assistants, customer support, and content generation. Additionally, the model's comprehensive training enables it to adapt and perform well under varying conditions and contexts, further expanding its potential use cases.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Writer/InstructPalmyra-20b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

instruction = "Describe a futuristic device that revolutionizes space travel."
context = ""  # optional additional input; leave empty to use the no-input prompt

PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}

# Pick the prompt template depending on whether additional context is supplied.
text = (
    PROMPT_DICT["prompt_input"].format(instruction=instruction, input=context)
    if context
    else PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
)

model_inputs = tokenizer(text, return_tensors="pt").to("cuda")
output_ids = model.generate(
    **model_inputs,
    max_length=256,
)
output_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

# The model echoes the prompt; keep only the text after "### Response:".
clean_output = output_text.split("### Response:")[1].strip()
print(clean_output)
```

It can also be used with text-generation-inference:

```sh
model=Writer/InstructPalmyra-20b
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
```
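Once the container is running, the server can be queried over HTTP. The snippet below is a minimal sketch using `requests`, assuming the port mapping above (`-p 8080:80`) so the server is reachable at `http://localhost:8080`, and reusing the instruction template from the Python example; it targets text-generation-inference's standard `/generate` endpoint.

```python
import requests

# Same instruction template as in the transformers example (no additional input).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nDescribe a futuristic device that revolutionizes space travel.\n\n### Response:"
)

# /generate takes the prompt under "inputs" and generation settings under "parameters".
response = requests.post(
    "http://localhost:8080/generate",  # assumes the docker mapping shown above
    json={
        "inputs": prompt,
        "parameters": {"max_new_tokens": 256, "temperature": 0.7},
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["generated_text"])
```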
### Limitations and Biases

InstructPalmyra's core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting InstructPalmyra, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on InstructPalmyra to produce factually correct results.

InstructPalmyra was trained on Writer’s custom data. As with all language models, it is difficult to predict how InstructPalmyra will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.

## Uses

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

InstructPalmyra-20b is trained mostly on English data and will not generalize appropriately to other languages. Furthermore, because it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of InstructPalmyra-20b develop guardrails and take appropriate precautions for any production use.

## Citation and Related Information

To cite this model:

```
@misc{InstructPalmyra,
  author = {Writer Engineering team},
  title = {{InstructPalmyra-20b : Instruct tuned Palmyra-Large model}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2023,
  month = {August}
}
```

[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-20B-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)|![AUR license](https://img.shields.io/badge/license-Apache%202-blue)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Writer__InstructPalmyra-20b).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 37.98 |
| ARC (25-shot)       | 47.1  |
| HellaSwag (10-shot) | 73.0  |
| MMLU (5-shot)       | 28.26 |
| TruthfulQA (0-shot) | 41.81 |
| Winogrande (5-shot) | 64.72 |
| GSM8K (5-shot)      | 2.58  |
| DROP (3-shot)       | 8.36  |