---
tags:
- llm-rs
- ggml
pipeline_tag: text-generation
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
---

# GGML converted versions of [OpenLM Research](https://huggingface.co/openlm-research)'s LLaMA models

# OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and a 3B model trained on 1T tokens, as well as a preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of the pre-trained OpenLLaMA models, as well as evaluation results and a comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.

## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

## Converted Models:

$MODELS$

## Usage

### Python via [llm-rs](https://github.com/LLukas22/llm-rs-python):

#### Installation

Via pip: `pip install llm-rs`

#### Run inference

```python
from llm_rs import AutoModel

# Load the model; pass any file from the list above as `model_file`
model = AutoModel.from_pretrained("rustformers/open-llama-ggml", model_file="open_llama_7b-q4_0-ggjt.bin")

# Generate
print(model.generate("The meaning of life is"))
```

### Using [local.ai](https://github.com/louisgv/local.ai) GUI

#### Installation

Download the installer at [www.localai.app](https://www.localai.app/).

#### Running Inference

Download your preferred model and place it in the "models" directory. You can then start a chat session with the model directly from the interface.

### Rust via [Rustformers/llm](https://github.com/rustformers/llm):

#### Installation

```
git clone --recurse-submodules https://github.com/rustformers/llm.git
cd llm
cargo build --release
```

#### Run inference

```
cargo run --release -- llama infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
```
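
### Downloading model files via [huggingface_hub](https://huggingface.co/docs/huggingface_hub)

If you want to fetch a converted model file programmatically, for example to pass its path to the `llm` CLI above or to place it in local.ai's "models" directory, the sketch below uses the `huggingface_hub` library (installable with `pip install huggingface_hub`). The filename is the 7B q4_0 file from the llm-rs example; other quantizations from the model list work the same way, and the exact filenames may differ.

```python
from huggingface_hub import hf_hub_download

# Download a single converted GGML file from this repository into the local
# Hugging Face cache and print its path. Pass this path to `llm` via `-m`,
# or copy the file into local.ai's "models" directory.
model_path = hf_hub_download(
    repo_id="rustformers/open-llama-ggml",
    filename="open_llama_7b-q4_0-ggjt.bin",  # filename taken from the llm-rs example above
)
print(model_path)
```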