---
license: creativeml-openrail-m
datasets:
  - HuggingFaceH4/ultrachat_200k
language:
  - en
base_model:
  - Qwen/Qwen2.5-0.5B
tags:
  - Qwen2.5
  - 200K
  - 0.5B
  - Llama-cpp
pipeline_tag: text-generation
---

# Qwen2.5-0.5B-200K-GGUF

| File Name | Size | Description |
|-----------|------|-------------|
| .gitattributes | 1.78kB | Git configuration file specifying attributes and LFS rules. |
| Modelfile | 1.73kB | Model-specific file containing metadata and configuration for Ollama. |
| Qwen2.5-0.5B-200K.F16.gguf | 994MB | Full-precision 16-bit float model file for Qwen 2.5 with 0.5B parameters and 200K steps. |
| Qwen2.5-0.5B-200K.Q4_K_M.gguf | 398MB | Quantized 4-bit model file for Qwen 2.5 with 0.5B parameters and 200K steps, optimized for memory. |
| Qwen2.5-0.5B-200K.Q5_K_M.gguf | 420MB | Quantized 5-bit model file for Qwen 2.5 with 0.5B parameters and 200K steps, balanced for accuracy. |
| Qwen2.5-0.5B-200K.Q8_0.gguf | 531MB | Quantized 8-bit model file for Qwen 2.5 with 0.5B parameters and 200K steps, moderate accuracy. |
| README.md | 166B | Markdown file with project information and instructions. |
| config.json | - | JSON configuration file for setting model parameters. |
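
Because these are standard GGUF files, they can also be run directly with llama.cpp. The commands below are a sketch, not part of this repository's official instructions: they assume the `huggingface-cli` and `llama-cli` tools are installed, and that the repository id is `prithivMLmods/Qwen2.5-0.5B-200K-GGUF`.

```
# Fetch one of the quantized files from the Hub (repository id assumed)
huggingface-cli download prithivMLmods/Qwen2.5-0.5B-200K-GGUF Qwen2.5-0.5B-200K.Q4_K_M.gguf --local-dir .

# Generate text with llama.cpp (the binary is called `main` in older builds)
llama-cli -m Qwen2.5-0.5B-200K.Q4_K_M.gguf -p "Write a mini passage about space exploration." -n 256
```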

## Run with Ollama 🦙

### Overview

Ollama is a tool that makes it easy to run large language models locally. This guide will help you download, install, and run your own GGUF models in just a few minutes.

### Table of Contents

1. Download and Install Ollama
2. Steps to Run GGUF Models
3. Running the Model
4. Sample Usage
5. Conclusion

### Download and Install Ollama 🦙

To get started, download Ollama from https://ollama.com/download and install it on your Windows or Mac system.
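
If you're on Linux, Ollama's documented one-line install script can be used instead:

```
curl -fsSL https://ollama.com/install.sh | sh
```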

### Steps to Run GGUF Models

#### 1. Create the Model File

First, create a model file for Ollama. A common choice is to name it `Modelfile`, matching the Modelfile already included in this repository.

#### 2. Add the Template Command

In your model file, include a FROM line that specifies the base GGUF file you want to use. For instance, to use the full-precision file from this repository:

```
FROM Qwen2.5-0.5B-200K.F16.gguf
```

Ensure that the GGUF file is in the same directory as the Modelfile, or give its full path.
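
For a more complete starting point, a Modelfile sketch for this model might look like the following. The parameter values and the ChatML-style prompt template are assumptions (a format commonly used with Qwen models), not configuration taken from this repository, so adjust them as needed.

```
FROM Qwen2.5-0.5B-200K.F16.gguf

# Sampling defaults (assumed values; tune for your use case)
PARAMETER temperature 0.7
PARAMETER stop "<|im_end|>"

# ChatML-style prompt template (assumed; verify against the bundled Modelfile)
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```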

#### 3. Create and Patch the Model

Open your terminal and run the following command to create your model:

```
ollama create qwen2.5-0.5b-200k -f ./Modelfile
```

Once the process completes successfully, you will see a confirmation message.

To verify that the model was created successfully, list all models with:

```
ollama list
```

Make sure that qwen2.5-0.5b-200k appears in the list of models.
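
You can also inspect the model you just created. Depending on your Ollama version, flags such as `--modelfile` are available to print the underlying Modelfile:

```
ollama show qwen2.5-0.5b-200k
```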


### Running the Model

To run your newly created model, use the following command in your terminal:

```
ollama run qwen2.5-0.5b-200k
```

#### Sample Usage

In the command prompt, you can execute:

```
D:\>ollama run qwen2.5-0.5b-200k
```

You can interact with the model like this:

```
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
plays a pivotal role in pushing the boundaries of human exploration and settlement.
```
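
Once the model is running under Ollama, it can also be queried programmatically through Ollama's local REST API, served on port 11434 by default. A minimal sketch, assuming the model was created with the name used above:

```
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-0.5b-200k",
  "prompt": "Write a mini passage about SpaceX.",
  "stream": false
}'
```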

### Conclusion

With these simple steps, you can download, install, and run your own models using Ollama. Whether you're exploring the capabilities of this Qwen 2.5 model or building your own custom models, Ollama makes the process accessible and efficient.
