Ghost 7B Alpha

A large language model built to optimize reasoning, multi-task knowledge, and tool support.

Introduction

Ghost 7B Alpha is a large language model fine-tuned from Mistral 7B, with 7 billion parameters. It was developed to optimize reasoning ability, multi-task knowledge, and tool usage. The model is trained and optimized primarily for English and Vietnamese.

Overall, the model works well as a base you can continue to fine-tune for your own tasks, as a foundation for virtual assistants, and for tasks such as coding, translation, question answering, and document generation. It is an efficient, fast, and inexpensive open model.
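
As an illustration of typical chat-style use, here is a minimal generation sketch with Transformers. The repo id ghost-x/ghost-7b-alpha is hypothetical shorthand for the Standard (BF16) distribution; replace it with the actual repository you use, and note that the sketch assumes the tokenizer ships a chat template.

```python
# Minimal chat-style generation sketch.
# "ghost-x/ghost-7b-alpha" is a hypothetical repo id for the Standard (BF16) weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ghost-x/ghost-7b-alpha"  # hypothetical; point this at the real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Standard (BF16) distribution
    device_map="auto",
)

# English and Vietnamese prompts both work, since these are the model's
# primary trained languages.
messages = [{"role": "user", "content": "Explain what binary search does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```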

Specifications

  • Name: Ghost 7B Alpha.
  • Model size: 7 billion parameters.
  • Context length: 8K (8,192 tokens).
  • Languages: English and Vietnamese.
  • Main tasks: reasoning, multi-task knowledge, and function/tool calling.
  • License: Ghost 7B LICENSE AGREEMENT.
  • Based on: Mistral 7B.
  • Distributions: Standard (BF16), GGUF, AWQ.
  • Developed by: Ghost X, Hieu Lam.

Links

Distributions

We provide several distributions so you can choose the access option that best suits your needs. Make sure you know which version you need and how you plan to run it.
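
For the GGUF distribution, the files can be fetched directly with huggingface_hub. This is a minimal sketch that lists the quantized files in the ghost-x/ghost-7b-alpha-gguf repository (referenced on this page) instead of assuming any particular file name.

```python
# Sketch: inspect and download the GGUF distribution with huggingface_hub.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ghost-x/ghost-7b-alpha-gguf"  # GGUF repo referenced on this page

# List the quantized files the repo actually ships (4-bit, 5-bit, 8-bit, ...),
# so no file name has to be guessed.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

# Download whichever quantization fits your hardware.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Saved to:", local_path)
```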

Note

For all official information and updates about the model, see here:

GGUF distribution details

  • Repository: ghost-x/ghost-7b-alpha-gguf.
  • Model size: 7.24B parameters.
  • Architecture: llama.
  • Available quantizations: 4-bit, 5-bit, 8-bit.
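
The quantized files can be run locally with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the file name is a placeholder for whichever quantization you downloaded, and the 8K context length follows the specification above.

```python
# Sketch: run a downloaded GGUF file locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="ghost-7b-alpha.Q4_K_M.gguf",  # placeholder; use your downloaded file
    n_ctx=8192,        # matches the model's 8K context length
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        # Vietnamese prompt: "Write a Python function that computes a factorial."
        {"role": "user", "content": "Viết một hàm Python tính giai thừa."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```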
