🌟 HelpingAI-Lite-1.5T Model Card 🌟

📊 Datasets used:

  • cerebras/SlimPajama-627B
  • HuggingFaceH4/ultrachat_200k
  • bigcode/starcoderdata
  • HuggingFaceH4/ultrafeedback_binarized
  • OEvortex/vortex-mini
  • Open-Orca/OpenOrca

🗣️ Language:

  • English (en)

🔒 License:

HelpingAI Simplified Universal License (HSUL)

🧠 Model Overview: HelpingAI-Lite-1.5T is an enhanced version of the HelpingAI-Lite model, trained on a corpus of 1.5 trillion tokens. The expanded training data improves the precision and depth of its responses, particularly on coding tasks.

🔧 Usage Example:

from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator
accelerator = Accelerator()

# Initialize the pipeline
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite-1.5T", device=accelerator.device)

# Define the messages
messages = [
    {
        "role": "system",
        "content": "You are a chatbot who can be a teacher",
    },
    {
        "role": "user",
        "content": "Explain to me how AI works.",
    },
]

# Prepare the prompt
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate predictions
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text
print(outputs[0]["generated_text"])
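
For lower-level control, the same chat template can be applied with the tokenizer and model classes directly. The following is a minimal sketch, assuming a recent transformers release; loading in torch.bfloat16 (matching the stored tensor type, see Model Details below) and the CUDA/CPU device selection are illustrative choices, not requirements.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-1.5T"

# Load the tokenizer and model; bfloat16 matches the stored tensor type
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

messages = [
    {"role": "system", "content": "You are a chatbot who can be a teacher"},
    {"role": "user", "content": "Explain to me how AI works."},
]

# Apply the chat template and tokenize in one step
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Generate with the same sampling settings as the pipeline example
output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

Unlike the pipeline example, whose generated_text includes the prompt by default, this version decodes only the tokens produced after the prompt.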
📦 Model Details:

  • Model size: 1.1B parameters
  • Weights format: Safetensors
  • Tensor type: BF16
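
As a rough back-of-the-envelope check: 1.1B parameters stored in BF16 (2 bytes per parameter) come to about 1.1e9 × 2 B ≈ 2.2 GB of weights, before activation and KV-cache overhead at inference time.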
