---
datasets:
  - cerebras/SlimPajama-627B
  - HuggingFaceH4/ultrachat_200k
  - bigcode/starcoderdata
  - HuggingFaceH4/ultrafeedback_binarized
  - OEvortex/vortex-mini
  - Open-Orca/OpenOrca
language:
  - en
metrics:
  - speed
library_name: transformers
tags:
  - Text-Generation
  - Transformers
  - HelpingAI
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
widget:
  - text: |
      <|system|>
      You are a chatbot who can be a teacher!</s>
      <|user|>
      Explain the working of AI.</s>
      <|assistant|>
---

# 🌟 HelpingAI-Lite-1.5T Model Card 🌟

## 📊 Datasets used

- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca

πŸ—£οΈ Language:

  • English (en)

## 🔒 License

HelpingAI Simplified Universal License (HSUL)

## 🧠 Model Overview

HelpingAI-Lite-1.5T is an advanced version of the HelpingAI-Lite model, trained on a corpus of 1.5 trillion tokens. This extensive training enables the model to produce precise and insightful responses, particularly on coding tasks.

## 🔧 Usage Example

```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator to pick the best available device
accelerator = Accelerator()

# Initialize the text-generation pipeline
pipe = pipeline(
    "text-generation",
    model="OEvortex/HelpingAI-Lite-1.5T",
    device=accelerator.device,
)

# Define the conversation
messages = [
    {
        "role": "system",
        "content": "You are a chatbot who can be a teacher",
    },
    {
        "role": "user",
        "content": "Explain the working of AI.",
    },
]

# Render the conversation with the model's chat template
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate a response
outputs = pipe(
    prompt, max_new_tokens=256, do_sample=True,
    temperature=0.7, top_k=50, top_p=0.95,
)

# Print the generated text
print(outputs[0]["generated_text"])
```
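
If you'd rather not use the `pipeline` helper, the snippet below is a minimal sketch that loads the tokenizer and model directly with `AutoTokenizer`/`AutoModelForCausalLM`. The decoding settings mirror the pipeline example above; `torch_dtype=torch.float16` and `device_map="auto"` are assumptions for convenience (the latter relies on the `accelerate` package already used above), not requirements of the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-1.5T"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# fp16 keeps memory use low; drop torch_dtype to load in full precision
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a chatbot who can be a teacher"},
    {"role": "user", "content": "Explain the working of AI."},
]

# Build the prompt with the model's chat template and tokenize it
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a response with the same decoding settings as the pipeline example
outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True,
    temperature=0.7, top_k=50, top_p=0.95,
)

# Slice off the prompt tokens so only the new reply is printed
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because `generate` returns the prompt tokens followed by the new tokens, slicing at `inputs.shape[-1]` prints just the model's reply instead of echoing the whole conversation.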