# Glacier R+ 104B

- **Model Name:** Glacier R+ 104B
- **Base Model:** CohereForAI/c4ai-command-r-plus-08-2024
- **Finetuned by:** Apache Labs
- **License:** Apache Labs Community License
## Model Description

Glacier R+ 104B is a specialized fine-tune of the CohereForAI/c4ai-command-r-plus-08-2024 model, created by Apache Labs to improve performance on advanced language generation tasks. The model was fine-tuned on domain-specific datasets that improve its relevance, contextual accuracy, and fluency in conversation, creative content generation, and complex Q&A.
## Intended Use

Glacier R+ 104B is designed for the following use cases:
- Conversational AI: Enhanced for generating coherent, contextually accurate responses.
- Content Creation: Generates detailed, creative text based on input prompts.
- Question Answering: Provides reliable answers, leveraging strong contextual understanding.
- Summarization and Text Completion: Ideal for completing and summarizing complex texts with improved relevance and fluency.
## Limitations and Considerations

- Context Length: Performance is best on short to medium-length inputs (under roughly 500 tokens); see the token-count sketch after this list.
- Bias and Fairness: The model reflects biases present in its training data. Use it responsibly.
- Ethical Use: Do not deploy the model in harmful, deceptive, or misleading applications.
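
Because performance is described as best under roughly 500 tokens, it can be useful to check prompt length before sending a request. Below is a minimal sketch using the model's tokenizer; the 500-token budget comes from the note above, not from a hard architectural limit, and the example prompt is purely illustrative:

```python
# Check a prompt's token count against the suggested ~500-token budget.
# The budget is taken from the limitations note above, not a hard constraint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("apache-labs/glacier-r-plus-104B")

prompt = "Summarize the following report: ..."  # illustrative placeholder
n_tokens = len(tokenizer.encode(prompt))
if n_tokens > 500:
    print(f"Prompt is {n_tokens} tokens; consider shortening or chunking it.")
```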
## How to Use

The model can be accessed through Hugging Face's transformers library. You can use the high-level pipeline helper or load the model and tokenizer directly.

### Using a Pipeline
```python
# Use a pipeline as a high-level helper.
from transformers import pipeline

# Chat-style input: a list of {"role": ..., "content": ...} messages.
messages = [
    {"role": "user", "content": "Who are you?"},
]

pipe = pipeline("text-generation", model="apache-labs/glacier-r-plus-104B")
response = pipe(messages)
print(response)
```
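
Generation settings such as max_new_tokens and temperature can be passed straight through the pipeline call. A minimal sketch; the values below are illustrative defaults, not settings tuned for this model:

```python
# Illustrative generation settings; these values are assumptions,
# not tuned recommendations for Glacier R+ 104B.
response = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)

# With chat-style input, recent transformers versions return the full
# conversation; the last message holds the model's reply.
print(response[0]["generated_text"][-1]["content"])
```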
### Loading the Model Directly

```python
# Load the tokenizer and model weights directly.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("apache-labs/glacier-r-plus-104B")
model = AutoModelForCausalLM.from_pretrained("apache-labs/glacier-r-plus-104B")
```
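
For an end-to-end generation call, the tokenizer's chat template can be applied manually. This is a minimal sketch assuming a GPU setup with enough memory for a 104B-parameter model (in practice, multiple GPUs or quantization); the dtype, device placement, and sampling values are assumptions, not settings specified by this card:

```python
# Minimal generation sketch; dtype, device_map, and sampling values
# are assumptions, not card-specified settings.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("apache-labs/glacier-r-plus-104B")
model = AutoModelForCausalLM.from_pretrained(
    "apache-labs/glacier-r-plus-104B",
    torch_dtype=torch.float16,  # assumption: half precision to reduce memory
    device_map="auto",          # assumption: shard across available GPUs
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```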