Reverb-7b: Ozone AI

Model Description

Reverb-7b is a 7 billion parameter causal language model developed by Ozone AI, designed for text generation and a range of downstream tasks. It is Ozone AI's third model release.

Join Our Discord

https://discord.gg/ozone

Intended Uses & Limitations

Reverb-7b is intended for research and conversational use in natural language processing. Potential use cases include:

  • Text generation
  • Question answering
  • Summarization
  • Code generation (performance may vary)
  • Creative writing

Limitations:

  • Like all language models, Reverb-7b can generate biased or harmful content. Users should implement appropriate safeguards to mitigate these risks.
  • The model's performance may vary depending on the specific task and dataset.
  • Important Safety Note: We have observed that at lower quantization levels (e.g., below 4-bit), the model's safety guardrails may be less effective. The model may be more likely to generate inappropriate or harmful content, or to solicit personal information. Exercise extreme caution when using Reverb-7b at these lower quantization levels and implement strict input/output filtering.
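
As noted in the safety limitation above, strict output filtering is advisable, especially at low quantization levels. The snippet below is a minimal, hypothetical sketch of post-generation keyword filtering; the blocklist and refusal message are placeholders, and a real deployment should rely on a dedicated moderation model or service rather than a keyword list.

# Minimal, hypothetical post-generation filter (placeholder blocklist only;
# not a vetted safety system).
BLOCKED_TERMS = ["example_blocked_phrase"]

def filter_output(text: str) -> str:
    # Withhold the response if any blocked term appears in the generated text.
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return "[output withheld by safety filter]"
    return text

print(filter_output("A harmless generated sentence."))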

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ozone-ai/Reverb-7b"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and generate a short continuation.
prompt = "The quick brown fox jumps over the lazy dog."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
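
Reverb-7b is built with Qwen (see Attribution below), so the tokenizer most likely ships a chat template. The sketch below reuses the model and tokenizer loaded above and assumes such a template is present; the system prompt and generation settings are illustrative, not prescribed by this model card.

# Chat-style generation, assuming the tokenizer provides a chat template
# (as Qwen-based models typically do).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a causal language model is."},
]
chat_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
chat_output = model.generate(chat_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(chat_output[0][chat_ids.shape[-1]:], skip_special_tokens=True))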

Evaluation

Benchmarks

The following table shows the accuracy of Reverb-7b on MMLU Pro, overall and by category:

Benchmark   Category          Accuracy
MMLU Pro    Average           0.4006
MMLU Pro    Biology           0.6904
MMLU Pro    Business          0.3143
MMLU Pro    Chemistry         0.2314
MMLU Pro    Computer Science  0.4000
MMLU Pro    Economics         0.5758
MMLU Pro    Engineering       0.3148
MMLU Pro    Health            0.5183
MMLU Pro    History           0.4934
MMLU Pro    Law               0.3315
MMLU Pro    Math              0.2983
MMLU Pro    Other             0.4372
MMLU Pro    Philosophy        0.4409
MMLU Pro    Physics           0.2910
MMLU Pro    Psychology        0.5990
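
The model card does not state which evaluation harness produced these numbers. As a point of reference, the sketch below shows how comparable MMLU Pro scores could be gathered with EleutherAI's lm-evaluation-harness, assuming its Python API (lm_eval.simple_evaluate) and its mmlu_pro task; prompting, few-shot count, and other settings may differ from whatever was used here.

import lm_eval

# Hypothetical reproduction run; the task name, batch size, and defaults are
# assumptions, not the documented evaluation setup for Reverb-7b.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ozone-ai/Reverb-7b",
    tasks=["mmlu_pro"],
    batch_size=8,
)
print(results["results"])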

Training Details

  • Training infrastructure: 1x H100 PCIe
  • Training procedure: 1 epoch

Contact

For questions or feedback, please contact us at contact@ozone-ai.com or visit https://ozone-ai.com.

Attribution

Built with Qwen. Users of this model must agree to the Qwen license agreement.

Model Card Authors

  • Vneq - CEO @ Ozone AI
  • Tristan - CEO @ ShuttleAI, CTO @ Ozone AI
