✨ Orion-7B
Orion is based on alpaca-7B and further trained with various PEFT methods on recent deep learning papers published at notable conferences. It is made available under the Apache 2.0 license. Paper coming soon 😊.
What is so special about Orion?
- Comprehensive performance analysis of PEFT methods for knowledge editing
- Novel framework for effective knowledge editing dataset generation
- Large knowledge editing dataset from recent deep learning papers published in top conferences
- ⚠️ This is a raw, fine-tuned model, which should be further instruction-tuned for production level performance.
💥 Orion LLM requires PyTorch 2.0
You will need at least 20-25GB of GPU memory to run inference with Orion-7B smoothly.
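Below is a minimal inference sketch using the Hugging Face transformers library. The repository id and generation settings are assumptions for illustration, not values confirmed by this card; point `model_id` at the actual Orion-7B checkpoint.

```python
# Minimal inference sketch; "AttentionX/Orion-7B" is a hypothetical repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AttentionX/Orion-7B"  # replace with the actual checkpoint location

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights for a 7B model are ~14GB; the 20-25GB figure above includes runtime overhead
    device_map="auto",
)

prompt = "What is GPT-4?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```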
Model Details
Model Description
- Developed by: AttentionX;
- Base model: alpaca-7B;
- Training methods: LoRA, LLaMA-Adapter, LLaMA-Adapter V2, full fine-tuning;
- Language(s) (NLP): English;
- License: Apache 2.0 license.
Model Source
- Paper: coming soon.
Training Details
Paper coming soon 😊.
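Until the paper is released, here is a rough sketch of how LoRA adaptation of the alpaca-7B base model could look with the peft library. The base weights path, target modules, and hyperparameters are illustrative assumptions, not values from the Orion training run.

```python
# Illustrative LoRA sketch with the peft library; all values below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "path/to/alpaca-7b"  # local path or hub id of the alpaca-7B base weights (placeholder)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, common LoRA targets for LLaMA-family models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

# The wrapped model can then be trained with a standard transformers Trainer or a
# custom loop on text drawn from the collected deep learning papers.
```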
Evaluation
Paper coming soon.
Test Inference
Prompt: What is GPT-4?
- llama: GPT-4 is a neural network model built by OpenAI, a non-profit artificial intelligence research organization founded in 2015 by Elon Musk, Sam Altman and other industry leaders.
- Orion: GPT-4 is a large-scale, multimodal model that can accept image and text inputs and produce text outputs. It is a language model developed by OpenAI. The GPT-4 base model has a larger model size and can accept image and text inputs

Prompt: What is the lowest score GPT-4 achieved in AP Calculus BC?
- llama: The lowest score achieved is 60%
- Orion: The lowest score achieved in AP Calculus BC (MCQ) is 33.33%
Hardware
Orion was trained on a single NVIDIA A100 GPU.
License
Orion is made available under the Apache 2.0 license.
Contact