---
license: mit
pipeline_tag: text-generation
tags:
- cortex.cpp
---
## Overview
**Meta** developed and released the [Llama3.3](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) model, a state-of-the-art multilingual large language model designed for instruction-tuned generative tasks. With 70 billion parameters, it is optimized for multilingual dialogue use cases, taking text input and generating high-quality text output. Llama3.3 has been fine-tuned with supervised learning and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety. It outperforms many open-source and closed-source chat models on common industry benchmarks, making it a strong fit for applications that require conversational AI, multilingual support, and instruction adherence.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Llama3.3-70b](https://huggingface.co/cortexso/llama3.3/tree/70b) | `cortex run llama3.3:70b` |
## Use it with Jan (UI)
1. Install **Jan** using [Quickstart](https://jan.ai/docs/quickstart)
2. In Jan's Model Hub, use the following model ID:
```bash
cortexso/llama3.3
```
## Use it with Cortex (CLI)
1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:
```bash
cortex run llama3.3
```
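3. Once the model is running, you can also query it programmatically. The sketch below assumes Cortex exposes its OpenAI-compatible local API on the default port `39281` and that the model is registered as `llama3.3:70b`; both values are assumptions here, so check your Cortex configuration for the actual port and model name.
```bash
# Hypothetical request against Cortex's OpenAI-compatible local server.
# Port 39281 and the model name are assumptions -- adjust to your setup.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.3:70b",
    "messages": [
      {"role": "user", "content": "Summarize Llama 3.3 in one sentence."}
    ]
  }'
```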
## Credits
- **Author:** Meta
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://llama.meta.com/llama3/license/)
- **Papers:** [Llama-3 Blog](https://llama.meta.com/llama3/)