---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
---

## Overview

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains GGUF builds of the instruction-tuned Qwen2 model.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [7b-gguf](https://huggingface.co/cortexhub/qwen2/tree/7b-gguf) | `cortex run qwen2:7b-gguf` |

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide.
2. In the Jan Model Hub, search for:
    ```
    cortexhub/qwen2
    ```
    
## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide.
2. Run the model with the following command:
    ```
    cortex run qwen2
    ```
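3. (Optional) Once the model is running, you can also query it through Cortex's OpenAI-compatible local API. The snippet below is a minimal sketch, assuming the `openai` Python package is installed, the server listens at `http://localhost:39281/v1` (the default port can differ between Cortex versions, so check your install), and the model ID matches the variant you started:
    ```python
    from openai import OpenAI

    # Point the client at the local Cortex server; no real API key is required.
    # ASSUMPTION: adjust base_url and the model ID below to match your setup.
    client = OpenAI(base_url="http://localhost:39281/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="qwen2:7b-gguf",  # variant name from the table above
        messages=[
            {"role": "user", "content": "Summarize the GGUF format in one sentence."}
        ],
    )
    print(response.choices[0].message.content)
    ```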
    
## Credits

- **Author:** Tongyi Qianwen
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE)