leafspark committed on
Commit 3e1894e · verified · 1 Parent(s): 8aef3f0

docs: add model card

Files changed (1)
  1. README.md +68 -5
README.md CHANGED
@@ -1,5 +1,68 @@
- ---
- license: other
- license_name: tongyi-qianwen
- license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/raw/main/LICENSE
- ---
+ ---
+ language:
+ - en
+ - zh
+ license: other
+ tags:
+ - chat
+ - gguf
+ license_name: tongyi-qianwen
+ license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+
+ # magnum-72b-v1-llamaify
+
+ This is a converted version of the Magnum 72B v1 model, now in LLaMA format. The original model was designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This converted version maintains the same capabilities but is compatible with LLaMA-based frameworks and tools.
+
+ Inference may also be somewhat faster, especially with frameworks optimized for the LLaMA architecture.
+
+ ## Model Details
+
+ - **Base Model:** [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct)
+ - **Training Data:** 55 million tokens of high-quality RP data
+ - **Training Duration:** 1.5 epochs
+ - **Hardware Used:** 8x AMD Instinct™ MI300X Accelerators
+
+ Context length is reduced to 32k; it is unclear how the sliding-window attention should be carried over, since (as far as I know) the LLaMA architecture does not use it.
+
+ ## Prompting
+
+ The model uses ChatML formatting for instructions. A typical input would look like this:
+
+ ```
+ <|im_start|>user
+ Hi there!<|im_end|>
+ <|im_start|>assistant
+ Nice to meet you!<|im_end|>
+ <|im_start|>user
+ Can I ask a question?<|im_end|>
+ <|im_start|>assistant
+ ```
+
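For reference, this layout can be built programmatically. A minimal sketch in plain Python (the helper name `build_chatml_prompt` is hypothetical, not part of any library):

```python
# Minimal sketch of the ChatML layout shown above.
# `build_chatml_prompt` is a hypothetical helper, not a library function.

def build_chatml_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML
    prompt, leaving an open assistant turn for the model to complete."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
])
print(prompt)
```

In practice the tokenizer's built-in chat template should produce the same format; the sketch just makes the token layout explicit.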
+ ## Credits
+
+ Credit goes to Anthracite for the original model.
+
+ ## Conversion Details
+
+ This version of the model has been converted to the LLaMA format to enhance compatibility with a wider range of tools and frameworks. While the core capabilities of the model remain the same, users should be aware that there might be slight differences in performance due to the conversion process.
+
+ ## Usage
+
+ The model can be used with transformers or any software that supports LLaMA-architecture models.
+
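As a sketch, loading with transformers might look like the following (untested here: a 72B model needs substantial GPU memory, and `device_map="auto"` requires the accelerate package):

```python
# Sketch only: loading the full 72B model requires substantial GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leafspark/magnum-72b-v1-llamaify"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
)

# The tokenizer's chat template should emit the ChatML format shown above.
messages = [{"role": "user", "content": "Hi there!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```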
+ You can download GGUF quantizations here: [leafspark/magnum-72b-v1-llamaify-GGUF](https://huggingface.co/leafspark/magnum-72b-v1-llamaify-GGUF)
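A GGUF file can be fetched with the Hugging Face CLI and run with llama.cpp; a sketch (the quantization pattern and file name below are illustrative, check the repo's file list for the actual names):

```shell
# Fetch a quantization from the GGUF repo (the include pattern is
# illustrative; pick a real file from the repository's file list).
huggingface-cli download leafspark/magnum-72b-v1-llamaify-GGUF \
  --include "*Q4_K_M*" --local-dir ./models

# Run with llama.cpp's CLI in conversation mode, which applies the
# model's chat (ChatML) template automatically.
./llama-cli -m ./models/<gguf-file> -cnv
```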
+
+ ## Limitations
+
+ Users should be aware that while this converted model maintains the general capabilities of the original, there might be subtle differences in performance or behavior due to the format change. It's recommended to test the model for your specific use case.
61
+
62
+ ## License
63
+
64
+ This model inherits the license from its base model, Qwen-2 72B Instruct. Please refer to the [original license](https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE) for terms of use.
65
+
66
+ ## Contact
67
+
68
+ For questions or issues related to this converted model, please open an issue in the model's repository.