prithivMLmods committed
Commit 2d724cc
Parent(s): 6924da6
Update README.md
README.md CHANGED
@@ -20,6 +20,8 @@ tags:

### QwQ-LCoT-7B-Instruct Model File

+The **QwQ-LCoT-7B-Instruct** is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It leverages the **Qwen2.5-7B** base model and has been fine-tuned on the **amphora/QwQ-LongCoT-130K** dataset, focusing on chain-of-thought (CoT) reasoning.
+
| **File Name** | **Size** | **Description** | **Upload Status** |
|----------------------------------------|----------------|-------------------------------------------------|--------------------|
| `.gitattributes` | 1.57 kB | Tracks large files with Git LFS. | Uploaded |
@@ -39,3 +41,46 @@ tags:
| `vocab.json` | 2.78 MB | Tokenizer vocabulary. | Uploaded |

---
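The table above lists the repository contents; every file can be pulled locally in one call with `huggingface_hub`. A minimal sketch, assuming the checkpoint is published under the repo id `prithivMLmods/QwQ-LCoT-7B-Instruct`:

```python
# Fetch every file in the table above into the local cache.
# Assumption: the repo id below matches the actual model page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="prithivMLmods/QwQ-LCoT-7B-Instruct")
print(local_dir)  # cache path holding the shards, tokenizer files, and configs
```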
+### **Key Features:**
+
+1. **Model Size:**
+   - **7.62B parameters** (FP16 precision).
+
+2. **Model Sharding:**
+   - The model weights are split into 4 shards (`safetensors`) for efficient storage and download; all four are loaded together, as shown in the sketch after this list:
+     - `model-00001-of-00004.safetensors` (4.88 GB)
+     - `model-00002-of-00004.safetensors` (4.93 GB)
+     - `model-00003-of-00004.safetensors` (4.33 GB)
+     - `model-00004-of-00004.safetensors` (1.09 GB)
+
+3. **Tokenizer:**
+   - Byte-pair encoding (BPE) based.
+   - Files included:
+     - `vocab.json` (2.78 MB)
+     - `merges.txt` (1.82 MB)
+     - `tokenizer.json` (11.4 MB)
+   - Special tokens mapped in `special_tokens_map.json` (e.g., `<pad>`, `<eos>`).
+
+4. **Configuration Files:**
+   - `config.json`: Defines model architecture and hyperparameters.
+   - `generation_config.json`: Settings for inference and text generation tasks.
+
+---
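The shards, tokenizer files, and configuration files from points 2 to 4 are all resolved by a single `from_pretrained` call. A minimal loading sketch, assuming the repo id used earlier and the usual `model.safetensors.index.json` shard index:

```python
# Minimal loading sketch. Assumptions: the repo id below is correct and a
# model.safetensors.index.json shard index is present (standard for sharded
# safetensors checkpoints).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "prithivMLmods/QwQ-LCoT-7B-Instruct"  # assumed repo id

# Reads vocab.json, merges.txt, tokenizer.json, and special_tokens_map.json.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Resolves all four *.safetensors shards through the shard index.
# torch.float16 matches the stored FP16 precision (about 15.2 GB in total).
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",  # needs the `accelerate` package
)

# config.json defined the architecture above; generation_config.json supplies
# the default settings that model.generate() falls back on.
print(model.config.model_type)
```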
+
+### **Training Dataset:**
+- **Dataset Name:** [amphora/QwQ-LongCoT-130K](https://huggingface.co/datasets/amphora/QwQ-LongCoT-130K)
+- **Size:** 133k examples (a loading sketch follows this section).
+- **Focus:** Chain-of-Thought reasoning for complex tasks.
+
+---
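For a quick look at the training data, the dataset loads with the `datasets` library. A sketch; the column names are not documented here, so inspect them before relying on any field:

```python
# Peek at the fine-tuning dataset. Column names are checked at runtime
# rather than assumed.
from datasets import load_dataset

ds = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
print(ds.num_rows)      # roughly 133k examples
print(ds.column_names)  # inspect prompt/response field names
print(ds[0])            # one long chain-of-thought sample
```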
+
+### **Use Cases:**
+1. **Instruction Following:**
+   Handle user instructions effectively, even for multi-step tasks.
+
+2. **Reasoning Tasks:**
+   Perform logical reasoning and generate detailed step-by-step solutions.
+
+3. **Text Generation:**
+   Generate coherent, context-aware responses (a generation sketch follows below).
+
+---
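A generation sketch covering the use cases above. It continues from the loading sketch earlier (reusing `model` and `tokenizer`) and assumes the chat template is inherited from the Qwen2.5 base tokenizer:

```python
# Continues from the loading sketch above (model, tokenizer already created).
# Assumption: the tokenizer ships the Qwen2.5 chat template.
messages = [
    {
        "role": "user",
        "content": "A train travels 60 km in 45 minutes. "
                   "What is its average speed in km/h? Think step by step.",
    },
]

# Wrap the conversation in the model's chat format and append the
# assistant turn marker so generation starts a fresh response.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Long chain-of-thought models need a generous token budget for the trace.
output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The expected answer works out to 60 km / 0.75 h = 80 km/h, which makes a handy sanity check on the reasoning trace.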
|