---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- vihangd/DopeyTinyLlama-1.1B-v1
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- vihangd/DopeyTinyLlama-1.1B-v1
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
---

# UltraCompute-7B-Base

UltraCompute-7B-Base is a Mixture of Experts (MoE) built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [cognitivecomputations/TinyDolphin-2.8.1-1.1b](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b)
* [Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test](https://huggingface.co/Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test)
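In an MoE of this kind, a learned router sends each token to the expert whose gate vector it matches best; with `gate_mode: hidden` (used in the configuration below), mergekit derives each expert's gate vector from the base model's hidden states on that expert's positive prompts. A minimal pure-Python sketch of the idea, using made-up 4-dimensional "hidden states" in place of real model activations:

```python
# Toy top-1 MoE routing sketch (illustrative only, not mergekit's code).
# Expert 0 stands in for a coding expert, expert 1 for a math expert.

def mean(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Fake hidden states for each expert's positive prompts.
coding_hidden = [[1.0, 0.2, 0.0, 0.1],   # "Help me debug this code."
                 [0.9, 0.1, 0.1, 0.0]]   # "Rewrite this function in Python."
math_hidden = [[0.0, 0.1, 1.0, 0.8],     # "Solve for x"
               [0.1, 0.0, 0.9, 1.0]]     # "whats 2+2"

# Gate vector per expert = mean hidden state of its positive prompts.
gate = [mean(coding_hidden), mean(math_hidden)]

def route(hidden_state):
    """Return the index of the expert whose gate vector scores highest."""
    scores = [dot(g, hidden_state) for g in gate]
    return scores.index(max(scores))

print(route([1.0, 0.1, 0.0, 0.0]))  # coding-like token -> 0
print(route([0.0, 0.0, 1.0, 1.0]))  # math-like token -> 1
```

The real router is a learned linear layer over the model's hidden size and typically applies a softmax over scores, but the prompt-driven initialization follows this shape.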

## 🧩 Configuration

```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
gate_mode: hidden
dtype: float16
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts:
    - "Help me debug this code."
    - "Rewrite this function in Python."
    - "Optimize this C# script."
    - "Implement this feature using JavaScript."
    - "Convert this HTML structure into a more efficient design."
    - "Assist me with writing a program that"
  - source_model: vihangd/DopeyTinyLlama-1.1B-v1
    positive_prompts:
    - "How do you"
    - "Explain the concept of"
    - "Give an overview of"
    - "Compare and contrast between"
    - "Provide information about"
    - "Help me understand"
    - "Summarize"
    - "Make a recommendation on"
    - "Answer this question"
  - source_model: cognitivecomputations/TinyDolphin-2.8.1-1.1b
    positive_prompts:
    - "Write a program to solve this problem"
    - "Modify this function to improve its performance"
    - "Refactor this code to enhance readability"
    - "Create a custom function for this specific use case"
    - "Optimize this algorithm to reduce computational complexity"
    - "Implement this feature by extending existing codebase"
    - "Integrate this API call into the application"
    - "Help me troubleshoot and fix this bug"
    - "Review and test this code snippet before deployment"
    - "Analyze this error log to identify potential issues"
    - "Generate a set of unit tests for this module"
    - "Evaluate different approaches to solving this problem"
    - "Do a web search for"
    - "Use the plugin to"
  - source_model: Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
    positive_prompts:
    - "add these numbers"
    - "whats 2+2"
    - "subtraction"
    - "division"
    - "multiplication"
    - "addition"
    - "I need help with a math problem"
    - "Solve for x"
    - "Add these two numbers together: 4 + 3 = 7"
    - "Multiply 5 by 6: 5 * 6 = 30"
    - "Divide 8 by 2: 8 / 2 = 4"
    - "Find the remainder when 9 is divided by 3: 9 % 3 = 0"
    - "Calculate the square root of 16: sqrt(16) = 4"
    - "Simplify the expression (a+b)/(c-d): (a+b)/(c-d)"
    - "Factor out the common factor of 2 from 4x + 6y: 2(2x + 3y)"
    - "Solve for x in the equation 3x - 7 = 2x + 5: x = 12"
    - "Graph the line y = 2x + 3"
    - "Approximate pi to three decimal places: 3.142"
    - "Find the derivative of f(x) = sin(x): f'(x) = cos(x)"
    - "Integrate g(x) = x^2 over the interval [0, 1]: g(1) - g(0) = 1/3"
    - "Calculate the determinant of the matrix A = [[2, 3], [4, 5]]: det(A) = 2*5 - 3*4 = -2"
    - "Solve the system of equations Ax = b: x = [-5, 10]"
    - "Calculate the sum of the first n natural numbers using the formula Sn = n*(n+1)/2: sum(n=1 to 5) = 15"
```
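To reproduce the merge locally, the configuration above can be passed to mergekit's MoE entry point. A notebook-style sketch (assumes the config is saved as `config.yaml`; package name and flags may vary with your mergekit version):

```python
# Hypothetical reproduction step; LazyMergekit runs the equivalent for you.
!pip install -qU mergekit
!mergekit-moe config.yaml merged
```

The resulting model is written to the `merged` directory and can then be loaded as in the usage example below.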

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate  # notebook cell; bitsandbytes enables 4-bit loading

from transformers import AutoTokenizer
import transformers
import torch

model = "gmonsoon/UltraCompute-7B-Base"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```