---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
---

# Model Card for mistral-7b-magicoder-lora-r8

Trained with [Ludwig.ai](https://ludwig.ai) and [Predibase](https://predibase.com)!

Given a programming problem and a target language, generate a solution.

Try it in [LoRAX](https://github.com/predibase/lorax):

```python
from lorax import Client

client = Client("http://<your_endpoint>")

problem = "<your programming problem>"
lang = "<your programming language>"

prompt = f"""
Below is a programming problem, paired with a language in which the solution
should be written. Write a solution in the provided language that
appropriately solves the programming problem.

### Problem: {problem}

### Language: {lang}

### Solution:
"""

adapter_id = "tgaddair/mistral-7b-magicoder-lora-r8"
resp = client.generate(prompt, max_new_tokens=64, adapter_id=adapter_id)
print(resp.generated_text)
```
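
The adapter can also be loaded directly with Hugging Face Transformers and PEFT. A minimal sketch, assuming a GPU with enough memory for fp16 weights (the base model and adapter names come from this card; the generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the fp16 base model and attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "tgaddair/mistral-7b-magicoder-lora-r8")

problem = "<your programming problem>"
lang = "<your programming language>"

# Use the same prompt template the adapter was trained with (see the config below).
prompt = f"""
Below is a programming problem, paired with a language in which the solution
should be written. Write a solution in the provided language that
appropriately solves the programming problem.

### Problem: {problem}

### Language: {lang}

### Solution:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
# Strip the prompt tokens and decode only the generated solution.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```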



## Model Details

### Model Description

Ludwig config (v0.9.3):

```yaml
model_type: llm
input_features:
  - name: prompt
    type: text
    preprocessing:
      max_sequence_length: null
    column: prompt
output_features:
  - name: solution
    type: text
    preprocessing:
      max_sequence_length: null
    column: solution
prompt:
  template: >-
    Below is a programming problem, paired with a language in which the solution
    should be written. Write a solution in the provided language that
    appropriately solves the programming problem.


    ### Problem: {problem}


    ### Language: {lang}


    ### Solution: 
preprocessing:
  split:
    type: fixed
    column: split
  global_max_sequence_length: 2048
adapter:
  type: lora
generation:
  max_new_tokens: 64
trainer:
  type: finetune
  epochs: 1
  optimizer:
    type: paged_adam
  batch_size: 1
  eval_steps: 100
  learning_rate: 0.0002
  eval_batch_size: 2
  steps_per_checkpoint: 1000
  learning_rate_scheduler:
    decay: cosine
    warmup_fraction: 0.03
  gradient_accumulation_steps: 16
  enable_gradient_checkpointing: true
base_model: mistralai/Mistral-7B-v0.1
quantization:
  bits: 4
```
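
To reproduce the fine-tune, the config above can be passed to Ludwig's Python API. A minimal sketch, assuming the YAML is saved as `config.yaml`: the `problem`, `lang`, and `solution` columns come from the Magicoder-OSS-Instruct-75K dataset, while the fixed `split` column required by the config's preprocessing is added here for illustration, since the original split scheme is not recorded in this card.

```python
from datasets import load_dataset
from ludwig.api import LudwigModel

# Pull the instruction data listed in the card metadata.
df = load_dataset("ise-uiuc/Magicoder-OSS-Instruct-75K", split="train").to_pandas()

# The config expects a fixed split column (0 = train, 1 = validation, 2 = test).
# Hold out the last 5% for validation -- illustrative, not the original split.
df["split"] = 0
df.loc[df.index[-len(df) // 20:], "split"] = 1

model = LudwigModel(config="config.yaml")
results = model.train(dataset=df)
```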