---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
library_name: transformers
tags:
- code
---

# πŸŒžπŸš€ Orca-SOLAR-4x10.7_36B 

A merge of four SOLAR-10.7B instruct finetunes.

![solar](solar.png)

## 🌟 Usage 
This SOLAR model _loves_ to code. In my experience, if you ask it a code question it will use almost all of the available token limit to complete the code.

However, this can work to its detriment: on complex requests it may exhaust the token budget before the code is finished. This is not an EOS-token problem, as it ends its responses quite normally for non-code questions.

Your mileage may vary.
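
If runaway code generation is an issue, the simplest mitigation is a tight `max_new_tokens` budget, as in the code example below; a custom stopping criterion gives finer control. A minimal sketch, assuming the `model` and `tokenizer` from the code example below are already loaded (the stop marker and token budget are illustrative, not tuned):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnString(StoppingCriteria):
    """Stop generation once a marker string appears in the decoded output."""
    def __init__(self, tokenizer, stop_string):
        self.tokenizer = tokenizer
        self.stop_string = stop_string

    def __call__(self, input_ids, scores, **kwargs):
        # Decoding the whole sequence each step is simple but O(n^2); fine for a sketch.
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return self.stop_string in text

inputs = tokenizer("Write a Python function to reverse a string.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative hard cap on reply length
    stopping_criteria=StoppingCriteriaList([StopOnString(tokenizer, "\n\n\n")]),  # illustrative stop marker
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```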

## 🌎 HF Spaces

This 36B-parameter model is capable of running on free-tier hardware (CPU only, via GGUF).

+ Try the model [here](https://huggingface.co/spaces/macadeliccc/Orca-SOLAR-4x10.7b-chat-GGUF)
  
## πŸŒ… Code Example

Example also available in [colab](https://colab.research.google.com/drive/10FWCLODU_EFclVOFOlxNYMmSiLilGMBZ?usp=sharing)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt and move it to the same device as the model
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Generate output tokens
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return response


# Load the model and tokenizer (4-bit loading requires the bitsandbytes package)
model_id = "macadeliccc/Orca-SOLAR-4x10.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

prompt = "Explain the proof of Fermat's Last Theorem and its implications in number theory."

print("Response:")
print(generate_response(prompt), "\n")
```

## Llama.cpp

GGUF Quants available [here](https://huggingface.co/macadeliccc/Orca-SOLAR-4x10.7b-GGUF)

![llama.cpp-screenshot](orca-llama-cpp-1.png)
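
If you'd rather drive the quants from Python than from the llama.cpp CLI, the `llama-cpp-python` bindings load GGUF files directly. A minimal sketch, assuming one of the quants above has been downloaded locally (the filename below is a placeholder, not an exact file in the repo):

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever GGUF quant you downloaded.
llm = Llama(model_path="./orca-solar-4x10.7b.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "Write a Python function that checks whether a number is prime.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```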


## Evaluations 

Detailed benchmark results are available in the [Open LLM Leaderboard details dataset](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__Orca-SOLAR-4x10.7b).


### πŸ“š Citations 

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling}, 
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```