---
license: mit
datasets:
- Open-Orca/OpenOrca
- conceptofmind/cot_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
- conceptofmind/flan2021_submix_original
- ehartford/dolphin
language:
- en
tags:
- merge
- slerp
inference: false
metrics:
- accuracy
- bleu
---
<h1 style="text-align: center">Orfini</h1>
<h2 style="text-align: center">An experimental model</h2>
<hr>


## Model Details
Orfini is an experimental model created by merging the following three foundation models:

- stabilityai/StableBeluga-7B
- pankajmathur/orca_mini_v3_7b
- AIDC-ai-business/Marcoroni-7B

Orfini was created by merging the weights of these three models with a SLERP-based merging technique. No further fine-tuning was performed after the merge.
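
The exact merge configuration is not published with this card. The snippet below is only a minimal sketch of how SLERP weight interpolation works in principle; the function name, the interpolation factor `t`, and the pairwise-merging note are illustrative assumptions, not the actual recipe used for Orfini.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors.

    Treats each tensor as a flat vector, interpolates along the great circle
    between them, and falls back to linear interpolation when the vectors are
    nearly colinear.
    """
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    dot = torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0)
    if torch.abs(dot) > 0.9995:
        # Vectors are almost parallel: plain linear interpolation is numerically safer.
        merged = (1.0 - t) * v0 + t * v1
    else:
        theta = torch.arccos(dot)            # angle between the two weight vectors
        sin_theta = torch.sin(theta)
        s0 = torch.sin((1.0 - t) * theta) / sin_theta
        s1 = torch.sin(t * theta) / sin_theta
        merged = s0 * v0 + s1 * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Merging more than two checkpoints is typically done pairwise, e.g.
# slerp(0.5, slerp(0.5, a, b), c) for three same-shaped tensors.
```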

Once the model receives its evaluation scores, we will know whether the merge was successful.

## Intended Use
As an experimental model, Orfini is intended for testing and research purposes only. It should not be used for production systems or to generate content for public use.
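
Because the card sets `inference: false`, hosted inference is disabled. For local testing and research, a typical `transformers` loading snippet would look like the sketch below; the repository id is a placeholder, and half precision with `device_map="auto"` is an assumption about a suitable evaluation setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Orfini"  # placeholder repository id, replace with the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on a single GPU
    device_map="auto",
)

prompt = "Explain what a model merge is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```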

## Training Data
Orfini inherits training data from its three foundation models:

- StableBeluga-7B: COT, Niv2, T0, and FLAN 2021
- dolphin-llama2-7b: Dolphin
- Marcoroni-7B: OpenOrca

## Limitations
As an untested merged model, Orfini has unknown capabilities and limitations. Potential issues include:

- Instability due to merged architectures
- Compounded bias and issues from all three foundation models
- Decreased performance on some tasks compared to the foundation models

Extensive testing is required to characterize Orfini's capabilities and limitations.

## Ethical Considerations
- Orfini may exhibit harmful biases inherited from its training data
- Output may be unreliable or manipulated due to instability
- Experimental nature increases potential for misuse

Use this model ethically and do not deploy it for sensitive applications.

## Contact Information
Please report issues or concerns with this model to the creator for further investigation.