munish0838 committed on
Commit
79be930
1 Parent(s): ed3831c

Create README.md

Files changed (1)
  1. README.md +117 -0
README.md ADDED
---
license: llama3
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
library_name: transformers
pipeline_tag: text-generation
base_model: fearlessdots/Llama-3-Alpha-Centauri-v0.1
---

# Llama-3-Alpha-Centauri-v0.1-GGUF

This is a quantized version of [fearlessdots/Llama-3-Alpha-Centauri-v0.1](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1), created using llama.cpp.

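As a quick usage reference, the GGUF files in this repository can be run with any llama.cpp-based runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the GGUF file name is a placeholder, so substitute whichever quantization you actually downloaded.

```python
# Minimal sketch, assuming `pip install llama-cpp-python`.
# The model_path is a placeholder; point it at the GGUF quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,       # Llama 3 context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; set to 0 for CPU-only
)

# Llama 3 Instruct derivatives ship a chat template, so the chat completion API is the
# simplest entry point.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in two sentences."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
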
## Disclaimer

**Note:** All models and LoRAs from the **Centaurus** series were created for the sole purpose of research. Use of this model and/or its related LoRA implies agreement with the following terms:

- The user is responsible for what they might do with it, including how the output of the model is interpreted and used;
- The user should not use the model and its outputs for any illegal purposes;
- The user is solely responsible for any misuse of or negative consequences from using this model and/or its related LoRA.

I do not endorse any particular perspectives presented in the training data.

---

## Centaurus Series

This series aims to develop highly uncensored Large Language Models (LLMs) with a focus on:

- Science, Technology, Engineering, and Mathematics (STEM)
- Computer Science (including programming)
- Social Sciences

and on several key cognitive skills, including but not limited to:

- Reasoning and logical deduction
- Critical thinking
- Analysis

While maintaining strong overall knowledge and expertise, the models will undergo refinement through:

- Fine-tuning processes
- Model merging techniques, including Mixture of Experts (MoE)

Please note that these models are experimental and may demonstrate varied levels of effectiveness. Feedback, critiques, and questions are most welcome and will help improve them.

## Base

This model and its related LoRA were fine-tuned on [https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3).

## LoRA

The LoRA merged with the base model is available at [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA).

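If you prefer to reproduce the merged weights yourself rather than download them, the standard PEFT pattern is to load the base model, attach the LoRA, and fold it in. A minimal sketch follows (full precision, not necessarily the author's exact script).

```python
# Minimal sketch of merging the LoRA into the base model with PEFT;
# this follows the standard merge flow, not necessarily the exact steps used by the author.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3"
lora_id = "fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter, then merge its weights into the base model.
model = PeftModel.from_pretrained(base, lora_id)
merged = model.merge_and_unload()

merged.save_pretrained("Llama-3-Alpha-Centauri-v0.1-merged")   # output path is a placeholder
tokenizer.save_pretrained("Llama-3-Alpha-Centauri-v0.1-merged")
```
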
## GGUF

I provide some GGUF files here: [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF).

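Individual GGUF files can also be fetched programmatically with `huggingface_hub`; the file name below is hypothetical, so check the repository's file list for the actual quantization names.

```python
# Sketch for downloading a single GGUF file with huggingface_hub;
# the filename is hypothetical, so check the repository's file list for the real names.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF",
    filename="Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf",  # hypothetical quantization name
)
print(gguf_path)  # local cache path, ready to pass to a llama.cpp runtime
```
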
## Datasets

- [https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)

## Fine Tuning

The main settings used are listed below; each list is followed by a short code sketch showing roughly how the values map onto the corresponding library configuration.

### - Quantization Configuration

- load_in_4bit=True
- bnb_4bit_quant_type="fp4"
- bnb_4bit_compute_dtype=compute_dtype
- bnb_4bit_use_double_quant=False

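These values correspond to a `bitsandbytes` quantization config roughly as sketched below; `compute_dtype` is referenced but not defined in the list above, so the dtype chosen here is an assumption.

```python
# Rough reconstruction of the 4-bit quantization settings listed above;
# compute_dtype is not specified in the card, so float16 here is an assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

compute_dtype = torch.float16  # assumption; the original value is not given

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",  # base model named in the card
    quantization_config=bnb_config,
    device_map="auto",
)
```
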
### - PEFT Parameters

- lora_alpha=64
- lora_dropout=0.05
- r=128
- bias="none"

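The PEFT parameters above correspond to a `LoraConfig` along these lines; `task_type` and `target_modules` are not listed in the card, so those entries are assumptions (the module names shown are the usual Llama projection layers).

```python
# Sketch of the LoRA configuration built from the values listed above;
# task_type and target_modules are assumptions, since the card does not specify them.
from peft import LoraConfig, get_peft_model

peft_config = LoraConfig(
    lora_alpha=64,
    lora_dropout=0.05,
    r=128,
    bias="none",
    task_type="CAUSAL_LM",  # assumption
    target_modules=[        # assumption: typical Llama attention/MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# `model` would be the 4-bit base model from the quantization sketch above:
# peft_model = get_peft_model(model, peft_config)
```
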
### - Training Arguments

- num_train_epochs=1
- per_device_train_batch_size=1
- gradient_accumulation_steps=4
- optim="adamw_bnb_8bit"
- save_steps=25
- logging_steps=25
- learning_rate=2e-4
- weight_decay=0.001
- fp16=False
- bf16=False
- max_grad_norm=0.3
- max_steps=-1
- warmup_ratio=0.03
- group_by_length=True
- lr_scheduler_type="constant"

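The training arguments map directly onto `transformers.TrainingArguments`, as sketched below; the output directory and the surrounding trainer setup (for example, TRL's `SFTTrainer` with the dataset above) are not specified in the card, so treat those parts as assumptions.

```python
# Sketch of the training arguments listed above; output_dir is a placeholder,
# and the trainer wiring around it is not described in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="alpha-centauri-v0.1",  # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    optim="adamw_bnb_8bit",
    save_steps=25,
    logging_steps=25,
    learning_rate=2e-4,
    weight_decay=0.001,
    fp16=False,
    bf16=False,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
```
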
## Credits

- Meta ([https://huggingface.co/meta-llama](https://huggingface.co/meta-llama)): for the original Llama-3;
- HuggingFace: for hosting this model and for creating the fine-tuning tools used;
- failspy ([https://huggingface.co/failspy](https://huggingface.co/failspy)): for the base model and the orthogonalization implementation;
- NobodyExistsOnTheInternet ([https://huggingface.co/NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet)): for the incredible dataset;
- Undi95 ([https://huggingface.co/Undi95](https://huggingface.co/Undi95)) and Sao10k ([https://huggingface.co/Sao10K](https://huggingface.co/Sao10K)): my main inspirations for doing these models =]

A huge thank you to all of them ☺️

## About Alpha Centauri

**Alpha Centauri** is a triple star system located in the constellation of **Centaurus**. It comprises three stars: Rigil Kentaurus (also known as **α Centauri A**), Toliman (**α Centauri B**), and Proxima Centauri (**α Centauri C**). Proxima Centauri is the nearest star to the Sun, lying approximately 4.25 light-years (1.3 parsecs) away.

The primary pair, **α Centauri A** and **B**, are both similar to our Sun: **α Centauri A** is a class G star with 1.1 solar masses and 1.5 times the Sun's luminosity, while **α Centauri B** has 0.9 solar masses and under half the Sun's luminosity. They revolve around their shared center of mass every 79 years on an elliptical path, ranging from 35.6 astronomical units apart (nearly Pluto's distance from the Sun) to 11.2 astronomical units apart (around Saturn's distance from the Sun).

Proxima Centauri, or **α Centauri C**, is a small, dim red dwarf (a class M star), too faint to be seen with the naked eye. At roughly 4.24 light-years (1.3 parsecs) from us, it lies slightly nearer than the **α Centauri AB** binary pair. The present gap between **Proxima Centauri** and **α Centauri AB** amounts to around 13,000 astronomical units (0.21 light-years), comparable to over 430 times Neptune's orbital radius.

Two confirmed exoplanets accompany Proxima Centauri: **Proxima b**, discovered in 2016, is Earth-sized and lies within the habitable zone; **Proxima d**, revealed in 2022, is a potential sub-Earth close to its host star. Meanwhile, disputes surround **Proxima c**, a mini-Neptune candidate detected in 2019. Intriguingly, hints suggest that **α Centauri A** might possess a Neptune-sized object in its habitable region, but further investigation is required before confirming whether it truly exists and qualifies as a planet. As for **α Centauri B**, although it was once thought to harbor a planet (named **α Cen Bb**), subsequent research invalidated this claim, leaving it currently without any identified planets.

**Source:** retrieved from [https://en.wikipedia.org/wiki/Alpha_Centauri](https://en.wikipedia.org/wiki/Alpha_Centauri) and processed with [https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).