XeTute committed
Commit 1d11f78 · verified · 1 Parent(s): 3eedc81

Update README.md

Files changed (1):
  1. README.md +23 -38
README.md CHANGED
@@ -6,7 +6,8 @@ language:
 - en
 - zh
 - ur
-base_model: XeTute/Intellect_V0.2-1.6B
+base_model:
+- openai-community/gpt2-xl
 tags:
 - reasoning
 - tiny
@@ -18,46 +19,30 @@ tags:
 pipeline_tag: text-generation
 ---
 
-# XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF
-This model was converted to GGUF format from [`XeTute/Intellect_V0.2-1.6B`](https://huggingface.co/XeTute/Intellect_V0.2-1.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/XeTute/Intellect_V0.2-1.6B) for more details on the model.
+> [!TIP]
+> Intellect V0.2 (1.6B) is a small model that is still under development and has not been extensively tested. We do not recommend deploying it for production use, but it performs well for private applications. Feedback is welcome.
 
-## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux)
+# Introduction
+We introduce **Intellect 1.6B (V0.2)**, our first-ever reasoning model. It is a full-parameter fine-tune of **GPT2-XL** (licensed under MIT), trained using the **Pakistan-China-Alpaca** dataset (licensed under MIT).
+Intellect V0.2 (1.6B) is licensed under **Apache 2.0**, meaning you are free to use it in personal projects. However, this fine-tune is highly experimental, and we do not recommend it for serious, production-ready deployments.
+[You can find the FP32 version here.](https://huggingface.co/XeTute/Intellect_V0.2-1.6B)
 
-```bash
-brew install llama.cpp
-
-```
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -c 2048
+# Usage
+Since the training data consisted only of single-turn (one-message-in, one-message-out) pairs, the model often repeats itself once the user sends a follow-up question.
+The chat template is Alpaca, which looks like the following:
+```txt
+### Instruction:
+{{{ INPUT }}}
+### Response:
+{{{ OUTPUT }}}
 ```
 
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
+# Training Details
+We used **SGD** (instead of AdamW) with an initial learning rate of **1.0e-5**, which allowed us to train the model with a batch size of **1** and a **maximum context length of 1K (the maximum GPT2-XL supports)** while staying within our allocated **64 GB of memory** for this project.
+Training was completed in **under a day**, which is why **[PhantasiaAI](https://xetute.com/PhantasiaAI)** was unavailable from **05/02/2025 00:00 - 19:00**. The service is now fully operational.
 
-Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
+---
 
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -c 2048
-```
+[Visit our website.](https://xetute.com)
+[Check out our Character.AI alternative.](https://xetute.com/PhantasiaAI)
+[Support us financially.](https://ko-fi.com/XeTute)
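
The Alpaca chat template added in this commit can be assembled in code. A minimal sketch for single-turn use; the helper name and the example instruction are illustrative, not part of the model card:

```python
# Minimal sketch: build an Alpaca-style prompt for Intellect V0.2 (1.6B),
# mirroring the template in the README. The model completes the text
# after "### Response:".

def format_alpaca(instruction: str) -> str:
    """Wrap a single user message in the Alpaca chat template."""
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = format_alpaca("What is the capital of Pakistan?")

# Generation itself (commented out, since it downloads the 1.6B model):
# from transformers import pipeline
# generator = pipeline("text-generation", model="XeTute/Intellect_V0.2-1.6B")
# print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```

Because the training data was single-turn only, keeping each request in a fresh prompt (rather than appending follow-ups) should sidestep the repetition issue the README mentions.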
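
The training setup described in the Training Details section (SGD, initial learning rate 1.0e-5, batch size 1, 1024-token context) can be summarized as a configuration sketch. The key names below follow common conventions and are assumptions; the authors' actual training script is not published:

```python
# Illustrative summary of the fine-tuning hyperparameters stated in the
# README; the key names are assumptions, not the authors' actual config.
training_config = {
    "base_model": "openai-community/gpt2-xl",
    "optimizer": "sgd",                  # chosen over AdamW to cut memory use
    "learning_rate": 1.0e-5,             # initial learning rate
    "per_device_train_batch_size": 1,
    "max_seq_length": 1024,              # GPT2-XL's maximum context window
}

# Unlike AdamW, plain SGD keeps no per-parameter moment buffers, so for a
# ~1.6B-parameter FP32 model the optimizer state is far smaller, which is
# what makes the stated 64 GB memory budget plausible.
```
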