---
license: cc-by-nc-4.0
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
base_model: Pacific-Prime/pacific-prime-code
tags:
- code
- python
- i64
- complexity-deep
- sft
pipeline_tag: text-generation
library_name: transformers
---

# Pacific-Prime: Python Node

A **pure Python specialist** fine-tuned from Pacific-Prime Code (I64 architecture, 1.5B parameters).

## Skills

- Python basics & the standard library
- Algorithms & data structures
- Object-oriented programming
- Decorators & generators
- List comprehensions
- File I/O & error handling

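As an illustration of the target domain (hand-written, not model output), here is a small exercise combining two of the listed skills, decorators and generators:

```python
import functools

def take(n):
    """Decorator that collects the first n values of a generator function."""
    def wrap(gen_func):
        @functools.wraps(gen_func)
        def inner(*args, **kwargs):
            it = gen_func(*args, **kwargs)
            return [next(it) for _ in range(n)]
        return inner
    return wrap

@take(5)
def fibonacci():
    # Infinite generator; the decorator truncates it to five values.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(fibonacci())  # first five Fibonacci numbers
```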
## Training

- **Architecture**: I64 (Complexity-Deep)
- **Parameters**: 1.5B
- **Base model**: pacific-prime-code (checkpoint epoch 70)
- **Method**: full SFT (no LoRA)
- **Dataset**: python_code_instructions_18k_alpaca (18K samples)
- **Epochs**: 1000
- **Max context**: 4096 tokens

## Inference with vLLM-I64

Use our custom vLLM engine with native I64 support:

**[vllm-i64](https://github.com/Complexity-ML/vllm-i64)**

```bash
git clone https://github.com/Complexity-ML/vllm-i64.git
cd vllm-i64
pip install -e .
```

```python
from vllm import LLM, SamplingParams

model = LLM(model="Pacific-Prime/python-node")
params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "User: Write a Python function to find the longest common subsequence of two strings.\nAssistant:"
output = model.generate([prompt], params)
print(output[0].outputs[0].text)
```
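
For reference, a correct answer to the example prompt above is a standard dynamic-programming solution (a hand-written reference, not sampled model output):

```python
def longest_common_subsequence(a: str, b: str) -> str:
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])

    # Backtrack from dp[m][n] to recover one LCS string.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(longest_common_subsequence("AGGTAB", "GXTXAYB"))  # → GTAB
```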

## Serve Your Own I64 Model

Trained your own I64 model with [complexity-deep](https://github.com/Complexity-ML/complexity-deep)? Serve it with vllm-i64:

```python
from vllm import LLM, SamplingParams

model = LLM(model="/path/to/your/i64-model")
params = SamplingParams(temperature=0.7, max_tokens=4096)
output = model.generate(["User: Hello!\nAssistant:"], params)
print(output[0].outputs[0].text)
```
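
Both snippets above use a plain `User: ...\nAssistant:` turn format. A small helper for building such prompts (illustrative only; this card does not define an official chat template) could look like:

```python
def build_prompt(user_message: str) -> str:
    # Wrap a user message in the plain-text turn format used by the
    # examples in this card: "User: <message>\nAssistant:".
    return f"User: {user_message}\nAssistant:"

prompt = build_prompt("Write a Python function to reverse a string.")
```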

## Links

- [Complexity-Deep](https://github.com/Complexity-ML/complexity-deep) — training framework & architecture
- [vllm-i64](https://github.com/Complexity-ML/vllm-i64) — inference engine for I64 models

## License

CC BY-NC 4.0