calebboud committed · verified · Commit b557cd6 · 1 parent: 8ad6b59

Add README with usage instructions

Files changed (1): README.md (+104 −0)
---
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
tags:
- vibescript
- code-compression
- lora
- gguf
- qwen3
language:
- en
pipeline_tag: text-generation
---

# VibeScript - Code to DSL Converter

**vibecoder-discern** converts natural language and code into VibeScript, a compact symbolic DSL for expressing programming concepts.

## What is VibeScript?

VibeScript compresses verbose code into symbolic notation:

| Code | VibeScript |
|------|------------|
| `function add(a, b) { return a + b; }` | `Ω> add!(a, b)` |
| `const users = await db.query(...)` | `δ.m.p.query()` |
| `app.get('/api/users', ...)` | `θ.m.route(θ.e, ζ.x)` |
| `if (error) { throw new Error(...) }` | `~system~γ#error!` |

## Model Variants

| Path | Format | Size | Use Case |
|------|--------|------|----------|
| `/lora-adapter/` | LoRA | ~13 MB | Merge with your own Qwen3-1.7B |
| `/merged-model/` | Hugging Face | ~3.4 GB | Ready to use with `transformers` |
| `/gguf/` | GGUF Q4_K_M | ~1.1 GB | llama.cpp / Ollama |

## Quick Start

### llama.cpp (GGUF)

```bash
# Download the quantized model
wget https://huggingface.co/calebboud/vibescript/resolve/main/gguf/vibecoder-discern-1.7B-Q4_K_M.gguf

# Run an interactive conversion
llama-cli -m vibecoder-discern-1.7B-Q4_K_M.gguf \
  -p "Convert this to vibescript: function multiply(x, y) { return x * y; }" \
  -n 100 --temp 0.7
```

### Transformers (Merged Model)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("calebboud/vibescript", subfolder="merged-model")
tokenizer = AutoTokenizer.from_pretrained("calebboud/vibescript", subfolder="merged-model")

prompt = "Convert this to vibescript: console.log('Hello World')"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### LoRA Adapter (Merge Yourself)

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "calebboud/vibescript", subfolder="lora-adapter")
merged = model.merge_and_unload()  # fold the LoRA weights into the base model
merged.save_pretrained("vibecoder-discern-merged")  # optional: persist the merged model
```

## Training Details

- **Base Model:** Qwen/Qwen3-1.7B
- **Method:** LoRA (r=8, alpha=16)
- **Target Modules:** q_proj, k_proj, v_proj, o_proj
- **Dataset:** 885 code → VibeScript examples
- **Task:** CAUSAL_LM
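
The hyperparameters above correspond to a `peft` configuration along these lines. This is a sketch reconstructed from the list, not the exact training script:

```python
from peft import LoraConfig

# Reconstructed from the Training Details above (illustrative)
lora_config = LoraConfig(
    r=8,                  # LoRA rank
    lora_alpha=16,        # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```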

## VibeScript Symbols

| Symbol | Meaning |
|--------|---------|
| `Ω>` | Function definition |
| `Σ` | Route/scaffold |
| `δ` | Database operations |
| `θ` | HTTP/API |
| `γ` | Error handling |
| `ζ` | Structure/scaffold |
| `α` | Analysis |
| `ε` | Dependencies |
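
To make raw model output easier to read, the table above can be turned into a lookup for annotating VibeScript lines. This is an illustrative helper, not something shipped with the model:

```python
# Glossary taken from the symbol table above
VIBE_SYMBOLS = {
    "Ω>": "Function definition",
    "Σ": "Route/scaffold",
    "δ": "Database operations",
    "θ": "HTTP/API",
    "γ": "Error handling",
    "ζ": "Structure/scaffold",
    "α": "Analysis",
    "ε": "Dependencies",
}

def annotate(line: str) -> str:
    """Append plain-English notes for any known symbols found in a VibeScript line."""
    notes = [meaning for sym, meaning in VIBE_SYMBOLS.items() if sym in line]
    return f"{line}  # {', '.join(notes)}" if notes else line

print(annotate("θ.m.route(θ.e, ζ.x)"))
# → θ.m.route(θ.e, ζ.x)  # HTTP/API, Structure/scaffold
```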

## Coming Soon

- **vibecoder-expand**: VibeScript → code (the reverse direction)

## License

Apache 2.0