---
datasets:
- prithivMLmods/Poseidon-Reasoning-5M
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
library_name: transformers
tags:
- text-generation-inference
- moe
- code
- science
- biology
- chemistry
- thinking
pipeline_tag: text-generation
---

# **Poseidon-Reasoning-1.7B**

> **Poseidon-Reasoning-1.7B** is a general-purpose, high-efficiency reasoning model fine-tuned from **Qwen3-1.7B** on the first 70K entries of the **Poseidon-Reasoning-5M** dataset. Designed for **mathematical, scientific, and code-related reasoning**, it balances structured logic with contextual fluency, making it well suited to domains that demand symbolic precision and algorithmic thought.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Poseidon-Reasoning-1.7B-GGUF](https://huggingface.co/prithivMLmods/Poseidon-Reasoning-1.7B-GGUF)
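
For local inference on the GGUF build, here is a minimal sketch using `llama-cpp-python`; the `*Q8_0.gguf` filename pattern is an assumption, so match it against a file that actually exists in the GGUF repo.

```python
# Minimal GGUF sketch via llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Poseidon-Reasoning-1.7B-GGUF",
    filename="*Q8_0.gguf",  # assumed quant name; pick a file the repo ships
    n_ctx=4096,             # context window; raise for longer prompts
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Differentiate sin(x) * ln(x)."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```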

## **Key Features**

1. **Versatile Reasoning Model**
   Fine-tuned for multi-domain reasoning tasks, including mathematics, scientific computation, and code logic, and capable of navigating structured problem-solving and analytic workflows.

2. **Qwen3-1.7B Foundation**
   Built upon **Qwen3-1.7B**, providing multilingual reasoning capability, efficient token handling, and strong alignment with instruction-following tasks.

3. **Powered by Poseidon-Reasoning-5M (70K Sample Subset)**
   Trained on a curated subset of 70K entries from the **Poseidon-Reasoning-5M** dataset, focusing on tasks that emphasize **symbolic accuracy**, **step-by-step thinking**, and **STEM-relevant clarity**.

4. **Balanced Thinking Mode**
   Supports structured, guided thinking without excessive hallucination or unnecessary verbosity. Ideal for prompt-driven logic tasks of moderate complexity (see the thinking-mode sketch after this list).

5. **Rich Format Output**
   Produces **Markdown**, **Python**, **LaTeX**, and tabular structures, which is helpful for notebooks, scientific documentation, and programmatic outputs.

6. **1.7B Parameter Footprint**
   Lightweight enough to run on **mid-tier GPUs or CPU-only environments**, while offering scalable reasoning power for research, teaching, and light automation.
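
Because the base model is Qwen3, the chat template may expose Qwen3's `enable_thinking` switch for toggling the reasoning trace; whether this fine-tune preserves that switch is an assumption, so treat the snippet below as a sketch rather than a guaranteed interface.

```python
# Sketch: toggling Qwen3-style thinking mode at template time.
# The enable_thinking kwarg comes from the Qwen3 chat template; verify
# that this fine-tune still honors it before relying on it.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Poseidon-Reasoning-1.7B")

messages = [{"role": "user", "content": "Integrate x * e^x."}]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # True (the usual default) keeps the <think>...</think> trace
)
print(text)
```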

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Poseidon-Reasoning-1.7B"

# Load the model and tokenizer; device_map="auto" places weights on the
# available GPU(s) and falls back to CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve: What is the derivative of sin(x) * ln(x)?"

messages = [
    {"role": "system", "content": "You are a structured reasoning assistant for math, science, and code."},
    {"role": "user", "content": prompt}
]

# Render the chat template into a single prompt string.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
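
For reference, the quickstart prompt has a closed-form answer by the product rule, which the model's final response should match:

$$
\frac{d}{dx}\left[\sin(x)\ln(x)\right] = \cos(x)\ln(x) + \frac{\sin(x)}{x}
$$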

## **Intended Use**

* General-purpose symbolic reasoning
* Math and science tutoring, theorem solving, and computational guidance
* Structured coding under constraints or STEM-based tasks
* Lightweight environments where interpretability and precision matter
* Prompt-driven reasoning with deterministic steps

## **Limitations**

* Not designed for broad open-domain conversation
* May underperform on creative writing or emotional expression
* Best results occur with **clear problem statements and goal-directed prompts**
* Less suitable for speculative or abstract reasoning without structure