<b>1. Overview</b>

<b>Nerdsking-python-coder-3B-i</b> is a 3B-parameter, partially uncensored model focused on <b>Python</b>, with <b>English</b> as its main language. It was trained extensively on Python; although it can code in other languages as well, performance in them will not reach the level it achieves in Python.

<u>Key Characteristics:</u>

✔ Parameter count: 3B
✔ Primary domain: Python programming
✔ Secondary capabilities: General coding, technical English
✔ Training focus: Python logic, standard library usage, algorithmic reasoning
✔ Alignment: Partially uncensored (developer-oriented)


<b>2. Benchmark</b>
After months of refinement, <b>Nerdsking-python-coder-3B-i</b> achieved <b>88.41 on HumanEval (bf16)</b>, placing it among the highest-performing Python-focused 3B models reported on HumanEval and surpassing much larger models in that area.

<u>2.1 Evaluation specs</u> (a minimal harness sketch follows this list):

✔ Official HumanEval execution protocol
✔ Deterministic, zero-shot evaluation (pass@1)
✔ temperature = 0.1
✔ do_sample = False
✔ dtype = bfloat16
✔ Fixed system prompt: “You are an expert Python coding assistant.”
✔ Evaluated on fully merged weights
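
The exact harness behind the reported score is not published here; the sketch below is a minimal, hedged reproduction of this setup using the openai/human-eval package (read_problems and write_jsonl are its actual API). max_new_tokens and the completion post-processing are assumptions, not part of the spec above.

<code>
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl  # openai/human-eval

model_id = "Nerdsking/Nerdsking-python-coder-3B-i"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

SYSTEM_PROMPT = "You are an expert Python coding assistant."  # fixed, per the spec

samples = []
for task_id, problem in read_problems().items():
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": problem["prompt"]},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        input_ids,
        max_new_tokens=512,  # assumption; the spec does not state a budget
        do_sample=False,     # deterministic decoding, per the spec
        # temperature = 0.1 is listed in the spec but has no effect
        # when do_sample=False.
    )
    completion = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    # Post-processing (stripping markdown fences, isolating the function
    # body) is omitted here for brevity.
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Score afterwards with the package's CLI:
#   evaluate_functional_correctness samples.jsonl
</code>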




<b>3. S.o.n.n.</b>
The model was developed under <b>"s.o.n.n." (single omni neural network)</b>, a concept created by <b>IPMN</b> at <b>Nerdsking.com</b> that is both a precise way of fine-tuning/altering existing models and the basis for a new artificial intelligence standard, currently in development.

When applied to pre-existing models, s.o.n.n. provides (a generic illustration follows this list):
• A parameter-preserving refinement methodology
• Global behavioral shaping, rather than task-local adapters
• Avoidance of the fragmentation common in multi-adapter or task-siloed approaches
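
s.o.n.n. tooling itself is not published; as a generic illustration (not official s.o.n.n. code) of the "single fully merged network, no task-local adapters" property, one can inspect a checkpoint's parameter names:

<code>
# Generic illustration (not official s.o.n.n. tooling): check that the
# published checkpoint is a single fully merged network, with no
# task-local adapter parameters left in it.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Nerdsking/Nerdsking-python-coder-3B-i", torch_dtype="bfloat16"
)

adapter_like = [
    name for name, _ in model.named_parameters()
    if "lora" in name.lower() or "adapter" in name.lower()
]
print("adapter-style parameters:", adapter_like or "none found")
print(f"total parameters: {model.num_parameters() / 1e9:.2f}B")
</code>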


<b>4. Intended Use & Limitations</b>

<u>Intended use:</u>

✔ Python development
✔ Algorithmic problem solving
✔ Code reasoning and refactoring
✔ Developer-centric workflows

<u>Known limitations:</u>

• Not optimized for non-Python languages
• Not instruction-chat aligned for conversational safety
• Not trained for legal, medical, or policy compliance use cases


<b>5. Quick Start (Inference)</b>

<code>
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nerdsking/Nerdsking-python-coder-3B-i"

# Load the tokenizer and the model in bfloat16 (the dtype used for the
# reported benchmark), spreading layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a Python function that checks if a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</code>
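
Since the base model is the instruction-tuned Qwen/Qwen2.5-Coder-3B-Instruct (per the card metadata below), wrapping the prompt in the tokenizer's chat template may give better-formed answers. A hedged variant, reusing the objects from the snippet above:

<code>
# Optional: wrap the prompt in the tokenizer's chat template, which the
# instruction-tuned Qwen2.5-Coder base model expects. Reuses `model` and
# `tokenizer` from the Quick Start snippet above.
messages = [
    {"role": "system", "content": "You are an expert Python coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
</code>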



<b>6. Ethical & Safety Notes</b>

This model is intended for technical and research use. Due to relaxed alignment constraints, outputs should be reviewed before deployment in production or public-facing systems.
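
One conservative review pattern (an illustration, not an official recommendation) is to keep generated code out of the host process: write it to a file, review it, and only then run it in a subprocess with a timeout:

<code>
# Illustration only: keep model-generated code out of the host process.
# Write it to a temporary file, review it, then run it in a subprocess
# with a timeout so runaway code cannot hang the caller.
import subprocess
import tempfile

generated_code = "print('hello from generated code')"  # placeholder for model output

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated_code)
    path = f.name

with open(path) as f:
    print(f.read())  # human review step before anything is executed

result = subprocess.run(["python", path], capture_output=True, text=True, timeout=10)
print(result.stdout or result.stderr)
</code>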

Files changed (1): README.md (+14 -3)

README.md CHANGED
@@ -1,3 +1,14 @@
- ---
- license: fair-noncommercial-research-license
- ---
+ ---
+ license: fair-noncommercial-research-license
+ language:
+ - en
+ - pt
+ metrics:
+ - type: HumanEval zero-shot pass@1
+   value: 88.41
+ base_model:
+ - Qwen/Qwen2.5-Coder-3B-Instruct
+ pipeline_tag: text-generation
+ tags:
+ - code
+ ---