Purpose
Generates high-quality commit messages for a given git diff.
Model Description
- Generated by fine-tuning Qwen2.5-Coder-1.5B-Instruct on the bigcode/commitpackft dataset for 2 epochs (see the data-preparation sketch below)
- Trained on a total of 277 languages
- Achieved a final training loss between 1.0 and 1.7 (the dataset does not contain an equal number of rows for each language)
- For common languages (Python, Java, JavaScript, C, etc.), the loss reached a minimum of 1.0335
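The exact training script is not published in this card. The following is a minimal data-preparation sketch for a comparable supervised fine-tune; the commitpackft field names (old_contents, new_contents, message), the "python" subset name, and the prompt wording are assumptions rather than documented details of how commitGen was trained.

from datasets import load_dataset

# The "python" subset name is an assumption; commitpackft is organized by language.
ds = load_dataset("bigcode/commitpackft", "python", split="train")

def to_chat(example):
    # Present the file change as the user turn and the real commit message as the target.
    # Field names here are assumed from the public dataset card and may differ.
    user = (
        "create a commit message for given git difference\n"
        f"--- old\n{example['old_contents']}\n"
        f"+++ new\n{example['new_contents']}"
    )
    return {
        "messages": [
            {"role": "user", "content": user},
            {"role": "assistant", "content": example["message"]},
        ]
    }

chat_ds = ds.map(to_chat, remove_columns=ds.column_names)
# chat_ds can then be passed to a standard supervised fine-tuning loop
# (e.g. TRL's SFTTrainer) for 2 epochs, as described above.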
Environmental Impact
- Hardware Type: GeForce RTX 4060 Ti (16 GB)
- Hours used: 10 hours
- Cloud Provider: none (trained locally)
Results
Inference
from llama_cpp import Llama

# Download the GGUF file from the Hugging Face Hub and load it
llm = Llama.from_pretrained(
    repo_id="seniruk/commitGen-gguf",
    filename="commitGen.gguf",
)

diff = ""  # the git diff to describe
instruction = ""  # the instruction, e.g. 'create a commit message for given git difference'

prompt = "{}{}".format(instruction, diff)

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt},
]

# Generate the commit message
output = llm.create_chat_completion(
    messages=messages,
    temperature=0.5,
)

llm_message = output['choices'][0]['message']['content']
print(llm_message)
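To run the model on real changes, capture the diff from the local repository and drop it into the prompt above. This is a small usage sketch; the choice of 'git diff --staged' (staged changes only) is an assumption about the intended workflow.

import subprocess

# Capture the staged changes from the current repository
# (a sketch; adjust the git arguments to your workflow).
diff = subprocess.run(
    ["git", "diff", "--staged"],
    capture_output=True, text=True, check=True,
).stdout

instruction = "create a commit message for given git difference\n"
prompt = "{}{}".format(instruction, diff)
# Then build `messages` and call llm.create_chat_completion() exactly as above.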
Model tree for seniruk/commitGen-gguf
- Base model: Qwen/Qwen2.5-1.5B
- Finetuned: Qwen/Qwen2.5-Coder-1.5B
- Finetuned: Qwen/Qwen2.5-Coder-1.5B-Instruct