Model Card for sai1881/PyCoder

Model Details

Model Description

The sai1881/PyCoder model is a specialized language model fine-tuned to generate Python code and respond to programming queries. It is based on Microsoft's Phi-3-mini-4k-instruct and fine-tuned on a dataset of diverse Python programming challenges, solutions, and expert discussions. The model aims to assist developers with code suggestions, debugging tips, and explanations of Python concepts, enhancing productivity in coding tasks.

Developed by: Sai Manoj
Model type: Causal Language Model
Language(s) (NLP): English
License: MIT
Finetuned from model: Microsoft's Phi-3-mini-4k-instruct

Uses

Direct Use

This model is intended for direct integration into development environments and coding platforms where automated code generation and assistance are beneficial. It is particularly useful for IDE plugins, online coding tutorials, and automated code review tools, where it can act as a programming assistant that helps with writing, debugging, and optimizing Python code.

Out-of-Scope Use

The model is not intended to replace human programmers or to be used in situations where the security and reliability of the code are critical without human oversight. It should not be used as the sole decision-maker in production environments or in systems where its suggestions might cause harm if incorrect.

Bias, Risks, and Limitations

Despite being trained on a comprehensive dataset of Python code and discussions, this model might still inherit biases from the training data or exhibit unexpected behaviors in generating code. The output should always be monitored and evaluated in the context of its use to ensure it aligns with best coding practices and security standards.

Recommendations

Users, both direct and downstream, should be informed about the model's limitations and potential biases. It is recommended that outputs be reviewed by experienced developers before being used in critical applications to ensure the correctness and security of the code. Regular updates and feedback mechanisms should be implemented to continuously improve the model's performance and safety.

How to Get Started with the Model

To get started with the sai1881/PyCoder model, developers can use the following code snippet to integrate the model into their applications:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

# Load the fine-tuned PyCoder weights
model = AutoModelForCausalLM.from_pretrained(
    "sai1881/PyCoder",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)

# The tokenizer is shared with the base Phi-3-mini-4k-instruct model
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "user", "content": " Code to read a csv file first separate numerical and string columns, second fill na's with zero or empty string "},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 1000,
    "return_full_text": False,
    "temperature": 0.5,
    "do_sample": True,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])

Output: {"RESPONSE": "def read_csv_fillna(file_path):\n """\n Read a csv file first separate numerical and string columns, second fill na's with zero or empty string\n """\n df = pd.read_csv(file_path)\n df_num = df.select_dtypes(include='number')\n df_str = df.select_dtypes(exclude='number')\n df_str = df_str.fillna('')\n df_num = df_num.fillna(0)\n df = pd.concat([df_num, df_str], axis=1)\n return df"}

Training Details

Enhanced Training Details

The sai1881/PyCoder model utilizes advanced training techniques to optimize performance and memory usage, making it suitable for deployment on various hardware configurations. Here are detailed training aspects and configurations:

Training Environment and Libraries

  • PEFT Configuration: Parameter-efficient fine-tuning (PEFT) was implemented with LoraConfig for causal language modeling, applying LoRA (Low-Rank Adaptation) with specific dropout settings, bias configurations, and targeted layers within the model architecture, improving learning efficiency without modifying the bulk of the base parameters (a configuration sketch follows).
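
As an illustration of the kind of PEFT setup described above, the sketch below configures LoRA for causal language modeling with the peft library. The rank, alpha, dropout, bias setting, and target modules used for PyCoder are not published, so the values here are assumptions.

# Illustrative PEFT setup -- rank, alpha, dropout, and target modules are assumed values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)

lora_config = LoraConfig(
    task_type="CAUSAL_LM",                   # causal language modeling, as described above
    r=16,                                    # assumed LoRA rank
    lora_alpha=32,                           # assumed scaling factor
    lora_dropout=0.05,                       # assumed dropout setting
    bias="none",                             # assumed bias configuration
    target_modules=["qkv_proj", "o_proj"],   # assumed target layers for Phi-3
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable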

Data Processing and Augmentation

  • Dataset: The model was fine-tuned on Nan-Do/instructional_code-search-net-python, an instructional Python dataset of about 417,000 records covering tasks such as generating code from natural-language descriptions and vice versa.
  • Dynamic Tokenization: Tokenization was adjusted dynamically, using a customized method that applies the chat format and a maximum sequence length so sequences are handled consistently during training (a minimal sketch follows this list).
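
As a rough sketch of this preparation step, the snippet below loads the dataset and applies the tokenizer's chat template before tokenizing with a fixed maximum length. The column names (INSTRUCTION, RESPONSE) and the max_length value are assumptions about the dataset and training setup.

# Illustrative data preparation -- column names and max_length are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("Nan-Do/instructional_code-search-net-python", split="train")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

def tokenize(example):
    # Wrap each instruction/response pair in the model's chat format,
    # then tokenize with a fixed maximum sequence length.
    messages = [
        {"role": "user", "content": example["INSTRUCTION"]},
        {"role": "assistant", "content": example["RESPONSE"]},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)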

Training Strategy

  • DeepSpeed Integration: Utilized DeepSpeed's ZeRO-3 optimization for memory efficiency, enabling training with higher batch sizes and reduced memory footprint.
  • Training Arguments: Configured with a cosine learning rate scheduler, BF16 mixed precision for faster computation, and gradient checkpointing to handle longer sequences effectively.
  • Batch and Memory Management: Employed gradient accumulation and batch size adjustments to manage GPU memory efficiently, ensuring stable training without overloading system resources (an illustrative configuration follows this list).
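
A minimal sketch of how these choices might be expressed with TrainingArguments and a DeepSpeed ZeRO-3 configuration dictionary. The learning rate, batch size, and accumulation steps shown are assumed values, not the ones actually used for PyCoder.

# Illustrative training configuration -- learning rate, batch sizes, and
# accumulation steps are assumed values.
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 3},        # DeepSpeed ZeRO-3 for memory efficiency
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="pycoder-checkpoints",   # hypothetical output path
    lr_scheduler_type="cosine",         # cosine learning rate scheduler
    bf16=True,                          # BF16 mixed precision
    gradient_checkpointing=True,        # trade compute for memory on long sequences
    per_device_train_batch_size=4,      # assumed
    gradient_accumulation_steps=8,      # assumed
    learning_rate=2e-4,                 # assumed
    deepspeed=ds_config,                # ZeRO-3 configuration above
)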

Evaluation and Output

  • Model Saving and Evaluation: Configured periodic model saving every 50 steps, with a limit on the number of retained checkpoints to manage disk space. Model outputs were evaluated on a separate test dataset to check generalization to unseen prompts (see the sketch below).
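
These settings correspond to standard TrainingArguments fields; save_steps=50 reflects the interval stated above, while the checkpoint limit and evaluation cadence are assumptions.

# Checkpointing and evaluation settings -- save_steps=50 is stated above;
# save_total_limit and eval_steps are assumed values.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pycoder-checkpoints",  # hypothetical output path
    save_strategy="steps",
    save_steps=50,                     # save a checkpoint every 50 steps
    save_total_limit=3,                # keep only the most recent checkpoints
    eval_strategy="steps",             # evaluate on the held-out test split during training
    eval_steps=50,                     # assumed evaluation cadence
)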

Debugging and Logging

  • Logging Configuration: Comprehensive logging was set up to track training progress and configuration, aiding debugging and ensuring transparency throughout the training process (a minimal example follows).
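
A minimal example of such a setup using Python's logging module together with the Transformers logging utilities; the exact configuration used for PyCoder is not published, so this is only an illustration.

# Minimal logging setup of the kind described above (illustrative only).
import logging
from transformers.utils import logging as hf_logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    level=logging.INFO,
)
hf_logging.set_verbosity_info()  # surface Transformers training progress and configuration logs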

Environment and Software Details

To ensure optimal performance and compatibility, the sai1881/PyCoder model was developed and trained in a well-defined software and hardware environment. The detailed specifications are listed below.

System Configuration

  • Operating System: Linux 5.15.146.1 on Microsoft WSL2, tailored for high-performance computing tasks.
  • Processor and Architecture: x86_64 architecture with 64-bit ELF, utilizing a robust multi-core setup.
  • Memory: A total of 62.64 GB system memory with 57.74 GB available, ensuring sufficient resources for large-scale data processing and model training.
  • Cores: 14 physical cores and 28 logical cores, providing substantial parallel processing capabilities.

GPU and CUDA Details

  • GPUs: Two NVIDIA RTX A4500 graphics cards, each equipped with 19.99 GB of memory and a compute capability of 8.6, which is ideal for deep learning and large model training.
  • CUDA Version: CUDA 11.8, allowing for efficient exploitation of GPU capabilities in training and inference processes.

Software Versions

  • Python Version: Python 3.10.12, supporting modern software libraries and frameworks.
  • PyTorch Version: PyTorch 2.2.1+cu118, optimized for CUDA 11.8 to leverage GPU acceleration.
  • Transformers Library Version: 4.41.2, used for managing pre-trained models and implementing custom training routines.
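
To check a local environment against the versions listed above, a quick sanity-check snippet such as the following can be used.

# Quick environment check against the versions listed above.
import sys
import torch
import transformers

print("Python:", sys.version.split()[0])           # expected 3.10.12
print("PyTorch:", torch.__version__)                # expected 2.2.1+cu118
print("Transformers:", transformers.__version__)    # expected 4.41.2
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)          # expected 11.8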

This environment provides a robust foundation for developing and training advanced machine learning models such as sai1881/PyCoder, ensuring compatibility and performance optimization specific to the needs of Python code generation and assistance.
