---
library_name: transformers
datasets:
- Nikil263/Fine_Tuning_Dataset
---

# Model Card for Nikil263/Fine_Tuned_GPT2model

## Model Details

### Model Description

"Nikil263/Fine_Tuned_GPT2model" is a GPT-2 model fine-tuned to generate constraints for Pyomo code. This specialized version of the GPT-2 architecture is tailored to understand and produce text relevant to defining and adding constraints in Pyomo, a popular Python-based optimization modeling language. Through fine-tuning on a dataset of example Pyomo constraints, the model learns to generate syntactically correct and contextually appropriate constraint expressions.

## Training Details

### Fine-Tuning

The fine-tuning dataset for "Nikil263/Fine_Tuned_GPT2model" consists of natural language instructions paired with the corresponding Pyomo code snippets, focused specifically on defining and adding constraints in optimization models. Each entry pairs an instructional prompt, such as "add a constraint of 7X + 3Y >= -7 in our model", with the appropriate Pyomo code, e.g. `model.constraint3 = Constraint(expr=7 * model.x + 3 * model.y >= -7)`. This curated dataset teaches the model to generate accurate and contextually relevant Pyomo constraint code from natural language descriptions, facilitating the automation of optimization model setup.

## Evaluation

### Testing Data & Metrics

#### Testing Data

The test set consists of 100 randomly generated constraints in the same instruction/code format as the fine-tuning dataset.

#### Metrics

- Accuracy: 72.00%
- Average BLEU score: 0.8894
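To illustrate the instruction-to-code mapping the model is trained on, the sketch below converts a linear-constraint instruction into the corresponding Pyomo statement with a simple rule-based parser. This is a hypothetical illustration of the dataset format only (the `PATTERN` regex and `instruction_to_pyomo` helper are not part of the released model), but its input and output mirror the example pair given above.

```python
import re

# Hypothetical rule-based converter illustrating the dataset format:
# instruction "add a constraint of 7X + 3Y >= -7 in our model"
# -> "model.constraint3 = Constraint(expr=7 * model.x + 3 * model.y >= -7)"
PATTERN = re.compile(
    r"(-?\d+)\s*X\s*([+-])\s*(\d+)\s*Y\s*(>=|<=|==)\s*(-?\d+)"
)

def instruction_to_pyomo(instruction: str, name: str = "constraint3") -> str:
    """Translate a linear two-variable constraint instruction into Pyomo code."""
    m = PATTERN.search(instruction)
    if m is None:
        raise ValueError(f"unrecognized instruction: {instruction!r}")
    a, sign, b, op, rhs = m.groups()
    return (
        f"model.{name} = Constraint("
        f"expr={a} * model.x {sign} {b} * model.y {op} {rhs})"
    )

print(instruction_to_pyomo("add a constraint of 7X + 3Y >= -7 in our model"))
# -> model.constraint3 = Constraint(expr=7 * model.x + 3 * model.y >= -7)
```

The fine-tuned model generalizes beyond what a fixed regex can capture, but each training pair follows this instruction/code shape.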
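The evaluation metrics above can be sketched as follows. This is a simplified illustration, assuming accuracy means exact match between generated and reference snippets and using a clipped unigram-precision BLEU with a brevity penalty; the card does not specify the exact BLEU implementation used, and the `preds`/`refs` lists here are made-up examples, not the actual test set.

```python
import math
from collections import Counter

def exact_match_accuracy(preds, refs):
    """Fraction of generated snippets that exactly match the reference."""
    hits = sum(p.strip() == r.strip() for p, r in zip(preds, refs))
    return hits / len(refs)

def unigram_bleu(pred, ref):
    """Simplified BLEU: clipped unigram precision times a brevity penalty."""
    p_tok, r_tok = pred.split(), ref.split()
    if not p_tok:
        return 0.0
    overlap = Counter(p_tok) & Counter(r_tok)  # clip counts by the reference
    precision = sum(overlap.values()) / len(p_tok)
    bp = 1.0 if len(p_tok) >= len(r_tok) else math.exp(1 - len(r_tok) / len(p_tok))
    return bp * precision

# Made-up example pairs (not the real 100-constraint test set):
preds = ["model.c1 = Constraint(expr=7 * model.x + 3 * model.y >= -7)",
         "model.c2 = Constraint(expr=2 * model.x - model.y <= 5)"]
refs  = ["model.c1 = Constraint(expr=7 * model.x + 3 * model.y >= -7)",
         "model.c2 = Constraint(expr=2 * model.x - 1 * model.y <= 5)"]

print(exact_match_accuracy(preds, refs))  # 0.5: one of the two matches exactly
```

A high BLEU score with lower exact-match accuracy, as reported above, is consistent with generations that are mostly correct but differ from the reference in small details.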