Interplay-AppCoder: A Code Generation LLM
Iterate’s new top-performing Interplay-AppCoder LLM scores 2.97 on usefulness and 2.48 on functional correctness on the ICE benchmark
The world of LLMs is growing rapidly: several new LLMs and fine-tunes are released daily by the open-source community, startups, and enterprises, as new models are built to perform novel tasks.
One of our Iterate.ai R&D projects has been to experiment with several LLMs and then train a code generation LLM on the latest generative AI frameworks and libraries. Our goal was to generate working, up-to-date code for generative AI projects that we build alongside our enterprise clients.
As part of the process, we have been fine-tuning CodeLlama-7B and -34B and WizardCoder-15B and -34B. We combined that fine-tuning with training on our hand-coded dataset covering LangChain, YOLOv8, Vertex AI, and many other modern libraries we use daily. Our released model is fine-tuned on top of WizardCoder-15B.
The result is Interplay-AppCoder LLM, a brand-new, high-performing code generation model, which we are releasing on October 31, 2023.
Model Details
- Developed by: Iterate.ai
- Language(s) (NLP): Python (LangChain, YOLOv8, Vertex AI)
- Finetuned from model: WizardCoder-15B-v1.0
Model Demo
Bias, Risks, and Limitations
The model is optimized for code generation and should not be used as a chat model.
How to Get Started with the Model
Use the code below to get started with the model.
# Import the model from the Hugging Face repository
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    pipeline,
    logging,
)
model_repo_id = "iterateai/Interplay-AppCoder"
# Load the model in FP16
iterate_model = AutoModelForCausalLM.from_pretrained(
model_repo_id,
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map={"": 0},
trust_remote_code=True
)
# Note: you can quantize the model using a BitsAndBytesConfig ("bnb config") parameter to load the model on a T4 GPU
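# As an illustration of that note (a sketch, not part of the original card;
# the exact settings are our assumptions), a 4-bit load via BitsAndBytesConfig
# can replace the FP16 load above to fit the model on a T4 GPU:
# bnb_config = BitsAndBytesConfig(
#     load_in_4bit=True,
#     bnb_4bit_quant_type="nf4",
#     bnb_4bit_compute_dtype=torch.float16,
# )
# iterate_model = AutoModelForCausalLM.from_pretrained(
#     model_repo_id,
#     quantization_config=bnb_config,
#     device_map={"": 0},
#     trust_remote_code=True,
# )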
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_repo_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
# Inference
logging.set_verbosity(logging.CRITICAL)
# Sample prompt
prompt = "Can you provide a python script that uses the YOLOv8 model from the Ultralytics library to detect people in an image, draw green bounding boxes around them, and then save the image?"
pipe = pipeline(task="text-generation", model=iterate_model, tokenizer=tokenizer, max_length=1024)
result = pipe(f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response:", temperature=0.1, do_sample=True)
print(result[0]['generated_text'])
Sample demo notebook
Evaluation
Testing Data
Dataset used for evaluation [https://drive.google.com/file/d/1R6DDyBhcR6TSUYFTgUosJxrvibkR1BHC/view]
Metrics
Our code generation LLM was created and fine-tuned with a new and unique knowledge base. As such, we utilized the newly published ICE score benchmark methodology for evaluating the code generated by the Interplay-AppCoder LLM.
The ICE methodology provides metrics for Usefulness and Functional Correctness as a baseline for scoring code generation.
- Usefulness: whether the code output from the model is clear, logically ordered, and human-readable, and whether it covers all functionalities of the problem statement when compared with the reference code.
- Functional Correctness: an LLM with complex reasoning capabilities is used to conduct unit tests, taking into account the given question and the reference code.
We utilized GPT-4 to measure the above metrics and provide a score from 0 to 4. This is the test dataset and Jupyter notebook we used to perform the benchmark.
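To make the judging step concrete, here is a minimal sketch of how an LLM-as-judge score can be requested and parsed. The prompt template and the `parse_score` helper are illustrative assumptions, not the exact ICE implementation; the judge reply would come from a model such as GPT-4.

```python
import re

def build_judge_prompt(question: str, reference_code: str, generated_code: str, metric: str) -> str:
    """Format a grading request for a judge LLM (illustrative template)."""
    return (
        f"You are grading generated code for {metric} on a 0-4 scale.\n"
        f"Problem statement:\n{question}\n\n"
        f"Reference code:\n{reference_code}\n\n"
        f"Generated code:\n{generated_code}\n\n"
        "Reply with 'Score: <0-4>'."
    )

def parse_score(judge_reply: str) -> int:
    """Extract the first 0-4 integer score from the judge's reply."""
    match = re.search(r"Score:\s*([0-4])", judge_reply)
    if match is None:
        raise ValueError("no score found in judge reply")
    return int(match.group(1))
```

Per-example scores from the judge are then averaged over the evaluation dataset to produce a single 0-4 figure per metric.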
You can read more about the ICE methodology in this paper.
Model Name | Usefulness (0-4) | Functional Correctness (0-4)
---|---|---
Interplay AppCoder | 2.968 | 2.476
Wizard Coder | 1.825 | 0.603
Can you try it?
Yes, we’ve opened it up. Try it out yourself right here:
- Can you provide a python script that uses the YOLOv8 model from the Ultralytics library to detect people in an image, draw green bounding boxes around them, and then save the image?
- Write a python code using langchain to do Question and Answering over a blog post.
- Write a python code using langchain library to retrieve information from SQL database and a vector store
- How can I set up clients for job service, model service, endpoint service, and prediction service using the Vertex AI client library in Python?