
Gemma SCAPPY QA adapter

Introduction

This is a QLoRA adapter fine-tuned from the Gemma 7B base model on the Vezora/Tested-22k-Python-Alpaca dataset.

It was trained for SCAPPY (Self-Critic Agent for Programming in Python), a Python code agent submitted to the Kaggle challenge Google-AI Assistants for Data Tasks with Gemma.

The SCAPPY QA adapter answers questions about Python programming.


Process of SCAPPY

An overview of the SCAPPY process is as follows (a pseudocode sketch appears after the list):

  1. First, input the instruction into the QA model and obtain the response from the QA model.
  2. Extract the code from the QA model's response and use exec to obtain the execution result of the code.
  3. Input the question, answer, and execution result into the QATAR model and acquire the thought, action, and revised answer information.
  4. If the action is 'fail', the QATAR model has determined there is an error in the original answer. In this case, set the revised answer as the new answer and execute it to obtain a new execution result. Re-enter the original question along with the newly obtained answer and execution result into the QATAR model. (This process may repeat up to 3 times.)
  5. If the action is 'pass', use the last obtained answer as the final response. If 'fail' has occurred three or more times, use the last derived revised answer as the final response.
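The loop above can be expressed compactly in code. The following is a minimal sketch only: `generate_qa`, `generate_qatar`, `extract_code`, and `run_code` are hypothetical helpers standing in for the two adapters, the code extraction, and the `exec`-based execution described in steps 1-3; none of them are part of this repository.

```python
import io
import re
import traceback
from contextlib import redirect_stdout

def extract_code(response: str) -> str:
    # Step 2: take the first fenced Python block from the model response.
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else response

def run_code(code: str) -> str:
    # Step 2: run the code with exec and capture stdout or the traceback.
    buffer = io.StringIO()
    try:
        with redirect_stdout(buffer):
            exec(code, {})
        return buffer.getvalue()
    except Exception:
        return buffer.getvalue() + traceback.format_exc()

def scappy(question: str, max_fails: int = 3) -> str:
    # generate_qa / generate_qatar are placeholders for calls to the QA and
    # QATAR adapters; they are not defined in this card.
    answer = generate_qa(question)               # step 1: QA model answers
    result = run_code(extract_code(answer))      # step 2: execute the code
    for _ in range(max_fails):
        # Step 3: QATAR returns thought, action, and a revised answer.
        thought, action, revised = generate_qatar(question, answer, result)
        if action == "pass":                     # step 5: accept the answer
            return answer
        answer = revised                         # step 4: retry with the
        result = run_code(extract_code(answer))  # revised answer
    return answer                                # 3 fails: last revised answer
```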

Usage

Example code for using the SCAPPY QA adapter is as follows:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_id = "google/gemma-7b"
peft_id_7b_qa = "gcw-ai/gemma-scappy-qa-adapter"

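# 4-bit NF4 quantization keeps the 7B base model small enough for a single GPU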
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(base_id)

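# Attach the SCAPPY QA LoRA adapter on top of the quantized base model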
model = PeftModel.from_pretrained(base_model, peft_id_7b_qa, adapter_name="qa")

instruction = f"""Write a function to add the given list to the given tuples.
Evaluate the following test cases with print.
add_lists([5, 6, 7], (9, 10)) == (9, 10, 5, 6, 7)
add_lists([6, 7, 8], (10, 11)) == (10, 11, 6, 7, 8)"""

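# Prompt format expected by the QA adapter (echoed in the example output below)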
qa_prompt = f"### Question\n{instruction}\n### Answer\n"
inputs = tokenizer(qa_prompt, return_tensors="pt").to("cuda:0")

outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The result of the example code is as follows:

### Question
Write a function to add the given list to the given tuples.
Evaluate the following test cases with print.
add_lists([5, 6, 7], (9, 10)) == (9, 10, 5, 6, 7)
add_lists([6, 7, 8], (10, 11)) == (10, 11, 6, 7, 8)
### Answer
Here is the function to add the given list to the given tuples:

```python
def add_lists(lst, tuples):
    return tuples + lst
```

And here are the test cases with print:

```python
print(add_lists([5, 6, 7], (9, 10)))
# Output: (9, 10, 5, 6, 7)

print(add_lists([6, 7, 8], (10, 11)))
# Output: (10, 11, 6, 7, 8)
```

The function `add_lists` takes two arguments: `lst` which is a list, and `tuples` which is a tuple. It returns the concatenation of the `tuples` and `lst`.
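Note that this particular answer is actually buggy: in Python, concatenating a tuple and a list with `+` raises a `TypeError`, which is exactly the kind of execution failure the QATAR revision loop in SCAPPY is meant to catch. A corrected version (shown here for illustration; this is not model output) converts the list to a tuple first:

```python
def add_lists(lst, tup):
    # tup + lst raises TypeError: can only concatenate tuple (not "list") to tuple
    return tup + tuple(lst)

print(add_lists([5, 6, 7], (9, 10)))   # (9, 10, 5, 6, 7)
print(add_lists([6, 7, 8], (10, 11)))  # (10, 11, 6, 7, 8)
```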