---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct
datasets:
- tomasonjo/text2cypher-gpt4o-clean
---

# Uploaded model

- **Developed by:** tomasonjo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

**For more information, visit [this link](https://github.com/neo4j-labs/text2cypher/tree/main/finetuning/unsloth-llama3#using-chat-prompt-template).**
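
For reference, the sketch below shows what such an Unsloth + TRL SFT run can look like. The hyperparameters and the assumption of a preformatted `text` column are illustrative, not the exact recipe used; see the link above for the actual finetuning code.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4bit and attach LoRA adapters (values are illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

# Assumes the examples were rendered into a single "text" column with the
# llama-3 chat template beforehand; the dataset's raw column names may differ.
dataset = load_dataset("tomasonjo/text2cypher-gpt4o-clean", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        output_dir = "outputs",
    ),
)
trainer.train()
```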

## Example usage

Install the dependencies. Check the [Unsloth documentation](https://github.com/unslothai/unsloth) for installation instructions specific to other environments.
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
```

Then you can load the model (filling in this repository's model id below) and use it for inference:

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Load the finetuned model; replace model_name with this repository's id.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "<this model's repo id>",
    max_seq_length = 2048,
    load_in_4bit = True,
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",
    map_eos_token = True,
)

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

schema = """Node properties:
- **Question**
  - `favorites`: INTEGER Example: "0"
  - `answered`: BOOLEAN
  - `text`: STRING Example: "### This is: Bug ### Specifications OS: Win10"
  - `link`: STRING Example: "https://stackoverflow.com/questions/62224586/playg"
  - `createdAt`: DATE_TIME Min: 2020-06-05T16:57:19Z, Max: 2020-06-05T21:49:16Z
  - `title`: STRING Example: "Playground is not loading with apollo-server-lambd"
  - `id`: INTEGER Min: 62220505, Max: 62224586
  - `upVotes`: INTEGER Example: "0"
  - `score`: INTEGER Example: "-1"
  - `downVotes`: INTEGER Example: "1"
- **Tag**
  - `name`: STRING Example: "aws-lambda"
- **User**
  - `image`: STRING Example: "https://lh3.googleusercontent.com/-NcFYSuXU0nk/AAA"
  - `link`: STRING Example: "https://stackoverflow.com/users/10251021/alexandre"
  - `id`: INTEGER Min: 751, Max: 13681006
  - `reputation`: INTEGER Min: 1, Max: 420137
  - `display_name`: STRING Example: "Alexandre Le"
Relationship properties:
The relationships:
(:Question)-[:TAGGED]->(:Tag)
(:User)-[:ASKED]->(:Question)"""
question = "Identify the top 5 questions with the most downVotes."

messages = [
    {"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
    {"role": "user", "content": f"""Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
{schema}

Question: {question}
Cypher query:"""},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # Must add for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
```
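
`batch_decode` returns the full sequence, prompt included. A small convenience, not part of the original example, is to decode only the newly generated tokens so you are left with just the Cypher statement:

```python
# Keep only the tokens generated after the prompt and decode them.
generated_ids = outputs[0][inputs.shape[-1]:]
cypher = tokenizer.decode(generated_ids, skip_special_tokens = True)
print(cypher)
```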