# Model Card for Llama-2-7b-chat-smile-finetune

## Model Description
This is a fine-tuned version of Llama-2-7b-chat, adapted to predict drug names from chemical descriptors. Given parameters such as an InChI string, a SMILES string, hydrogen-bond donor count (HBD), and logP, the model generates probable drug names.
## Model Details
- Developed by: Stanley Tulani Ndlovu
- Model type: Text Generation
- Language(s): English
- License: [Specify License Here]
- Finetuned from model: Llama-2-7b-chat
## Uses

### Direct Use
- Predicting drug names: Given a set of chemical parameters, the model predicts the name of a drug.
### Out-of-Scope Use
- The model should not be used for predicting drug effectiveness or other medical diagnoses.
## Bias, Risks, and Limitations
- Bias: The model may reflect biases present in its fine-tuning data, such as over-representation of well-known compounds.
- Limitations: Predictions are based on learned statistical patterns and may be inaccurate or unreliable, particularly for compounds outside the training distribution.
### Recommendations
Users should verify predictions with domain experts and use this model as a supplementary tool rather than a definitive source.
## How to Get Started with the Model
```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the fine-tuned model and its tokenizer
model_name = "webs911/Llama-2-7b-chat-smile-finetune"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)

def predict_drug_name(prompt: str) -> str:
    """Generate a drug-name prediction for a chemical-parameter prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
prompt = (
    "Predict the drug name given the following parameters:\n"
    "InChI: ...\n"
    "SMILES: ...\n"
    "HBD: ...\n"
    "logP: ..."
)
print(predict_drug_name(prompt))
```
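Fine-tuned models are often sensitive to the exact prompt template seen during training. The exact template used for this model is not documented here, so the helper below is a minimal sketch that mirrors the example prompt above; `format_drug_prompt` and the aspirin parameter values are illustrative assumptions, not part of the model's published interface.

```python
def format_drug_prompt(inchi: str, smiles: str, hbd: int, logp: float) -> str:
    """Assemble a chemical-parameter prompt matching the example format above."""
    return (
        "Predict the drug name given the following parameters:\n"
        f"InChI: {inchi}\n"
        f"SMILES: {smiles}\n"
        f"HBD: {hbd}\n"
        f"logP: {logp}"
    )

# Illustrative values only (acetylsalicylic acid / aspirin)
prompt = format_drug_prompt(
    inchi="InChI=1S/C9H8O4/c1-6(10)13-7-4-2-3-5-8(7)9(11)12/h2-5H,1H3,(H,11,12)",
    smiles="CC(=O)OC1=CC=CC=C1C(=O)O",
    hbd=1,
    logp=1.2,
)
print(prompt)
```

The resulting string can be passed directly to `predict_drug_name`. Keeping the field order and labels fixed avoids accidental prompt drift between experiments.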