import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "braindao/iq-code-evmind-v1-granite-8b-instruct"

# Load the tokenizer and the model, placing the weights on the GPU.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# A single-turn chat asking the model to write the ReviewHub Solidity contract.
chat = [
    {
        "role": "user",
        "content": "Create a smart contract to serve as a centralized review system called ReviewHub. This contract should allow users to submit and manage reviews for various products or services, rate them on a scale of 1 to 5, and provide detailed comments. It should include functionalities for assigning unique identifiers to products or services, storing and retrieving reviews, allowing users to edit or delete their reviews, calculating average ratings, and enabling an administrator to moderate content. The contract must incorporate robust security measures to ensure review integrity and prevent spam or malicious activity."
    },
]

# Render the conversation with the model's chat template, appending the assistant-turn prefix.
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# Tokenize the prompt and move the input tensors to the GPU.
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# Generate the contract; 4096 new tokens leaves room for a complete Solidity file.
output = model.generate(**input_tokens, max_new_tokens=4096)
# Decode and print the prompt together with the generated completion.
output = tokenizer.batch_decode(output)
for i in output:
    print(i)
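
As written, batch_decode prints the prompt together with the completion. If you only want the generated Solidity, a minimal variation is to slice the prompt tokens off the raw generate() output before decoding (the output_ids, prompt_length, and completion names below are ours, not from the model card):

# Keep the raw token IDs returned by generate().
output_ids = model.generate(**input_tokens, max_new_tokens=4096)
# Drop the prompt tokens so only the model's completion is decoded.
prompt_length = input_tokens["input_ids"].shape[1]
completion = tokenizer.batch_decode(output_ids[:, prompt_length:], skip_special_tokens=True)[0]
print(completion)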
Model weights: 8.05B parameters, FP16, stored in Safetensors format.
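With 8.05B parameters in FP16, the checkpoint weighs roughly 16 GB. If a single GPU is tight on memory, one sketch of an alternative load keeps the weights in half precision and lets them be spread across devices; torch_dtype and device_map="auto" are standard from_pretrained arguments (device_map="auto" needs the accelerate package), not a setup this model card prescribes:

import torch
from transformers import AutoModelForCausalLM

model_path = "braindao/iq-code-evmind-v1-granite-8b-instruct"
# Load directly in half precision and let accelerate place layers on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()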
