Safetensors
GGUF
English
chain-of-thought
cot-reasoning
step-by-step-reasoning
systematic-research-planning
academic-assistant
academic-planning
thesis-planning
dissertation-planning
research-question-formulation
literature-review-planning
methodology-design
experimental-design
qualitative-research-planning
quantitative-research-planning
mixed-methods-planning
student-research-assistant
phd-support
postgraduate-tool
early-career-researcher
grant-writing-assistant
research-proposal-helper
cross-disciplinary-research
interdisciplinary-methodology
academic-mentorship-tool
research-evaluation-assistant
independent-researcher-tool
r-and-d-assistant
reasoning-model
structured-output
systematic-analysis
problem-decomposition
research-breakdown
actionable-planning
scientific-research
social-science-research
humanities-research
medical-research-planning
engineering-research
business-research
mistral-based
mistral-fine-tune
lora-adaptation
foundation-model
instruction-tuned
7b-parameters
efficient-model
low-compute-requirement
ai-research-assistant
rag-compatible
research-automation
sota-research-planning
hypothesis-generation
experiment-design-assistant
literature-analysis
paper-outline-generator
structured-output-generation
systematic-reasoning
long-context
detailed-planning
zero-shot-planning
few-shot-learning
research-summarization
tree-of-thought
biomedical-research-assistant
clinical-trial-planning
tech-r-and-d
materials-science
computational-research
data-science-assistant
literature-synthesis
meta-analysis-helper
best-research-assistant-model
top-research-planning-model
research-ai-assistant
ai-research-mentor
academic-planning-ai
research-workflow-automation
Research-Reasoner-7B-v0.3
Research-Reasoner-7B
Research-Reasoner
conversational
Research-Reasoner-7B-v0.3 / Scripts / Inference_safetensors.py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Insert your research topic here
RESEARCH_TOPIC = """
"""

def load_model(model_path):
    # Load the model in half precision and spread it across available devices
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    return model, tokenizer

def generate_response(model, tokenizer, topic):
    topic = topic.strip()
    # Prompt format expected by the model: the research topic followed by a step-by-step cue
    prompt = f"USER: Research Topic: \"{topic}\"\nLet's think step by step:\nASSISTANT:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=2000,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.1,
        do_sample=True
    )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Return only the assistant's portion of the decoded text
    return response.split("ASSISTANT:")[-1].strip()

def main():
    model_path = "./"  # Path to the directory containing your model weight files
    model, tokenizer = load_model(model_path)
    result = generate_response(model, tokenizer, RESEARCH_TOPIC)
    print(result)

if __name__ == "__main__":
    main()
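
The tags also list GGUF as an available format. For running a GGUF quantization of the model, the sketch below uses llama-cpp-python with the same prompt format and sampling settings as the script above. This is a minimal sketch, not the repository's own GGUF script; the model filename (Research-Reasoner-7B-v0.3.Q4_K_M.gguf), context size, and GPU-layer setting are assumptions you should adjust to the file you actually download.

    # Minimal GGUF inference sketch (assumes llama-cpp-python is installed and a
    # GGUF file has been downloaded; the filename below is a placeholder).
    from llama_cpp import Llama

    RESEARCH_TOPIC = "Your research topic here"  # placeholder topic

    llm = Llama(
        model_path="./Research-Reasoner-7B-v0.3.Q4_K_M.gguf",  # assumed filename
        n_ctx=4096,         # context window; adjust to your hardware
        n_gpu_layers=-1     # offload all layers to GPU if available, 0 for CPU-only
    )

    # Same prompt format as Inference_safetensors.py
    prompt = f"USER: Research Topic: \"{RESEARCH_TOPIC.strip()}\"\nLet's think step by step:\nASSISTANT:"

    output = llm(
        prompt,
        max_tokens=2000,
        temperature=0.7,
        top_p=0.9,
        repeat_penalty=1.1
    )

    print(output["choices"][0]["text"].strip())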