Update README.md
README.md CHANGED
@@ -37,36 +37,8 @@ IdeaWhiz is a fine-tuned version of QwQ-32B-Preview, specifically optimized for
 
 ## Quickstart 🚀
 
-```
-from llama_cpp import Llama
-
-# Load the GGUF model
-model = Llama(
-    model_path="path/to/QwQ-32B-Preview-IdeaWhiz-v1-Q4_K_M.gguf",
-    n_ctx=4096,   # Context window
-    n_threads=8   # Adjust based on your CPU
-)
-
-# Define the prompt
-prompt = """I'll be submitting your next responses to a "Good Scientific Idea" expert review panel. If they consider your idea to be a good one, you'll receive a reward. Your assigned keyword is: "cancer". You may provide background information. The idea MUST be within 100 words (including background information). (Note: good scientific ideas should be novel, verifiable, practically valuable, and able to advance the field.). NOTE: You MUST give your answer after **Final Idea:**
-"""
-
-# Create chat message format
-messages = [
-    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
-    {"role": "user", "content": prompt}
-]
-
-# Generate response
-response = model.create_chat_completion(
-    messages=messages,
-    max_tokens=4096,
-    temperature=0.7,
-    top_p=0.95
-)
-
-# Print the response
-print(response['choices'][0]['message']['content'])
-```
+```bash
+ollama run 6cf/QwQ-32B-Preview-IdeaWhiz-v1
+```
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->