---
license: mit
widget:
- example_title: Question Answering!
  text: 'Please Answer the Question: what is depression?'
- example_title: Other Example!
  text: 'Please Answer the Question: How to bake a cake?'
- example_title: Other Example!
  text: "Please Answer the Question: I'm going through some things with my feelings and myself. I barely sleep and I do nothing but think about how I'm worthless and how I shouldn't be here. I've never tried or contemplated suicide. I've always wanted to fix my issues, but I never get around to it. How can I change my feeling of being worthless to everyone?"
inference:
  parameters:
    do_sample: true
    max_new_tokens: 250
datasets:
- databricks/databricks-dolly-15k
- VMware/open-instruct
---
# MaxMini-Instruct-248M
## Overview
MaxMini-Instruct-248M is a T5 (Text-To-Text Transfer Transformer) model instruction-fine-tuned on a variety of tasks. It is designed to follow a broad range of natural-language instructions.

## Model Details
- Model Name: MaxMini-Instruct-248M
- Model Type: T5 (Text-To-Text Transfer Transformer)
- Model Size: 248M parameters
- Training: instruction-tuned on databricks/databricks-dolly-15k and VMware/open-instruct
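
The 248M figure can be sanity-checked locally by summing the checkpoint's parameters. A minimal sketch, assuming the hosted weights load with `AutoModelForSeq2SeqLM`:

```python
# Sanity check of the model size (sketch; requires transformers + torch).
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/MaxMini-Instruct-248M")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # expected to print roughly 248M
```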
## Usage
### Installation
Install the required libraries from PyPI:
```bash
pip install transformers
pip install torch
```
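
Once installed, a quick way to smoke-test the model is the high-level `pipeline` API. The snippet below is a minimal sketch, not the canonical usage (the full example follows in the Inference section); it reuses the prompt prefix from the widget examples above:

```python
# Minimal smoke test with the high-level pipeline API.
from transformers import pipeline

# "text2text-generation" is the task tag for T5-style seq2seq models.
generator = pipeline("text2text-generation", model="suriya7/MaxMini-Instruct-248M")

# The prompt prefix mirrors the widget examples in this card.
result = generator(
    "Please Answer the Question: what is depression?",
    max_new_tokens=250,
    do_sample=True,
)
print(result[0]["generated_text"])
```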
### Inference
```python
# Load the tokenizer and model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("suriya7/MaxMini-Instruct-248M")
model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/MaxMini-Instruct-248M")

my_question = "what is depression?"
prompt = "Please answer to this question: " + my_question

# Tokenize the prompt into PyTorch tensors
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 250 new tokens (matches the widget's inference parameters)
generated_ids = model.generate(**inputs, max_new_tokens=250, do_sample=True)
decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(f"Generated Output: {decoded}")
```
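
With `do_sample=True`, each run can produce a different answer. For more reproducible output, sampling can be disabled in favor of beam search; the variant below is a sketch that reuses the `inputs` from the example above, with illustrative (not tuned) parameter values:

```python
# Deterministic alternative: disable sampling and use beam search.
generated_ids = model.generate(
    **inputs,
    max_new_tokens=250,
    do_sample=False,  # beam decoding instead of sampling
    num_beams=4,      # explore 4 beams and return the best sequence
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```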