rbattle committed
Commit: 4935bbe
Parent(s): dd31142

Update README.md

Files changed (1): README.md (+12 -11)
README.md CHANGED
@@ -40,13 +40,15 @@ from transformers import pipeline
 # Chose the model inference precision
 dtype = torch.float16 # options are torch.float16, torch.bfloat16, torch.float32
 
-model = pipeline(model="VMware/flan-ul2-alpaca-lora",device_map = 'auto',torch_dtype=dtype )
+model = pipeline(
+    model = "VMware/flan-ul2-alpaca-lora",
+    device_map = 'auto',
+    torch_dtype=dtype
+)
 
 prompt = "YOUR PROMPT HERE"
 
 output = model(prompt, max_length=2048, do_sample=True)
-
-
 ```
 
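For anyone trying the updated snippet outside the README, a self-contained version of the new usage block looks roughly like this. It is a sketch rather than part of the commit: it assumes `transformers` and `accelerate` are installed, and it falls back to float32 when no GPU is available, since float16 inference on CPU is generally not supported.

```python
import torch
from transformers import pipeline

# Choose the inference precision; float16/bfloat16 need a suitable GPU.
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# device_map='auto' relies on the accelerate package to place the weights.
model = pipeline(
    model="VMware/flan-ul2-alpaca-lora",
    device_map="auto",
    torch_dtype=dtype,
)

prompt = "YOUR PROMPT HERE"
output = model(prompt, max_length=2048, do_sample=True)
print(output)
```

Splitting the `pipeline(...)` call across lines, as the commit does, keeps the precision and device-placement arguments easy to scan.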
 
@@ -60,14 +62,17 @@ from transformers import pipeline
 dtype = torch.float16 # options are torch.bfloat16, torch.float32
 model = pipeline(model="VMware/flan-ul2-alpaca-lora",device_map = 'auto',torch_dtype=dtype )
 
-prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
-
-prompt = "YOUR PROMPT HERE"
-
-output = model(prompt_template.format(instruction= prompt), max_length=2048, do_sample=True)
-
-
-
+prompt_template = """
+Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+### Instruction:
+{instruction}
+
+### Response:"""
+
+prompt = "YOUR INSTRUCTION HERE"
+
+output = model(prompt_template.format(instruction=prompt), max_length=2048, do_sample=True)
 ```
 
 # Training Details
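The hunk above switches the Alpaca-style prompt template from an escaped single-line string to a triple-quoted block. To check what actually reaches the model, the template can be filled in and printed before generation; this is a small illustration, not part of the commit, and the instruction text is made up.

```python
prompt_template = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:"""

# Hypothetical instruction, only to show the assembled Alpaca-style prompt.
print(prompt_template.format(instruction="Summarize the plot of Hamlet in two sentences."))
```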
@@ -81,10 +86,6 @@ The model was trained on 3xV100 GPUs using PEFT-LORA and Deepspeed
 * epochs = 3
 
 
-```
-
-
-
 # Limitations and Bias
 
 The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
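The training section referenced in the last hunk mentions PEFT-LoRA and Deepspeed on 3xV100 GPUs but, beyond the epoch count, gives no adapter hyperparameters. For orientation, a LoRA setup for a T5/UL2-style seq2seq model is typically declared along these lines; the rank, alpha, dropout, target modules and base checkpoint below are assumptions for illustration, not values documented for VMware/flan-ul2-alpaca-lora.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the adapter's actual base is not stated in this diff.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2")

# Illustrative LoRA hyperparameters (not the documented training values).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections in T5/UL2 blocks
    task_type="SEQ_2_SEQ_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```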
 