---
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
metrics:
- accuracy
- speed
library_name: transformers
tags:
- coder
- text-generation
- transformers
- HelpingAI
license: mit
widget:
- text: |
    <|system|>
    You are a chatbot who can code!</s>
    <|user|>
    Write me a function to search for OEvortex on YouTube using the webbrowser module.</s>
    <|assistant|>
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain the working of AI to me.</s>
    <|assistant|>
model-index:
- name: HelpingAI-Lite
  results:
  - task:
      type: text-generation
    metrics:
    - name: Epoch
      type: Training Epoch
      value: 3
    - name: Eval Logits/Chosen
      type: Evaluation Logits for Chosen Samples
      value: -2.707406759262085
    - name: Eval Logits/Rejected
      type: Evaluation Logits for Rejected Samples
      value: -2.65652441978546
    - name: Eval Logps/Chosen
      type: Evaluation Log-probabilities for Chosen Samples
      value: -370.129670421875
    - name: Eval Logps/Rejected
      type: Evaluation Log-probabilities for Rejected Samples
      value: -296.073825390625
    - name: Eval Loss
      type: Evaluation Loss
      value: 0.513750433921814
    - name: Eval Rewards/Accuracies
      type: Evaluation Rewards and Accuracies
      value: 0.738095223903656
    - name: Eval Rewards/Chosen
      type: Evaluation Rewards for Chosen Samples
      value: -0.0274422804903984
    - name: Eval Rewards/Margins
      type: Evaluation Rewards Margins
      value: 1.008722543614307
    - name: Eval Rewards/Rejected
      type: Evaluation Rewards for Rejected Samples
      value: -1.03616464138031
    - name: Eval Runtime
      type: Evaluation Runtime
      value: 93.5908
    - name: Eval Samples
      type: Number of Evaluation Samples
      value: 2000
    - name: Eval Samples per Second
      type: Evaluation Samples per Second
      value: 21.37
    - name: Eval Steps per Second
      type: Evaluation Steps per Second
      value: 0.673
---

# HelpingAI-Lite
# Subscribe to my YouTube channel
[Subscribe](https://youtube.com/@OEvortex)

GGUF version [here](https://huggingface.co/OEvortex/HelpingAI-Lite-GGUF)

HelpingAI-Lite is a lightweight version of the HelpingAI model that can assist with coding tasks. It is trained on a diverse range of datasets and fine-tuned to provide accurate and helpful responses.
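
Judging from the widget examples above, prompts follow a Zephyr-style chat format (this sketch assumes that template; `apply_chat_template` in the usage code below produces it for you):

```
<|system|>
You are a chatbot who can help code!</s>
<|user|>
Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>
<|assistant|>
```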

## License

This model is licensed under MIT.

## Datasets

The model was trained on the following datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized

## Language

The model supports the English language.

## Usage

### CPU and GPU code

```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator
accelerator = Accelerator()

# Initialize the pipeline
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite", device=accelerator.device)

# Define the messages
messages = [
    {
        "role": "system",
        "content": "You are a chatbot who can help code!",
    },
    {
        "role": "user",
        "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.",
    },
]

# Prepare the prompt
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate predictions
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text
print(outputs[0]["generated_text"])
```