---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- EleutherAI/pile
language:
- en
- es
- ar
- fr
- fa
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
tags:
- code
---


This model uses task classification, and conversations are formatted as an exchange between a USER and an Assistant (AI).

# NOTE ⚠️



The JAX/Flax version of the model is available for both training and inference, and the model supports a context length of 3,300 tokens.


This model can be served with OST_UI; here is how to run it with just a few commands:

```shell
git clone https://github.com/erfanzar/OST-OpenSourceTransformers
cd OST-OpenSourceTransformers/
python3 OST_UI/app.py --model_id='erfanzar/chatLGeM' --num_gpus <NUMBER OF GPUS TO USE>
```

## Examples 🚀

```text
</s><|prompter|> TEXT </s><|assistant|>
```
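
For clarity, here is a tiny sketch of wrapping a user message in this prompt format (the helper name and the example question below are only illustrative, not part of the repository):

```python
def build_prompt(user_text: str) -> str:
    # Wrap the user message in the chat template shown above
    return f"</s><|prompter|> {user_text} </s><|assistant|>"

prompt = build_prompt("write a python code to check internet connection")
```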

Or simply open the [Google Colab notebook 🚀🚀](https://colab.research.google.com/drive/1nWS_FhWIDH3-g56F3FbWCIYi0ngVdWHx?usp=sharing)

### Generate method to stream the response text token by token

```python
import torch
from IPython.display import clear_output


def generate(model_, input_ids_, tokenizer_, max_length: int = 3300,
             temperature: float = 0.2, eos_token_id: int = 2):
    with torch.no_grad():
        before_start = len(input_ids_[0]) + 1
        for _ in range(max_length):
            out = model_(
                input_ids=input_ids_,
                return_dict=True,
            )
            # Sample the next token from the temperature-scaled distribution
            probs = torch.nn.functional.softmax(out.logits[:, -1, :] / temperature, dim=-1)
            next_token = torch.multinomial(probs, 1)
            input_ids_ = torch.cat([input_ids_, next_token], -1)
            # Print the generated text so far (everything after the prompt)
            clear_output(wait=True)
            print(f"\r{tokenizer_.decode(input_ids_[0], skip_special_tokens=True)[before_start:]}", end='')
            if next_token[0].item() == eos_token_id:
                break
            yield tokenizer_.decode(next_token[0], skip_special_tokens=True)
    return f"{tokenizer_.decode(input_ids_[0], skip_special_tokens=True)[before_start:]}"
```
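
A minimal usage sketch for the function above (assuming the checkpoint and tokenizer can be loaded through the `transformers` Auto classes; if the LGeM architecture is only registered in the OST/EasyDeL modules, load it from there instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the same repo id used with OST_UI also works with transformers
tokenizer = AutoTokenizer.from_pretrained("erfanzar/chatLGeM")
model = AutoModelForCausalLM.from_pretrained("erfanzar/chatLGeM", torch_dtype=torch.float16)
model.eval()

prompt = "</s><|prompter|> write a python code to check internet connection </s><|assistant|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

response = ""
for piece in generate(model, input_ids, tokenizer):
    response += piece  # generate() also prints the running text as it streams
```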


### Result 

```python
import socket
import time

def check_internet_connection():
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("www.google.com", 80))
        print("Internet connection is active.")
    except:
        print("Internet connection is not active.")

if __name__ == "__main__":

  check_internet_connection()
```


# Using the Model in OST


### LGeM 🚀

- What is LGeM? LGeM is a causal language model trained on self-instruct data (Alpaca data); for the initial training run of the main model (weights are available), pretrained weights from the open-source Alpaca LoRA were used as initialization.

- It is decoder-only.
- Implemented in both PyTorch and JAX.
- You can simply import the model classes (from the EasyDeL or OST library):

```python
# PyTorch
from modules import LGeMForCausalLM
# JAX
from modules import FlaxLGeMForCausalLM
```

- The training code is available in `jax_train.py` (see the source repository).
- Training parameters (an illustrative optimizer sketch follows the training command below):
  - Learning rate: 2e-5
  - Optimizer: AdamW
  - Batch size: 32
  - Hardware: TPU Pod
  - Training time: 50 hours
  - Budget: $500
```shell
python3 LGeM-train.py
```
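
For illustration only, here is a minimal sketch of what the listed hyperparameters could look like as an optimizer configuration in JAX (assuming the training loop in `jax_train.py` uses optax; the names below are hypothetical and not taken from the repository):

```python
import optax

# Hypothetical sketch mirroring the training parameters listed above;
# the actual setup lives in jax_train.py in the OST repository.
learning_rate = 2e-5
batch_size = 32

optimizer = optax.adamw(learning_rate=learning_rate)  # AdamW, as listed above

# Typical optax usage: initialise the optimizer state from the model parameters, e.g.
# opt_state = optimizer.init(params)
```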