---
license: gpl-3.0
datasets:
  - tatsu-lab/alpaca
  - yizhongw/self_instruct
  - anon8231489123/ShareGPT_Vicuna_unfiltered
  - NeelNanda/pile-10k
language:
  - en
  - es
  - ar
  - fr
  - fa
metrics:
  - accuracy
  - bleu
pipeline_tag: text-generation
---

This model uses task classification, and the conversation is between USER and the assistant (Answer/AI).

This model is a fine-tuned version of Kolla trained on LGeM data, with respect to the original authors, and with some changes to the data and the optimizers.

The model includes pre-trained weights, so it is licensed under GNU GPL v3.0, the same as the original LLaMA model.

## Using the Model with Hugging Face Transformers

Example prompt formats (the task tag selects the behavior):

```
CONVERSATION: USER: how can I start to work out more \n
Q&A: USER: how can I start to work out more \n
INFO: USER: how can I start to work out more \n
```
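For illustration, a hypothetical helper (not part of this repository) that assembles a prompt from a task tag and a user message in the format shown above:

```python
# Hypothetical helper (not part of this repo): build a prompt from a task tag
# ("CONVERSATION", "Q&A", or "INFO") and a user message, following the format above.
def build_prompt(task: str, message: str) -> str:
    return f"{task}: USER: {message} \n"

prompt = build_prompt("CONVERSATION", "how can I start to work out more")
```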
```python
from transformers import LlamaTokenizer, LlamaForCausalLM, pipeline
import torch
import textwrap

tokenizer = LlamaTokenizer.from_pretrained("erfanzar/LGeM-7B-MT")

model = LlamaForCausalLM.from_pretrained(
    'erfanzar/LGeM-7B-MT',
    load_in_8bit=True,
    device_map='auto',
    torch_dtype=torch.float16,
)

pipe_line = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=256,  # your max length here
    temperature=1,   # temperature (use 1 for good performance)
    top_p=0.95,
)

# Wrap each line of the generated text to 90 characters for readability.
def verify_text(txt):
    return '\n'.join(textwrap.fill(line, width=90) for line in txt.split('\n'))

with torch.no_grad():
    output = pipe_line('CONVERSATION: USER: code a program for me to check internet connection in python ? ')
    print(verify_text(output[0]['generated_text']))
```
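By default the text-generation pipeline returns the prompt together with the continuation, so a minimal sketch for getting only the model's answer (assuming the `pipe_line` and `verify_text` defined above) is to slice the prompt off:

```python
# Minimal sketch: reuse the pipeline with another task tag and strip the echoed prompt.
prompt = 'Q&A: USER: how can I start to work out more \n'
with torch.no_grad():
    output = pipe_line(prompt)
response = output[0]['generated_text'][len(prompt):]
print(verify_text(response))
```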

## Generate Method to Stream the Response Token by Token


```python
import torch
from IPython.display import clear_output  # for live-updating output in a notebook


def generate(model_, input_ids_, tokenizer_, max_length: int = 256,
             temperature: float = 1, eos_token_id: int = 2):
  with torch.no_grad():
    before_start = len(input_ids_[0]) + 1
    for _ in range(max_length):
      out = model_(
          input_ids=input_ids_,
          return_dict=True,
      )
      # Sample the next token from the temperature-scaled distribution.
      probs = torch.nn.functional.softmax(out.logits[:, -1, :] / temperature, dim=-1)
      next_token = torch.multinomial(probs, 1)
      input_ids_ = torch.cat([input_ids_, next_token], -1)
      clear_output(wait=True)
      print(f"\r{tokenizer_.decode(input_ids_[0], skip_special_tokens=True)[before_start:]}", end='')
      if next_token[0].item() == eos_token_id:
        break
      yield tokenizer_.decode(next_token[0], skip_special_tokens=True)
  return f"{tokenizer_.decode(input_ids_[0], skip_special_tokens=True)[before_start:]}"
```

## Result

```python
import socket
import time

def check_internet_connection():
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("www.google.com", 80))
        print("Internet connection is active.")
    except:
        print("Internet connection is not active.")

if __name__ == "__main__":
    check_internet_connection()
```

## Using the Model in OST

### LGeM 🚀

- What is LGeM? LGeM is a causal language model trained on self-instruct data (Alpaca data); the first training run of the main model was initialized from the open-source Alpaca LoRA pre-trained weights (the weights are available).
- It is decoder-only.
- It is built in PyTorch.
- You can simply import the model like this:

  ```python
  from modules import LGeMForCausalLM
  ```
- The training code is available in LGeM-Train.py (check the source repository).
- Training parameters (a minimal optimizer sketch follows below):
  - learning rate 1e-4
  - AdamW (weight decay 1e-2)
  - batch size 2
  - 4 x A100 80GB GPUs used for training
  - training time: 800 hours
  - budget: $760

```bash
python3 LGeM-train.py
```
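As referenced in the list above, a minimal sketch of the stated optimizer settings (illustrative only; the actual training code is in LGeM-Train.py):

```python
import torch

# Illustrative sketch of the optimizer settings listed above (not the actual
# training script; see LGeM-Train.py in the OST repository).
model = torch.nn.Linear(8, 8)  # stand-in for an LGeMForCausalLM instance

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,            # learning rate listed above
    weight_decay=1e-2,  # weight decay listed above
)
```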