
Model Description

This model is trained on a large physics dataset covering the topics "Special Relativity," "Dark Matter," "Black Holes," "Quantum Mechanics," "Plasma Physics," "Particle Physics," "Specific Theory," "Nuclear Physics," "Atomic Physics," "Quantum Field Theory," "Gravitational Waves," "Electromagnetism," and "Chaos Theory." It can answer physics questions under any of these headings.

You can access the model with the two code snippets below. I suggest loading and running the model locally with the first snippet: when you access it through the API with the second snippet, the Hugging Face site applies quantization and the responses are noticeably worse.

from transformers import AutoTokenizer, GenerationConfig, T5ForConditionalGeneration
import torch
from peft import PeftModel
import time
import sys

repo_id = "sabankara/einstein"
model_name = "google/flan-t5-small"

# The adapter repository ships the tokenizer, so load it from there.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Load the base model onto GPU 0, then attach the PEFT adapter weights.
peft_model_base = T5ForConditionalGeneration.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map={"": 0}
)
peft_model = PeftModel.from_pretrained(
    peft_model_base,
    repo_id,
    torch_dtype=torch.bfloat16,
    is_trainable=False,
)


# Print the response one character at a time, in color, for a typing effect.
def print_character_by_character(text, delay=0.005, color_code="\033[36m"):
    for char in text:
        sys.stdout.write(f"{color_code}{char}\033[0m")
        sys.stdout.flush()
        time.sleep(delay)

# Despite its name, this simply inserts a newline after every n words
# so long answers wrap neatly in the terminal.
def add_newline_after_punctuation(text, n=17):
    words = text.split()
    updated_text = ""

    for i, word in enumerate(words):
        updated_text += word + " "
        if (i + 1) % n == 0:
            updated_text += "\n"

    return updated_text.strip()

# Example
prompt = "What is a black hole?"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda(0)
model_outputs = peft_model.generate(
    input_ids=input_ids,
    generation_config=GenerationConfig(
        min_new_tokens=100, max_new_tokens=512, num_beams=1, do_sample=True,
        top_p=0.6, top_k=0, temperature=0.4, repetition_penalty=2.5,
    ),
)

model_text_output = tokenizer.decode(model_outputs[0], skip_special_tokens=True)

albert_response = add_newline_after_punctuation(model_text_output)

print("\033[32mEintein:\033[0m")
print_character_by_character(albert_response)
sys.stdout.write("\n")
The second snippet queries the model through the hosted Inference API:

import requests

API_URL = "https://api-inference.huggingface.co/models/sabankara/einstein"
headers = {"Authorization": "Bearer YOUR_READ_KEY"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
    
output = query({
    "inputs": "What is a black hole?",
})
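The API returns JSON; for text2text-generation models this is normally a list of the form [{"generated_text": "..."}], or an error dict while the model is still loading. A minimal, defensive way to read the reply under that assumption:

# Assumes the usual text2text-generation response shape; the shape is not
# guaranteed, so fall back to printing the raw JSON (e.g. an error message).
if isinstance(output, list) and output and "generated_text" in output[0]:
    print(output[0]["generated_text"])
else:
    print(output)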

Pre-Processing

Loading Dataset: The Hugging Face dataset is loaded using the specified name.

Creating a Pandas DataFrame: The "train" split of the dataset is converted to a Pandas DataFrame.

Selecting Specific Topics: Rows with specific topics related to physics are selected.

Editing Columns and Clearing NaN Values: Required columns are selected and rows with missing data are dropped.

Creating Ordered Question-Answer Pairs: Questions and answers are arranged in consecutive ordered pairs.

Tokenization: The texts in the dataset are converted into tokens.

Decoding: Tokens are converted back into readable text.

Creating a New DataFrame: The final DataFrame is created by adding new columns.

Converting to a Dataset Object: The Pandas DataFrame is converted to a Hugging Face Dataset object and split into training, test, and validation sets.

These operations prepare a Hugging Face dataset to be used as training data for the model.

In summary, during the data pre-processing stage the data was shifted and shuffled to build one-shot learning examples; the sketch below walks through the whole pipeline.
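The card does not name the dataset or its columns, so the following is only a minimal sketch of the steps above: the dataset id "some_user/physics_qa", the "topic"/"question"/"answer" column names, and the shift-by-one used to form the one-shot pairs are all assumptions for illustration, not the author's actual training code.

# Hypothetical sketch of the described pipeline; the dataset id and the
# column names are placeholders, not the real training code.
from datasets import Dataset, load_dataset

raw = load_dataset("some_user/physics_qa")      # 1. load the dataset by name
df = raw["train"].to_pandas()                   # 2. "train" split -> DataFrame

topics = ["Special Relativity", "Dark Matter", "Black Holes"]  # subset for brevity
df = df[df["topic"].isin(topics)]               # 3. keep selected physics topics
df = df[["question", "answer"]].dropna()        # 4. required columns, drop NaNs

# 5. shift by one row so each example carries the previous question-answer
# pair as a one-shot demonstration, then shuffle ("mix") the rows.
df = df.reset_index(drop=True)
df["example_question"] = df["question"].shift(1)
df["example_answer"] = df["answer"].shift(1)
df = df.dropna().sample(frac=1.0, random_state=42)

# 6. tokenize the texts with the tokenizer loaded earlier in this card
# (decoding back to text can be used to spot-check readability).
ds = Dataset.from_pandas(df, preserve_index=False)
tokenized = ds.map(lambda ex: tokenizer(ex["question"], truncation=True), batched=True)

# 7. split into training, test, and validation sets.
splits = tokenized.train_test_split(test_size=0.2, seed=42)
eval_splits = splits["test"].train_test_split(test_size=0.5, seed=42)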
