---
license: apache-2.0
datasets:
  - kaist-ai/CoT-Collection
metrics:
  - accuracy
pipeline_tag: text-generation
---

Model card for aiplanet/effi-13b

effi-13b is a 13B-parameter causal decoder-only model built by AiPlanet, based on Llama-2-13b-chat-hf and fine-tuned on the CoT-Collection dataset available in Hugging Face Datasets. It is made available under the Apache 2.0 license.


Why use effi-13b?

  • This is a ready-to-use chat/instruct model based on Llama-2-13b-chat-hf that provides a rationale for the context provided.
  • Llama-2 is one of the best open source models available. This is an instruct model, which may not be ideal for further fine-tuning. If you are interested in building your own instruct/chat model, we recommend starting from Llama-2-13b-chat-hf.

You will need at least 85-100 GB of memory to swiftly run inference with effi-13b.
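
If that much memory is not available, one common workaround is to load the weights in 4-bit with bitsandbytes, at some cost in output quality. The snippet below is a minimal sketch, assuming a recent transformers and bitsandbytes install; it is not part of the official instructions for this model.

# Sketch only: loading effi-13b in 4-bit to reduce the memory footprint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "aiplanet/effi-13b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("aiplanet/effi-13b")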

Model Details

Model Description

This model has been fine-tuned on Chain of Thought datasets, which contain context from mixed sources with corresponding rationales. The final fine-tuned Large Language Model (LLM) shows enhanced capabilities for solving novel tasks by providing reasoning.

  • Developed by: AiPlanet
  • Model type: Causal decoder-only
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model : Llama-2-13b-chat-hf

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

effi-13b has been fine-tuned on a Chain of Thought dataset and is intended for use as a chat/instruct model that provides a rationale for the context it is given.

Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

Bias, Risks, and Limitations

This model has been trained primarily on English data and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

Recommendations

We recommend users of effi-13b to develop guardrails and to take appropriate precautions for any production use.

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_card = "aiplanet/effi-13b"
#
model = AutoModelForCausalLM.from_pretrained(model_card)
tokenizer = AutoTokenizer.from_pretrained(model_card)
#
# build a text-generation pipeline around the loaded model and tokenizer
generate_text = pipeline(
    model=model, tokenizer=tokenizer,
    return_full_text=True,  # langchain expects the full text
    task='text-generation',
    # we pass model parameters here too
    temperature=0.4,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max
    max_new_tokens=512,  # max number of tokens to generate in the output
    repetition_penalty=1.1  # without this output begins repeating
)
#
# the user query: ask the model to explain a piece of code
prompt = """
Can you explain this code in detail?

def generate_stream(tokenizer, model, params, device,
                    context_len=2048, stream_interval=2):

    prompt = params["prompt"]
    l_prompt = len(prompt)
    temperature = float(params.get("temperature", 1.0))
    max_new_tokens = int(params.get("max_new_tokens", 256))
    stop_str = params.get("stop", None)

    input_ids = tokenizer(prompt).input_ids
    output_ids = list(input_ids)

    max_src_len = context_len - max_new_tokens - 8
    input_ids = input_ids[-max_src_len:]

    for i in range(max_new_tokens):
        if i == 0:
            out = model(
                torch.as_tensor([input_ids], device=device), use_cache=True)
            logits = out.logits
            past_key_values = out.past_key_values
        else:
            attention_mask = torch.ones(
                1, past_key_values[0][0].shape[-2] + 1, device=device)
            out = model(input_ids=torch.as_tensor([[token]], device=device),
                        use_cache=True,
                        attention_mask=attention_mask,
                        past_key_values=past_key_values)
            logits = out.logits
            past_key_values = out.past_key_values

        last_token_logits = logits[0][-1]

        if device == "mps":
            # Switch to CPU by avoiding some bugs in mps backend.
            last_token_logits = last_token_logits.float().to("cpu")

        if temperature < 1e-4:
            token = int(torch.argmax(last_token_logits))
        else:
            probs = torch.softmax(last_token_logits / temperature, dim=-1)
            token = int(torch.multinomial(probs, num_samples=1))

        output_ids.append(token)

        if token == tokenizer.eos_token_id:
            stopped = True
        else:
            stopped = False

        if i % stream_interval == 0 or i == max_new_tokens - 1 or stopped:
            output = tokenizer.decode(output_ids, skip_special_tokens=True)
            pos = output.rfind(stop_str, l_prompt)
            if pos != -1:
                output = output[:pos]
                stopped = True
            yield output

        if stopped:
            break

    del past_key_values
"""
#
system_message = "Given your chain of thought reasoning, provide a rationale for the context in the source."
prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{prompt}. [/INST]" # replace the command here with something relevant to your task
#
result = generate_text(prompt)
print(result[0]['generated_text'].strip().split("[/INST]")[-1])
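
Because return_full_text=True is set for LangChain compatibility, the same pipeline can also be wrapped as a LangChain LLM. The sketch below assumes the langchain package is installed and uses its HuggingFacePipeline wrapper; it is illustrative rather than an official recipe, and the query text is a made-up example.

# Sketch only: reusing the pipeline above through LangChain.
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = HuggingFacePipeline(pipeline=generate_text)

template = "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{query} [/INST]"
prompt_template = PromptTemplate(template=template, input_variables=["system_message", "query"])
chain = LLMChain(prompt=prompt_template, llm=llm)

print(chain.run(system_message=system_message,
                query="Explain chain of thought prompting in one paragraph."))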

Training Details

Training Data

effi-13b has been fine-tuned on the kaist-ai/CoT-Collection dataset (https://huggingface.co/datasets/kaist-ai/CoT-Collection). The data was tokenized with the meta-llama/Llama-2-13b-chat-hf tokenizer.
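
For reference, the sketch below shows one way the dataset and tokenizer could be loaded. The column names and formatting are assumptions; the exact preprocessing used for effi-13b is not documented here.

# Sketch only: loading the CoT-Collection data and the Llama-2 chat tokenizer.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("kaist-ai/CoT-Collection", split="train")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")

def tokenize(example):
    # "source" and "rationale" are assumed column names
    text = f"{example['source']}\n{example['rationale']}"
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = dataset.map(tokenize)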

Training Procedure

Fine-tuning approach using PEFT and QLoRA (https://huggingface.co/blog/4bit-transformers-bitsandbytes).

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime:

  LoRA configuration (PEFT):

  • lora_alpha = 32
  • lora_dropout = 0.05
  • r = 8
  • bias = "none"
  • task_type = "CAUSAL_LM"

  4-bit quantization (bitsandbytes):

  • load_in_4bit = True
  • bnb_4bit_quant_type = "nf4"
  • bnb_4bit_use_double_quant = True
  • bnb_4bit_compute_dtype = torch.bfloat16

  Trainer arguments:

  • num_train_epochs = 1
  • fp16 = False
  • bf16 = False
  • per_device_train_batch_size = 1
  • per_device_eval_batch_size = 1
  • gradient_accumulation_steps = 4
  • gradient_checkpointing = True
  • max_grad_norm = 0.3
  • learning_rate = 2e-4
  • weight_decay = 0.001
  • optim = "paged_adamw_32bit"
  • lr_scheduler_type = "constant"
  • max_steps = 500
  • warmup_ratio = 0.03
  • group_by_length = True
  • save_steps = 25
  • logging_steps = 5
  • max_seq_length = 2048
  • packing = False
  • device_map = {"": 0}
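
The sketch below shows how these hyperparameters would typically be assembled into a QLoRA run with PEFT, bitsandbytes and TRL's SFTTrainer. The actual training script for effi-13b has not been released, so treat this as illustrative; the dataset text column and output directory are assumptions.

# Illustrative only: the usual QLoRA setup using the hyperparameters above.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

peft_config = LoraConfig(
    lora_alpha=32, lora_dropout=0.05, r=8, bias="none", task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./effi-13b-checkpoints",  # assumed output directory
    num_train_epochs=1,
    fp16=False,
    bf16=False,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="constant",
    max_steps=500,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=25,
    logging_steps=5,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map={"": 0},
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")

train_dataset = load_dataset("kaist-ai/CoT-Collection", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="source",  # assumed column name
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=training_args,
    packing=False,
)
trainer.train()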

Evaluation

Paper coming soon.

See the Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation

@article{effi-13b,
  title={{effi-13b}: an open large language model with state-of-the-art performance},
  author={aiplanet},
  year={2023}
}

Model Card Contact

community@aiplanet.com