
Model Card for Llama2_Banker

I've fine-tuned a language model to be my virtual banker, tailored to understand financial nuances and navigate the intricacies of banking tasks.

Model Details

I've fine-tuned Llama 2 to function as my virtual banking assistant. This personalized model understands the intricacies of financial tasks, allowing me to instruct it across a range of banking activities. From transaction analysis to insights on investment opportunities, it has become my digital finance companion, making banking more efficient and tailored to my specific needs.

  • Finetuned from model: meta-llama/Llama-2-7b-chat-hf


Uses

The model is intended to assist users with natural language understanding, generation, and text-based applications, with an emphasis on banking and finance. Foreseeable users include developers, researchers, and businesses seeking advanced language processing capabilities. Its impact extends to those directly interacting with its outputs, as well as downstream users affected by applications that incorporate it. Transparency about the model's strengths, limitations, and potential biases is crucial for responsible and informed usage by all stakeholders.

How to Get Started with the Model

Use the code below to get started with the model.

Use a pipeline as a high-level helper

from transformers import pipeline

pipe = pipeline("text-generation", model="PiyushLavaniya/Llama2_Banker")
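A quick sanity check might look like this (the prompt is illustrative and the generation settings are assumptions, not values from the card):

prompt = "What factors should I consider before opening a fixed deposit account?"
result = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])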

Load model directly

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PiyushLavaniya/Llama2_Banker")
model = AutoModelForCausalLM.from_pretrained("PiyushLavaniya/Llama2_Banker")
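
Once loaded, the model can be prompted like any causal language model (the prompt and decoding settings below are illustrative):

inputs = tokenizer("Summarize the risks of margin trading for a retail customer.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))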

Training Details

Training Data

The model is fine-tuned on the ssbuild/alpaca_finance_en dataset, an English financial instruction-following corpus in the Alpaca format. Fine-tuning on this data is a deliberate customization for financial applications.
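
If the dataset is published on the Hugging Face Hub under that identifier (an assumption; it may instead be distributed through the ssbuild GitHub repository), it can be inspected with the datasets library:

from datasets import load_dataset

finance_data = load_dataset("ssbuild/alpaca_finance_en", split="train")  # assumes Hub availability
print(finance_data.column_names)  # expected Alpaca-style fields such as instruction/input/output
print(finance_data[0])            # look at a single training record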

Training Hyperparameters

  • Training regime: fp16 mixed precision with 8-bit Adam (adam_bits = 8)

from transformers import TrainingArguments

adam_bits = 8  # 8-bit Adam optimizer

training_arguments = TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    run_name=f"deb-v2-xl-{adam_bits}bitAdam",
    logging_steps=20,
    learning_rate=2e-4,
    fp16=True,
    max_grad_norm=0.3,
    max_steps=1200,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
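
For context, here is a minimal sketch of how these arguments could feed a LoRA-based supervised fine-tuning run with trl's SFTTrainer. The trainer choice, the LoRA settings, and the prompt template are assumptions; the actual training script was not published with this card:

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer

finance_data = load_dataset("ssbuild/alpaca_finance_en", split="train")  # as under "Training Data"

def to_text(example):
    # Hypothetical Alpaca-style prompt template; the actual formatting was not published.
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    return {"text": prompt + f"### Response:\n{example['output']}"}

finance_data = finance_data.map(to_text)

# Hypothetical LoRA adapter settings; the card does not state the ones actually used.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-chat-hf",
    train_dataset=finance_data,
    dataset_text_field="text",
    peft_config=peft_config,
    args=training_arguments,  # the TrainingArguments defined above
)
trainer.train()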
