
TenaliAI-FinTech-v1

This model was trained from scratch on a banking dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1350

Model description

This project is integral to the development of a Natural User Interface (NUI) for the Banking and Finance industry (BFSI).

The TenaliAI-FinTech model is specifically designed to tackle the intricate task of deciphering the intent behind customer queries in the BFSI sector.

The underlying technology behind TenaliAI-FinTech employs advanced natural language processing and machine learning algorithms. These technologies enhance the model's ability to accurately classify and understand the diverse range of customer queries. By leveraging sophisticated classification techniques, the model ensures a more precise interpretation of user intent, regardless of whether the query originates from the bank's net banking portal, mobile banking portal, or other communication channels.

Furthermore, the model excels in query tokenization, making it proficient in breaking down customer queries into meaningful components. This capability not only streamlines the processing of customer requests but also enables a more efficient and targeted response.

Ultimately, the technology powering TenaliAI-FinTech contributes to an enhanced customer service experience by providing quicker and more accurate responses to inquiries across multiple banking platforms.

Intended uses & limitations

This model is meant to generate an "intent" for a given customer query on a bank's net banking or mobile banking portal. The following is the list of intents:

```
{
    'add_beneficiary': 0,
    'balance_enquiry': 1,
    'beneficiary_details': 2,
    'bill_payment': 3,
    'block_card': 4,
    'bulk_payments': 5,
    'bulk_payments_status': 6,
    'change_contact_info': 7,
    'debit_card_details': 8,
    'delete_beneficiary': 9,
    'fd_details': 10,
    'fd_rate': 11,
    'fd_rate_large_amount': 12,
    'funds_transfer_other_bank': 13,
    'funds_transfer_own_account': 14,
    'funds_transfer_status': 15,
    'funds_transfer_third_party': 16,
    'gst_payment': 17,
    'investment_details': 18,
    'list_accounts': 19,
    'list_beneficiary': 20,
    'list_billers': 21,
    'list_fd': 22,
    'list_investments': 23,
    'list_loans': 24,
    'loan_details': 25,
    'nrv_details': 26,
    'open_account': 27,
    'pending_authorization': 28,
    'pin_change': 29,
    'raise_request': 30,
    'request_status': 31,
    'saving_interest_rate': 32,
    'send_money_abroad': 33,
    'ss_fd_rate': 34,
    'transaction_history': 35,
    'transaction_limit': 36,
    'update_beneficiary': 37
}
```
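For decoding predictions, the inverse of this mapping (label id → intent code) is what you typically need. A minimal sketch, showing only a few entries for brevity (the full 38-entry table is given above):

```python
# Intent-code → label-id mapping, as listed in this card
# (subset shown here for brevity).
intent2id = {
    'add_beneficiary': 0,
    'balance_enquiry': 1,
    'ss_fd_rate': 34,
    'transaction_history': 35,
    'update_beneficiary': 37,
}

# Invert it so a predicted label id can be turned back into an intent code.
id2intent = {v: k for k, v in intent2id.items()}

print(id2intent[34])  # -> ss_fd_rate
```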

How to use:

  1. Type a query such as
    • "Tell me my last 10 transactions"
    • "I am senior citizen. What is FD rates"
    • "I want to send money to my brother"
    • "I want Fixed Deposit rate for 2 Crore INR"
    • "What is the outstanding EMI of my loan"
    • "How many active loans do I have ?"
    • "I want to add a new beneficiary"
  2. The engine will infer the "intent" behind the query and return a score for each of LABEL_0 through LABEL_37.
  3. The LABEL with the maximum score (which will be at the top of the result) is the identified "intent".
  4. Use the mapping table above to convert the LABEL to its intent code. For example, LABEL_34 maps to 'ss_fd_rate' (Senior Citizen Fixed Deposit rate), and so on.
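The steps above can be sketched in code. The snippet below assumes the model's output has the usual Transformers text-classification shape, a list of `{'label': 'LABEL_<n>', 'score': ...}` dicts; the helper picks the top-scoring label and converts it back to an intent code (the model call itself is omitted, and only a subset of the mapping table is shown):

```python
# Subset of this card's label-id → intent mapping, for illustration.
ID2INTENT = {11: 'fd_rate', 34: 'ss_fd_rate', 35: 'transaction_history'}

def top_intent(results):
    """Return the intent code for the highest-scoring 'LABEL_<n>' entry.

    `results` is assumed to look like the output of a text-classification
    pipeline: [{'label': 'LABEL_34', 'score': 0.97}, ...].
    """
    best = max(results, key=lambda r: r['score'])
    label_id = int(best['label'].split('_')[1])  # 'LABEL_34' -> 34
    return ID2INTENT.get(label_id, f'unknown_{label_id}')

scores = [{'label': 'LABEL_34', 'score': 0.97},
          {'label': 'LABEL_11', 'score': 0.02}]
print(top_intent(scores))  # -> ss_fd_rate
```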

Training and evaluation data

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 222  | 2.3696          |
| No log        | 2.0   | 444  | 1.0432          |
| 2.3371        | 3.0   | 666  | 0.3821          |
| 2.3371        | 4.0   | 888  | 0.1731          |
| 0.3285        | 5.0   | 1110 | 0.1151          |
| 0.3285        | 6.0   | 1332 | 0.1089          |
| 0.0443        | 7.0   | 1554 | 0.1107          |
| 0.0443        | 8.0   | 1776 | 0.1083          |
| 0.0443        | 9.0   | 1998 | 0.1025          |
| 0.0153        | 10.0  | 2220 | 0.1048          |

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0
  • Tokenizers 0.19.1

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
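With a linear scheduler and no warmup, the learning rate decays from 2e-05 at step 0 to zero at the final step. A minimal sketch of that schedule, assuming zero warmup steps (total steps taken from the 20-epoch run's table: 20 × 229 = 4580):

```python
# Linear LR schedule with no warmup: lr falls linearly from base_lr
# at step 0 to zero at total_steps (matching lr_scheduler_type: linear).
def linear_lr(step, total_steps, base_lr=2e-05):
    return base_lr * max(0.0, 1.0 - step / total_steps)

total_steps = 20 * 229  # num_epochs * steps per epoch = 4580
print(linear_lr(0, total_steps))     # start of training: base_lr
print(linear_lr(2290, total_steps))  # halfway: base_lr / 2
print(linear_lr(4580, total_steps))  # end of training: 0.0
```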

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 229  | 1.9891          |
| No log        | 2.0   | 458  | 0.6549          |
| 2.1005        | 3.0   | 687  | 0.1826          |
| 2.1005        | 4.0   | 916  | 0.0937          |
| 0.2019        | 5.0   | 1145 | 0.0764          |
| 0.2019        | 6.0   | 1374 | 0.0788          |
| 0.0251        | 7.0   | 1603 | 0.0759          |
| 0.0251        | 8.0   | 1832 | 0.0758          |
| 0.0115        | 9.0   | 2061 | 0.0773          |
| 0.0115        | 10.0  | 2290 | 0.0777          |
| 0.0073        | 11.0  | 2519 | 0.0787          |
| 0.0073        | 12.0  | 2748 | 0.0805          |
| 0.0073        | 13.0  | 2977 | 0.0815          |
| 0.0053        | 14.0  | 3206 | 0.0816          |
| 0.0053        | 15.0  | 3435 | 0.0824          |
| 0.0041        | 16.0  | 3664 | 0.0838          |
| 0.0041        | 17.0  | 3893 | 0.0828          |
| 0.0035        | 18.0  | 4122 | 0.0836          |
| 0.0035        | 19.0  | 4351 | 0.0836          |
| 0.0031        | 20.0  | 4580 | 0.0837          |

Framework versions

  • Transformers 4.30.0
  • Pytorch 2.0.1
  • Datasets 2.12.0
  • Tokenizers 0.13.3

Model size

  • 110M params (F32, Safetensors)