
Model Introduction

The Analyst QA Model is an open-source model designed to generate queries and answers specific to financial analysis. It applies natural language processing to financial datasets and reports, mimicking the querying approach of a skilled financial analyst to surface key insights, metrics, and trends. The goal is to support detailed analysis by generating queries that facilitate a deeper understanding of financial performance, strategy, and market dynamics.

Key Features

  • Domain Expertise: Draws on domain-specific knowledge to generate queries aligned with financial analysis practice.
  • Contextual Understanding: Uses contextual understanding of financial metrics and trends to formulate relevant queries.
  • Comprehensive Query Generation: Generates queries that cover a range of aspects of financial data, including performance metrics, strategic insights, and market implications.

Model Details

  • Developed by: OnFinance AI
  • Usage: Query Generation
  • Finetuned from: Meta Llama-3-8b

Applications

The model is designed to support financial professionals in efficiently extracting actionable insights from large datasets and reports. By automating the query generation process, it enhances the analytical capabilities of users, enabling deeper and more informed decision-making based on thorough financial analysis.

How to Get Started with the Model

Use the code below to get started with the model.

import transformers
import torch

# Financial text to generate analyst-style queries for.
text_chunk = "any financial text chunk"
model_id = "OnFinanceAI/llama-3-8b-analyst-qa"

# Load the model as a text-generation pipeline, placing it on available
# devices automatically and loading the weights in bfloat16.
pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline(text_chunk)
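
The pipeline call returns a list of dictionaries with a "generated_text" field. The card does not specify recommended generation settings; as a sketch only, standard text-generation pipeline arguments can be passed to control output length and sampling (the values below are illustrative, not tuned recommendations):

# Illustrative settings only, not recommendations from the model authors.
outputs = pipeline(
    text_chunk,
    max_new_tokens=128,  # cap the length of the generated query
    do_sample=True,      # sample rather than greedy decoding
    temperature=0.7,
)
print(outputs[0]["generated_text"])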

Training Data

The Analyst QA Model was trained on a dataset of 5,000+ instances combining human-annotated and machine-generated queries, each based on a textual chunk related to financial analysis. The data was curated to cover a diverse range of queries on financial metrics, trends, and strategic insights, strengthening the model's ability to generate accurate, insightful queries tailored for financial professionals.
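
The exact dataset schema is not published. Purely as an illustration, a single chunk-to-query training instance might look like the following (the field names and values are hypothetical):

# Hypothetical training instance; field names are illustrative, not the
# actual dataset schema.
example_instance = {
    "text_chunk": "Q4 revenue grew 12% YoY, driven by volume gains in the retail segment.",
    "query": "What drove the 12% YoY revenue growth in Q4, and how sustainable are the underlying volume gains?",
    "source": "machine-generated",  # or "human-annotated"
}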

Training Hyperparameters

  • Training regime: float16
  • Optimizer: AdamW
  • Learning rate: 1e-5
  • Number of epochs: 4
  • Gradient accumulation steps: 2
  • Warmup steps: 10
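
As a non-authoritative sketch, these hyperparameters could be expressed with the Hugging Face Trainer roughly as follows; the actual training script, batch size, scheduler, and any LoRA or quantization settings are not published and are assumptions here:

from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; unspecified
# settings (output_dir, optimizer variant, batch size) are assumptions.
training_args = TrainingArguments(
    output_dir="llama-3-8b-analyst-qa",
    optim="adamw_torch",             # AdamW
    learning_rate=1e-5,
    num_train_epochs=4,
    gradient_accumulation_steps=2,
    warmup_steps=10,
    fp16=True,                       # "float16" training regime
)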

Evaluation

Blind testing was conducted to evaluate the quality of the queries generated by the model. Human evaluators were given a set of generated queries without knowing which model produced each one, and were asked to rate each query on a scale of 1 to 5 for relevance, clarity, and usefulness.

Testing Data, Factors & Metrics

Pre-Fine-tuned Model Results

Output: "What were volume sales made recently as per management commentary?"

Post-Fine-tuned Model Results

Output: "What is AALTO's return on equity (ROE) over the past 3-5 years, and how does it compare to the industry average and peer group?"

The queries were evaluated based on the following criteria:

  • Relevance to Financial Data: How relevant the query is to the provided financial data, including metrics, trends, and key performance indicators.
  • Clarity for Analysts: How clear and understandable the query is, ensuring it can be easily interpreted by financial analysts.
  • Usefulness for Insight Extraction: How useful the query is in extracting key insights, trends, and actionable information from the data.

Results

The following are the average scores obtained from the blind testing on a set of test queries:

Criteria                      Pre-Fine-tuned Model    Post-Fine-tuned Model
Relevance to Financial Data   4.2                     4.7
Clarity for Analysts          3.9                     4.6
Insight Extraction            4.0                     4.8

Overall, the fine-tuned model performed better, with an average score of 4.7 across the three criteria, compared with roughly 4.0 for the pre-fine-tuned model. In addition, human evaluators compared outputs on a set of 115 queries; for 79 of these, the query generated by the fine-tuned model was preferred over the original model's, a preference rate of approximately 68.7%.
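
As a quick illustrative check of the aggregates above:

# Reproduce the reported aggregates from the score table and preference counts.
pre_scores = [4.2, 3.9, 4.0]
post_scores = [4.7, 4.6, 4.8]
print(sum(pre_scores) / len(pre_scores))    # ≈ 4.03
print(sum(post_scores) / len(post_scores))  # ≈ 4.7
print(79 / 115)                             # ≈ 0.687 -> ~68.7% preference rate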

Model Card Contact

OnFinance AI
