Collin Heenan committed
Commit 6e8c6ac
1 Parent(s): 5b74554

Update README.md

Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: apache-2.0
+license: mit
 datasets:
 - AdiOO7/llama-2-finance
 language:
@@ -18,7 +18,7 @@ The LLama 2 7b language model, fine-tuned on a financial dataset, represents a s
 
 While a 7 billion parameter model like LLama 2 7b might be considered small compared to some of the gargantuan models available today, it still possesses significant capacity and can offer various benefits, especially when fine-tuned on a specific domain like finance.
 
-
+#
 
 ### Architecture and Size:
 The LLama 2 7b model, with its 7 billion parameters, harnesses a scaled-down yet potent architecture, providing a robust foundation for understanding and generating complex language structures. Despite being smaller than some colossal language models, it balances computational power and efficiency, ensuring credible natural language processing and generation while maintaining manageable computational demands.
@@ -31,4 +31,16 @@ The LLama 2 7b model, refined with a financial dataset, emerges as a specialized
 - **Model type:** Language Model (Transformer-Based)
 - **Language(s) (NLP):** English (and potentially other languages, depending on the finetuning dataset)
 - **License:** [MIT]
-- **Finetuned from model:** https://huggingface.co/NousResearch/Llama-2-7b-hf
+- **Finetuned from model:** https://huggingface.co/NousResearch/Llama-2-7b-hf
+
+# Intended Use
+This model is intended to assist with various tasks related to the finance domain, leveraging its finetuning on a finance-specific dataset. Potential applications might include:
+
+- **Financial Text Generation:** Generate finance-related text, reports, or summaries.
+- **Question Answering:** Answer questions related to financial terms, processes, or general finance-related inquiries.
+- **Sentiment Analysis:** Analyze financial news, reports, or user reviews to extract sentiments and opinions.
+- **Information Retrieval:** Extract specific financial information from given text or documents.
+
+# Limitations and Bias
+- **Data Bias:** The model might have biases based on the training data it was fine-tuned on. It may favor certain terminologies, expressions, or perspectives prevalent in the training data.
+- **Domain Limitation:** While specialized in finance, the model might lack in-depth understanding or accuracy in other domains.
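
As a companion to the "Intended Use" section added in this commit, here is a minimal usage sketch with Hugging Face transformers. The repo id `your-namespace/llama-2-7b-finance` is a placeholder (the commit does not show this model's actual repo id; only the base model, NousResearch/Llama-2-7b-hf, is linked), and the prompt and generation settings are illustrative.

```python
# Minimal sketch: load the fine-tuned model and generate finance-related text.
# "your-namespace/llama-2-7b-finance" is a placeholder repo id; substitute the
# actual id of this fine-tuned model before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/llama-2-7b-finance"  # placeholder, not the real id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example finance-domain prompt (question answering / text generation use case).
prompt = "Explain the difference between a stock's book value and its market value."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same load-and-generate pattern covers the other listed applications (sentiment analysis, information retrieval) by changing the prompt; only the repo id needs to be replaced with the real one.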