jake committed
Commit • 045c0d3
1 Parent(s): 6b3d634

Update README with configuration and project details
README.md
CHANGED
@@ -1,3 +1,15 @@
# FinWise AI π

FinWise AI is an AI-powered financial advisor built using the LLaMA 3 model from Meta and the Streamlit framework. This application provides users with financial insights and stock recommendations based on natural language queries.
@@ -77,3 +89,41 @@ For questions or comments about the model, please reach out through the model's
---

This project demonstrates the capabilities of the LLaMA 3 model from Meta and provides a foundation for building advanced financial advisory tools using AI.
+---
+title: FinWise AI
+emoji: π
+colorFrom: pink
+colorTo: pink
+sdk: streamlit
+sdk_version: 1.35.0
+app_file: app.py
+pinned: false
+license: mit
+---
+
# FinWise AI π

FinWise AI is an AI-powered financial advisor built using the LLaMA 3 model from Meta and the Streamlit framework. This application provides users with financial insights and stock recommendations based on natural language queries.
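The header added above is the Hugging Face Spaces configuration block that tells the platform which SDK to run and which file to launch. As an aside, such a `---`-delimited header can be read with a few lines of Python; this hand-rolled parser is purely illustrative (the Spaces platform uses its own loader):

```python
def parse_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a '---'-delimited README header."""
    lines = text.strip().splitlines()
    assert lines[0] == "---", "header must start with ---"
    end = lines[1:].index("---") + 1  # position of the closing delimiter
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

# A trimmed copy of the header above, for demonstration:
readme_header = """---
title: FinWise AI
sdk: streamlit
sdk_version: 1.35.0
app_file: app.py
---"""
config = parse_frontmatter(readme_header)
# config["sdk"] == "streamlit", config["app_file"] == "app.py"
```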
---

This project demonstrates the capabilities of the LLaMA 3 model from Meta and provides a foundation for building advanced financial advisory tools using AI.
+
+## Access and Usage Instructions
+
+To use the LLaMA 3 model, you must first request access on Hugging Face:
+
+1. **Visit the Model Page**: Go to the [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) page on Hugging Face.
+2. **Accept the License**: Read and accept the model license. Once approved, you will be granted access to all the LLaMA 3 models.
+3. **Download Weights**: After approval, download the weights using the following command:
+
+```bash
+huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
+```
+
+4. **Use the Model**: Load the model in your application as shown in the example:
+
+```python
+import transformers
+import torch
+
+model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
+
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model_id,
+    model_kwargs={"torch_dtype": torch.bfloat16},
+    device="cuda",
+)
+```
+
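Once loaded, a `text-generation` pipeline for an instruct model accepts chat-style message lists. A minimal sketch of how a user query might be wrapped before being passed to the pipeline; the `build_messages` helper and its system prompt are illustrative, not part of this repository:

```python
def build_messages(user_query: str) -> list[dict]:
    # Assemble a chat-style prompt for the text-generation pipeline.
    # The system prompt below is an illustrative placeholder; the actual
    # app may use different instructions.
    return [
        {"role": "system",
         "content": "You are FinWise AI, a helpful financial advisor."},
        {"role": "user", "content": user_query},
    ]

# With the pipeline object from the snippet above, one could then run:
# outputs = pipeline(build_messages("Is AAPL a good long-term hold?"),
#                    max_new_tokens=256)
# print(outputs[0]["generated_text"][-1]["content"])
```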
+## Issues and Feedback
+
+Please report any software bugs or other problems with the models through one of the following means:
+
+- **Reporting issues with the model**: [Meta-Llama GitHub Issues](https://github.com/meta-llama/llama3/issues)
+- **Reporting risky content generated by the model**: [Llama Output Feedback](https://developers.facebook.com/llama_output_feedback)
+- **Reporting bugs and security concerns**: [Facebook Whitehat](https://facebook.com/whitehat/info)
+
+For further details, see the MODEL_CARD.md and LICENSE files in the repository.