---
title: FinWise AI
emoji: π
colorFrom: pink
colorTo: pink
sdk: streamlit
sdk_version: 1.35.0
app_file: app.py
pinned: false
license: mit
---
# FinWise AI π
FinWise AI is an AI-powered financial advisor built using the LLaMA 3 model from Meta and the Streamlit framework. This application provides users with financial insights and stock recommendations based on natural language queries.
## Overview
FinWise AI is built on LLaMA 3, a state-of-the-art language model from Meta optimized for dialogue use cases. Users enter queries about stock market investments and receive detailed, AI-generated insights.
## Features
- **Natural Language Processing**: Understands and responds to user queries about stock market investments.
- **Real-Time Insights**: Provides up-to-date financial advice and stock recommendations.
- **Streamlit Integration**: Offers an interactive web-based interface for entering queries and displaying results.
- **Secure Handling of API Keys**: Uses Hugging Face's secrets management for secure handling of API tokens.
## How to Use
1. **Input Your Query**: Enter a natural language query in the text area provided. For example, "What are the best stocks to invest in today?"
2. **Get Insights**: Click on the "Get Financial Insights" button to receive detailed, AI-generated advice and stock recommendations.
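The flow behind these two steps is simple: read a query, send it to the model, render the reply. The sketch below is a minimal, hypothetical version of `app.py`, assuming the Space calls the hosted model through the Hugging Face Inference API; the actual implementation may differ.
```python
import os

import streamlit as st
from huggingface_hub import InferenceClient

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed model

# Token comes from the HF_TOKEN secret (see Installation below).
client = InferenceClient(model=MODEL_ID, token=os.environ.get("HF_TOKEN"))

st.title("FinWise AI")
query = st.text_area("Ask a question about stock market investments:")

if st.button("Get Financial Insights") and query:
    # Send the user's query to the hosted chat model and display the reply.
    response = client.chat_completion(
        messages=[{"role": "user", "content": query}],
        max_tokens=512,
    )
    st.write(response.choices[0].message.content)
```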
## Installation
To run this application locally, follow these steps:
1. **Clone the Repository**:
```bash
git clone https://huggingface.co/spaces/neuraldevx/FinWise-AI
cd FinWise-AI
```
2. **Set Up Environment Variables**:
For deployment on Hugging Face Spaces, add your Hugging Face token in the Space settings under the "Secrets" section with the name `HF_TOKEN`. For local runs, export an environment variable of the same name.
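On Spaces, secrets are exposed to the app as environment variables, so `app.py` can read the token like this (a minimal sketch):
```python
import os

# HF_TOKEN comes from the Spaces "Secrets" section (or a local export).
hf_token = os.environ["HF_TOKEN"]  # raises KeyError if the secret is missing
```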
3. **Install Dependencies**:
Install the dependencies listed in `requirements.txt`:
```bash
pip install -r requirements.txt
```
4. **Run the Application**:
```bash
streamlit run app.py
```
## Configuration
The application is configured using the following settings:
- **Title**: FinWise AI
- **Emoji**: π
- **Color From**: Pink
- **Color To**: Pink
- **SDK**: Streamlit
- **SDK Version**: 1.35.0
- **App File**: app.py
- **Pinned**: False
- **License**: MIT
Check out the configuration reference at [Hugging Face Spaces Config Reference](https://huggingface.co/docs/hub/spaces-config-reference).
## License
This project is licensed under the MIT License.
## Contributing
Contributions are welcome! Please fork the repository and submit a pull request.
## Contact
For questions or comments, please reach out through the Space's repository on Hugging Face.
---
This project demonstrates the capabilities of the LLaMA 3 model from Meta and provides a foundation for building advanced financial advisory tools using AI.
## Access and Usage Instructions
To use the LLaMA 3 model, you must first get access from Hugging Face:
1. **Visit the Model Page**: Go to the [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) page on Hugging Face.
2. **Accept the License**: Read and accept the model license. Once approved, you will be granted access to all the LLaMA 3 models.
3. **Download Weights**: Download the weights using the following command after approval:
```bash
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
```
4. **Use the Model**: Load the model in your application as shown in the example:
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Build a text-generation pipeline; bfloat16 roughly halves GPU memory use.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
```
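A hedged usage sketch: with a recent `transformers` release, the pipeline accepts chat-formatted messages directly and applies the Llama 3 chat template; the prompt content here is illustrative.
```python
messages = [
    {"role": "system", "content": "You are a concise financial assistant."},
    {"role": "user", "content": "What are the best stocks to invest in today?"},
]

outputs = pipeline(messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last message is the reply.
print(outputs[0]["generated_text"][-1]["content"])
```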
## Issues and Feedback
Please report any software bugs or other problems with the models through one of the following means:
- **Reporting issues with the model**: [Meta-Llama GitHub Issues](https://github.com/meta-llama/llama3/issues)
- **Reporting risky content generated by the model**: [Llama Output Feedback](https://developers.facebook.com/llama_output_feedback)
- **Reporting bugs and security concerns**: [Facebook Whitehat](https://facebook.com/whitehat/info)
For further details, see the MODEL_CARD.md and LICENSE files in the repository.