Space status: Sleeping

Sandrine Guétin committed · Commit 2106f78 · 0 Parent(s)

Clean version of DeepVest

This view is limited to 50 files because it contains too many changes.
- .ipynb_checkpoints/README-checkpoint.md +14 -0
- .ipynb_checkpoints/app-checkpoint.py +0 -0
- .ipynb_checkpoints/requirements-checkpoint.txt +0 -0
- .ipynb_checkpoints/setup-checkpoint.py +0 -0
- README.md +77 -0
- app.py +556 -0
- requirements.txt +51 -0
- setup.py +35 -0
- src/.ipynb_checkpoints/__init__-checkpoint.py +0 -0
- src/__init__.py +90 -0
- src/__pycache__/__init__.cpython-310.pyc +0 -0
- src/__pycache__/__init__.cpython-312.pyc +0 -0
- src/analysis/.ipynb_checkpoints/__init__-checkpoint.py +0 -0
- src/analysis/.ipynb_checkpoints/backtest_engine-checkpoint.py +0 -0
- src/analysis/.ipynb_checkpoints/enhanced_backtest-checkpoint.py +0 -0
- src/analysis/.ipynb_checkpoints/goal_simulator-checkpoint.py +0 -0
- src/analysis/.ipynb_checkpoints/ml_analyzer-checkpoint.py +0 -0
- src/analysis/.ipynb_checkpoints/scenario_manager-checkpoint.py +0 -0
- src/analysis/.ipynb_checkpoints/strategy_optimizer-checkpoint.py +0 -0
- src/analysis/__init__.py +29 -0
- src/analysis/__pycache__/__init__.cpython-310.pyc +0 -0
- src/analysis/__pycache__/backtest_engine.cpython-310.pyc +0 -0
- src/analysis/__pycache__/enhanced_backtest.cpython-310.pyc +0 -0
- src/analysis/__pycache__/goal_simulator.cpython-310.pyc +0 -0
- src/analysis/__pycache__/ml_analyzer.cpython-310.pyc +0 -0
- src/analysis/__pycache__/scenario_manager.cpython-310.pyc +0 -0
- src/analysis/__pycache__/strategy_optimizer.cpython-310.pyc +0 -0
- src/analysis/backtest_engine.py +2267 -0
- src/analysis/enhanced_backtest.py +228 -0
- src/analysis/goal_simulator.py +219 -0
- src/analysis/ml_analyzer.py +994 -0
- src/analysis/scenario_manager.py +205 -0
- src/analysis/strategy_optimizer.py +230 -0
- src/core/.ipynb_checkpoints/__init__-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/base-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/config-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/data_fetcher-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/data_generator-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/data_provider-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/event_types-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/life_events_manager-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/market_analyzer-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/market_states-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/portfolio_optimizer-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/profile_types-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/profiling-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/risk_manager-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/risk_metrics-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/technical_indicators-checkpoint.py +0 -0
- src/core/.ipynb_checkpoints/types-checkpoint.py +0 -0
.ipynb_checkpoints/README-checkpoint.md
ADDED
@@ -0,0 +1,14 @@
+---
+title: ProfilingAI
+emoji: 📉
+colorFrom: red
+colorTo: green
+sdk: streamlit
+sdk_version: 1.43.2
+app_file: app.py
+pinned: false
+license: mit
+short_description: 'Determine the investment profile '
+---
+
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
.ipynb_checkpoints/app-checkpoint.py
ADDED
File without changes

.ipynb_checkpoints/requirements-checkpoint.txt
ADDED
File without changes

.ipynb_checkpoints/setup-checkpoint.py
ADDED
File without changes
README.md
ADDED
@@ -0,0 +1,77 @@
+# DeepVest - Intelligent Investment Assistant
+
+DeepVest is a personalized AI that analyzes your personal, financial, and professional situation in real time to propose tailor-made investment strategies that adapt dynamically to your life events.
+
+## 🚀 Features
+
+- **Multidimensional analysis of your personal situation**
+  - Analysis of your personal, family, and professional profile
+  - Assessment of income, expenses, debts, and assets
+  - Consideration of ethical and lifestyle preferences
+
+- **Real-time personalized investment plan**
+  - Generation of dynamic, adaptive investment plans
+  - Recommendations on asset allocation (stocks, bonds, real estate, etc.)
+  - Automatic adjustment to life events (promotion, birth, relocation)
+
+- **Market analysis and automatic adjustment**
+  - Continuous monitoring of financial markets
+  - Adaptation to macroeconomic conditions
+  - Portfolio optimization for returns and risk management
+
+- **Life-goal simulator**
+  - Simulation of future events (home purchase, education, early retirement)
+  - Adjustment of investment plans to reach these goals
+
+- **Advice on traditional and alternative assets**
+  - Strategies for stocks, bonds, and real estate
+  - Integration of alternative assets (cryptocurrencies, art, etc.)
+
+## 📊 User interface
+
+DeepVest offers an intuitive, personalized interface for:
+- Creating and managing investor profiles
+- Tracking portfolio performance
+- Simulating financial goals
+- Visualizing market conditions in real time
+
+## 🛠️ Installation
+
+```bash
+pip install -r requirements.txt
+```
+
+## 🚀 Getting started
+
+```bash
+streamlit run app.py
+```
+
+## 🧠 Technologies used
+
+- **Backend**: Python, scikit-learn, TensorFlow/PyTorch, pandas
+- **Frontend**: Streamlit, React for advanced interface components
+- **Data analysis**: NumPy, Pandas, yfinance
+- **AI/ML**: Custom models for profile and market analysis
+
+## 📝 Usage examples
+
+1. **Create an investor profile**
+   - Fill in the form with your personal and financial information
+   - Receive a detailed analysis of your risk profile and goals
+
+2. **Analyze an existing portfolio**
+   - Enter the assets in your current portfolio
+   - Receive optimization recommendations
+
+3. **Simulate a financial goal**
+   - Define a goal (retirement, home purchase, etc.)
+   - Explore different scenarios for reaching it
+
+## 📄 License
+
+[MIT License](LICENSE)
+
+## 👥 Contributing
+
+Contributions are welcome! Feel free to open an issue or submit a pull request.
app.py
ADDED
@@ -0,0 +1,556 @@
+import streamlit as st
+import os
+import sys
+import asyncio
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+from datetime import datetime, timedelta
+from pathlib import Path
+
+# Add the src directory to the path for module imports
+current_dir = Path(__file__).parent.absolute()
+sys.path.append(str(current_dir))
+
+# Import the DeepVest modules
+try:
+    from src.core.profiling import DeepVestConfig, LLMProfileAnalyzer, DeepVestProfiler
+except ImportError:
+    st.error("Impossible d'importer les modules DeepVest. Vérifiez que le dossier src est correctement structuré.")
+
+    # Simplified fallback versions for the demo if the imports fail
+    class DeepVestConfig:
+        def __init__(self, **kwargs):
+            for key, value in kwargs.items():
+                setattr(self, key, value)
+
+    class LLMProfileAnalyzer:
+        def __init__(self, config):
+            self.config = config
+
+        async def analyze_profile(self, data):
+            return {
+                "risk_score": data.get("risk_tolerance", 3) / 5,
+                "investment_horizon": data.get("investment_horizon", 5),
+                "primary_goals": data.get("investment_goals", []),
+                "recommendations": [
+                    "Diversifier votre portefeuille",
+                    "Maintenir une épargne de sécurité",
+                    "Investir régulièrement"
+                ],
+                "explanation": "Analyse simulée pour la démonstration."
+            }
+
+    class DeepVestProfiler:
+        def __init__(self, config):
+            self.config = config
+            self.analyzer = LLMProfileAnalyzer(config)
+
+        async def create_profile(self, data):
+            analysis = await self.analyzer.analyze_profile(data)
+            profile = type('Profile', (), {
+                "id": "demo-profile",
+                "risk_tolerance": data.get("risk_tolerance", 3),
+                "investment_horizon": data.get("investment_horizon", 5),
+                "risk_score": analysis["risk_score"],
+                "investment_goals": data.get("investment_goals", []),
+                "llm_analysis_results": type('LLMResults', (), analysis)
+            })
+            return profile
+
+        async def generate_investment_strategy(self, profile):
+            risk_score = getattr(profile, "risk_score", 0.5)
+
+            # Allocation based on the risk score
+            stocks = risk_score
+            bonds = (1 - risk_score) * 0.8
+            cash = (1 - risk_score) * 0.2
+
+            allocation = {
+                'Actions': stocks,
+                'Obligations': bonds,
+                'Liquidités': cash
+            }
+
+            return {
+                "risk_profile": {
+                    "score": risk_score,
+                    "category": "Dynamique" if risk_score > 0.6 else "Modéré" if risk_score > 0.4 else "Conservateur"
+                },
+                "asset_allocation": allocation,
+                "recommendations": [
+                    "Diversifier votre portefeuille selon votre profil de risque",
+                    "Maintenir une épargne de sécurité",
+                    "Investir régulièrement"
+                ]
+            }
+
+# Configure the Streamlit page
+st.set_page_config(
+    page_title="DeepVest - IA d'Investissement Personnalisé",
+    page_icon="💼",
+    layout="wide",
+    initial_sidebar_state="expanded"
+)
+
+# Title and introduction
+st.title("DeepVest - Assistant d'Investissement Intelligent")
+st.markdown("""
+Cette plateforme analyse en temps réel votre situation personnelle, financière et professionnelle
+pour vous proposer une stratégie d'investissement sur mesure qui s'adapte dynamiquement aux événements de votre vie.
+""")
+
+# Initialize the session state if it does not exist yet
+if 'profile' not in st.session_state:
+    st.session_state.profile = None
+if 'analyzer' not in st.session_state:
+    config = DeepVestConfig(
+        debug_mode=True,
+        log_prompts=True,
+        db_path="profiles_db"
+    )
+    st.session_state.analyzer = LLMProfileAnalyzer(config)
+    st.session_state.profiler = DeepVestProfiler(config)
+
+# Tab navigation
+tabs = st.tabs(["Profil Investisseur", "Analyse de Portefeuille", "Simulation d'Objectifs", "Marché en Temps Réel"])
+
+with tabs[0]:
+    st.header("Votre Profil Investisseur")
+
+    # Profile form
+    with st.form("profile_form"):
+        col1, col2 = st.columns(2)
+
+        with col1:
+            st.subheader("Informations personnelles")
+            age = st.number_input("Âge", min_value=18, max_value=100, value=30)
+            annual_income = st.number_input("Revenu annuel (€)", min_value=0, value=50000)
+            monthly_savings = st.number_input("Épargne mensuelle (€)", min_value=0, value=500)
+            family_status = st.selectbox("Situation familiale", ["Célibataire", "Marié(e)", "Divorcé(e)", "Veuf/Veuve"])
+            dependents = st.number_input("Personnes à charge", min_value=0, max_value=10, value=0)
+
+        with col2:
+            st.subheader("Profil d'investissement")
+            risk_tolerance = st.slider("Tolérance au risque", min_value=1, max_value=5, value=3,
+                                       help="1: Très prudent, 5: Très dynamique")
+            investment_horizon = st.slider("Horizon d'investissement (années)", min_value=1, max_value=30, value=10)
+            financial_knowledge = st.slider("Connaissances financières", min_value=1, max_value=5, value=3,
+                                            help="1: Débutant, 5: Expert")
+            investment_goals = st.multiselect("Objectifs d'investissement",
+                                              ["Retraite", "Achat immobilier", "Études des enfants",
+                                               "Épargne générale", "Revenus passifs", "Croissance patrimoine"])
+            esg_preferences = st.checkbox("Préférences ESG (investissement responsable)")
+
+        submitted = st.form_submit_button("Créer mon profil")
+
+        if submitted:
+            with st.spinner("Analyse de votre profil en cours..."):
+                # Prepare the profile data
+                profile_data = {
+                    "age": age,
+                    "annual_income": annual_income,
+                    "monthly_savings": monthly_savings,
+                    "marital_status": family_status,
+                    "number_of_dependents": dependents,
+                    "risk_tolerance": risk_tolerance,
+                    "investment_horizon": investment_horizon,
+                    "financial_knowledge": financial_knowledge,
+                    "investment_goals": investment_goals,
+                    "esg_preferences": esg_preferences,
+                    "total_assets": 0,  # To be completed later
+                    "total_debt": 0,  # To be completed later
+                }
+
+                # Create the profile
+                async def create_profile():
+                    return await st.session_state.profiler.create_profile(profile_data)
+
+                # Run the async function
+                loop = asyncio.new_event_loop()
+                asyncio.set_event_loop(loop)
+                try:
+                    st.session_state.profile = loop.run_until_complete(create_profile())
+                    st.success("Profil créé avec succès!")
+                except Exception as e:
+                    st.error(f"Erreur lors de la création du profil: {str(e)}")
+                finally:
+                    loop.close()
+
+    # Display the profile if available
+    if st.session_state.profile:
+        profile = st.session_state.profile
+
+        st.subheader("Analyse de votre profil")
+
+        col1, col2, col3 = st.columns(3)
+        with col1:
+            st.metric("Score de risque", f"{profile.risk_score:.2f}/1.00")
+        with col2:
+            st.metric("Horizon recommandé", f"{profile.investment_horizon} ans")
+        with col3:
+            st.metric("Capacité d'investissement", f"{monthly_savings} €/mois")
+
+        # Display the recommendations if available
+        if hasattr(profile, 'llm_analysis_results') and hasattr(profile.llm_analysis_results, 'recommendations'):
+            st.subheader("Recommandations personnalisées")
+            for i, rec in enumerate(profile.llm_analysis_results.recommendations[:5], 1):
+                st.write(f"{i}. {rec}")
+
+        # Generate the investment strategy
+        if st.button("Générer ma stratégie d'investissement"):
+            with st.spinner("Génération de la stratégie en cours..."):
+                async def generate_strategy():
+                    return await st.session_state.profiler.generate_investment_strategy(profile)
+
+                loop = asyncio.new_event_loop()
+                asyncio.set_event_loop(loop)
+                try:
+                    strategy = loop.run_until_complete(generate_strategy())
+
+                    # Display the strategy
+                    st.subheader("Stratégie d'investissement recommandée")
+
+                    # Display the asset allocation
+                    st.write("**Allocation d'actifs recommandée:**")
+
+                    # Prepare the data for the chart
+                    labels = list(strategy['asset_allocation'].keys())
+                    sizes = [round(v * 100, 1) for v in strategy['asset_allocation'].values()]
+
+                    # Create the chart
+                    fig, ax = plt.subplots(figsize=(8, 6))
+                    wedges, texts, autotexts = ax.pie(sizes, autopct='%1.1f%%',
+                                                      textprops={'fontsize': 9, 'weight': 'bold'})
+                    ax.axis('equal')
+                    ax.legend(wedges, labels, loc="center left", bbox_to_anchor=(1, 0, 0.5, 1))
+                    st.pyplot(fig)
+
+                    # Display the recommendations
+                    st.write("**Recommandations clés:**")
+                    for i, rec in enumerate(strategy.get('recommendations', []), 1):
+                        st.write(f"{i}. {rec}")
+                except Exception as e:
+                    st.error(f"Erreur lors de la génération de la stratégie: {str(e)}")
+                finally:
+                    loop.close()
+
+with tabs[1]:
+    st.header("Analyse de Portefeuille")
+
+    # Simulate an existing portfolio
+    st.subheader("Votre portefeuille actuel")
+
+    # Example portfolio for demonstration
+    with st.expander("Ajouter des actifs à votre portefeuille"):
+        with st.form("portfolio_form"):
+            asset_type = st.selectbox("Type d'actif", ["Actions", "Obligations", "ETF", "Fonds", "Immobilier", "Autres"])
+            asset_name = st.text_input("Nom/Ticker")
+            asset_value = st.number_input("Valeur (€)", min_value=0.0, value=1000.0)
+            add_asset = st.form_submit_button("Ajouter")
+
+    # Display an example portfolio
+    portfolio_data = {
+        "Actions": {"AAPL": 5000, "MSFT": 3000, "GOOGL": 4000},
+        "ETF": {"VWCE": 10000, "AGGH": 5000},
+        "Liquidités": {"EUR": 2000}
+    }
+
+    st.write("Portefeuille actuel:")
+
+    # Build a DataFrame for display
+    portfolio_df = []
+    for category, assets in portfolio_data.items():
+        for asset, value in assets.items():
+            portfolio_df.append({"Catégorie": category, "Actif": asset, "Valeur (€)": value})
+
+    portfolio_df = pd.DataFrame(portfolio_df)
+    st.dataframe(portfolio_df)
+
+    # Compute the portfolio statistics
+    total_value = portfolio_df["Valeur (€)"].sum()
+    st.metric("Valeur totale du portefeuille", f"{total_value:,.2f} €")
+
+    # Allocation chart
+    st.subheader("Répartition du portefeuille")
+
+    # By category
+    category_allocation = portfolio_df.groupby("Catégorie")["Valeur (€)"].sum()
+    category_allocation_pct = category_allocation / total_value * 100
+
+    fig, ax = plt.subplots(figsize=(8, 6))
+    wedges, texts, autotexts = ax.pie(category_allocation_pct, autopct='%1.1f%%',
+                                      textprops={'fontsize': 9, 'weight': 'bold'})
+    ax.axis('equal')
+    ax.legend(wedges, category_allocation.index, loc="center left", bbox_to_anchor=(1, 0, 0.5, 1))
+    st.pyplot(fig)
+
+    # Simulated performance analysis
+    st.subheader("Performance historique simulée")
+
+    # Simulated data for demonstration
+    dates = pd.date_range(start=datetime.now() - timedelta(days=365), end=datetime.now(), freq='D')
+    performance = 100 * (1 + np.cumsum(np.random.normal(0.0003, 0.01, size=len(dates))))
+    benchmark = 100 * (1 + np.cumsum(np.random.normal(0.0002, 0.008, size=len(dates))))
+
+    performance_df = pd.DataFrame({
+        'Date': dates,
+        'Portefeuille': performance,
+        'Benchmark': benchmark
+    })
+
+    st.line_chart(performance_df.set_index('Date'))
+
+    # Optimization recommendations
+    st.subheader("Recommandations d'optimisation")
+
+    if st.session_state.profile:
+        st.write("Basé sur votre profil, nous recommandons les ajustements suivants:")
+
+        st.markdown("""
+        1. **Rééquilibrage recommandé**: Réduire l'exposition aux actions technologiques
+        2. **Diversification géographique**: Augmenter l'exposition aux marchés émergents
+        3. **Allocation tactique**: Augmenter la part des obligations face aux incertitudes actuelles
+        """)
+    else:
+        st.info("Créez d'abord votre profil d'investisseur pour obtenir des recommandations personnalisées.")
+
+with tabs[2]:
+    st.header("Simulation d'Objectifs de Vie")
+
+    st.subheader("Définissez vos objectifs financiers")
+
+    # Goal selection
+    goal_type = st.selectbox("Type d'objectif", [
+        "Retraite",
+        "Achat immobilier",
+        "Études des enfants",
+        "Création d'entreprise",
+        "Voyage/Sabbatique",
+        "Achat important"
+    ])
+
+    # Goal configuration
+    col1, col2 = st.columns(2)
+
+    with col1:
+        amount = st.number_input("Montant cible (€)", min_value=0, value=100000)
+        years = st.slider("Horizon (années)", min_value=1, max_value=40, value=10)
+
+    with col2:
+        monthly_contribution = st.number_input("Contribution mensuelle (€)", min_value=0, value=500)
+        initial_capital = st.number_input("Capital initial (€)", min_value=0, value=10000)
+
+    # Run the simulation
+    if st.button("Simuler l'atteinte de l'objectif"):
+        if st.session_state.profile:
+            risk_profile = st.session_state.profile.risk_score
+        else:
+            risk_profile = 0.5  # Default value
+
+        # Estimate the expected return from the risk profile
+        expected_return = 0.02 + risk_profile * 0.08  # Between 2% and 10%
+
+        # Simulation
+        periods = years * 12
+        future_value_formula = initial_capital * (1 + expected_return/12) ** periods
+        contribution_future_value = monthly_contribution * ((1 + expected_return/12) ** periods - 1) / (expected_return/12)
+        total_future_value = future_value_formula + contribution_future_value
+
+        # Display the results
+        success_rate = min(100, total_future_value / amount * 100)
+
+        col1, col2, col3 = st.columns(3)
+
+        with col1:
+            st.metric("Montant projeté", f"{total_future_value:,.2f} €")
+        with col2:
+            st.metric("Objectif", f"{amount:,.2f} €")
+        with col3:
+            st.metric("Taux de réussite", f"{success_rate:.1f}%")
+
+        # Progress chart
+        st.subheader("Projection de votre épargne dans le temps")
+
+        # Data for the chart
+        months = range(0, periods + 1)
+        cumulative_values = []
+
+        for t in months:
+            value = initial_capital * (1 + expected_return/12) ** t
+            contrib = monthly_contribution * ((1 + expected_return/12) ** t - 1) / (expected_return/12) if expected_return > 0 else monthly_contribution * t
+            cumulative_values.append(value + contrib)
+
+        # Build a DataFrame for the chart
+        projection_df = pd.DataFrame({
+            'Mois': months,
+            'Valeur Projetée': cumulative_values,
+            'Objectif': [amount] * len(months)
+        })
+
+        # Chart
+        st.line_chart(projection_df.set_index('Mois'))
+
+        # Recommendations
+        st.subheader("Recommandations pour atteindre votre objectif")
+
+        if total_future_value < amount:
+            shortfall = amount - total_future_value
+            increased_contribution = monthly_contribution * amount / total_future_value
+
+            st.warning(f"Avec les paramètres actuels, vous n'atteindrez pas complètement votre objectif. Il manquera environ {shortfall:,.2f} €.")
+            st.markdown(f"""
+            Pour atteindre votre objectif, vous pourriez:
+            1. **Augmenter votre contribution mensuelle** à environ {increased_contribution:.2f} €
+            2. **Allonger votre horizon d'investissement** de quelques années
+            3. **Ajuster votre profil de risque** pour viser un rendement plus élevé
+            """)
+        else:
+            surplus = total_future_value - amount
+            reduced_contribution = monthly_contribution * amount / total_future_value
+
+            st.success(f"Félicitations ! Vous êtes sur la bonne voie pour atteindre votre objectif, avec un surplus potentiel de {surplus:,.2f} €.")
+            st.markdown(f"""
+            Options à considérer:
+            1. **Réduire votre contribution mensuelle** à environ {reduced_contribution:.2f} € tout en atteignant votre objectif
+            2. **Augmenter votre objectif** pour mettre à profit votre capacité d'épargne
+            3. **Réduire votre profil de risque** pour une approche plus conservatrice
+            """)
+
+with tabs[3]:
+    st.header("Marché en Temps Réel")
+
+    # Market dashboard
+    st.subheader("Tableau de bord du marché")
+
+    # Main indices (simulated data)
+    col1, col2, col3, col4 = st.columns(4)
+
+    with col1:
+        st.metric("S&P 500", "4,782.21", "+0.42%")
+    with col2:
+        st.metric("CAC 40", "7,596.91", "-0.15%")
+    with col3:
+        st.metric("DAX", "16,752.23", "+0.21%")
+    with col4:
+        st.metric("Nikkei 225", "33,408.39", "-0.54%")
+
+    # Market conditions
+    st.subheader("Conditions de marché actuelles")
+
+    # Simulated data for demonstration
+    market_sentiment = 0.65  # 0 to 1
+    volatility_level = 0.4  # 0 to 1
+    market_regime = "Bull Market"  # Bull/Bear/Neutral
+
+    col1, col2, col3 = st.columns(3)
+
+    with col1:
+        st.write("**Sentiment de marché**")
+        sentiment_color = "green" if market_sentiment > 0.5 else "red"
+        st.markdown(f"<h3 style='text-align: center; color: {sentiment_color};'>{market_sentiment:.0%}</h3>", unsafe_allow_html=True)
+
+    with col2:
+        st.write("**Niveau de volatilité**")
+        volatility_color = "red" if volatility_level > 0.5 else "green"
+        st.markdown(f"<h3 style='text-align: center; color: {volatility_color};'>{volatility_level:.0%}</h3>", unsafe_allow_html=True)
+
+    with col3:
+        st.write("**Régime de marché**")
+        regime_color = "green" if market_regime == "Bull Market" else "red" if market_regime == "Bear Market" else "orange"
+        st.markdown(f"<h3 style='text-align: center; color: {regime_color};'>{market_regime}</h3>", unsafe_allow_html=True)
+
+    # Asset analysis
+    st.subheader("Analyse des actifs en temps réel")
+
+    # Asset selection
+    asset_to_analyze = st.selectbox("Sélectionnez un actif à analyser", ["AAPL", "MSFT", "GOOGL", "AMZN", "TSLA", "VWCE", "AGGH"])
+
+    if asset_to_analyze:
+        # Simulated data for demonstration
+        asset_data = {
+            "price": 178.92,
+            "change": 0.87,
+            "change_percent": 0.49,
+            "volume": 58234567,
+            "pe_ratio": 28.5,
+            "dividend_yield": 0.51,
+            "market_cap": "2.82T",
+            "52w_high": 199.62,
+            "52w_low": 124.17
+        }
+
+        # Display the data
+        col1, col2, col3 = st.columns(3)
+
+        with col1:
+            st.metric("Prix", f"${asset_data['price']}", f"{asset_data['change_percent']}%")
+            st.metric("Volume", f"{asset_data['volume']:,}")
+
+        with col2:
+            st.metric("P/E Ratio", f"{asset_data['pe_ratio']}")
+            st.metric("Rendement du dividende", f"{asset_data['dividend_yield']}%")
+
+        with col3:
+            st.metric("Capitalisation", asset_data['market_cap'])
+            st.metric("52 semaines", f"${asset_data['52w_low']} - ${asset_data['52w_high']}")
+
+        # Price chart (simulated data)
+        st.subheader(f"Évolution du prix de {asset_to_analyze}")
+
+        # Simulated data for the chart
+        dates = pd.date_range(start=datetime.now() - timedelta(days=90), end=datetime.now(), freq='D')
+        prices = asset_data['price'] * (1 + np.cumsum(np.random.normal(0, 0.015, size=len(dates))))
+        volumes = np.random.randint(30000000, 90000000, size=len(dates))
+
+        asset_df = pd.DataFrame({
+            'Date': dates,
+            'Prix': prices,
+            'Volume': volumes
+        })
+
+        # Price chart
+        st.line_chart(asset_df.set_index('Date')['Prix'])
+
+        # Recommendations
+        st.subheader("Recommandations")
+
+        if st.session_state.profile:
+            # Personalized recommendations based on the profile
+            if st.session_state.profile.risk_score > 0.7:
+                recommendation = "Achat" if asset_data['change_percent'] > 0 else "Conserver"
+                reasoning = "Votre profil de risque élevé vous permet de profiter des opportunités de croissance de cet actif."
+            elif st.session_state.profile.risk_score < 0.3:
+                recommendation = "Conserver" if asset_data['change_percent'] > 0 else "Vente"
+                reasoning = "Votre profil de risque conservateur suggère une approche prudente avec cet actif."
+            else:
+                recommendation = "Conserver"
+                reasoning = "Cet actif correspond à votre profil de risque modéré et peut contribuer à la diversification de votre portefeuille."
+        else:
+            # Generic recommendation
+            recommendation = "Conserver"
+            reasoning = "Créez votre profil pour obtenir des recommandations personnalisées."
+
+        st.write(f"**Recommandation**: {recommendation}")
+        st.write(f"**Justification**: {reasoning}")
+
+# Sidebar for filters and options
+with st.sidebar:
+    st.header("DeepVest")
+    st.image("https://img.icons8.com/color/96/000000/financial-growth.png", width=100)
+
+    st.subheader("Options")
+
+    # Market filters
+    st.write("**Filtres de marché**")
+    market_filter = st.multiselect("Marchés", ["Actions", "Obligations", "Matières premières", "Crypto-monnaies", "Devises"])
+    region_filter = st.multiselect("Régions", ["Amérique du Nord", "Europe", "Asie-Pacifique", "Marchés émergents"])
+
+    # Advanced settings
+    st.write("**Paramètres avancés**")
+    show_advanced = st.checkbox("Afficher les métriques avancées")
+
+    # About
+    st.write("---")
+    st.write("*DeepVest - ©2025 - version 1.0*")
+    st.write("[Documentation](https://www.example.com) | [Support](mailto:support@deepvest.ai)")
requirements.txt
ADDED
@@ -0,0 +1,51 @@
+# Core dependencies
+numpy>=1.21.0
+pandas>=1.3.0
+scipy>=1.7.0
+tqdm>=4.62.2
+python-dotenv>=0.19.0
+
+# Streamlit UI
+streamlit>=1.8.0
+
+# Data visualization
+matplotlib>=3.4.3
+seaborn>=0.11.2
+plotly>=5.6.0
+
+# Machine Learning & Deep Learning
+scikit-learn>=0.24.2
+xgboost>=1.5.0
+torch>=1.9.0
+transformers>=4.0.0
+sentencepiece
+protobuf
+
+# Financial data & analysis
+yfinance>=0.1.63
+finnhub-python>=2.4.12
+statsmodels>=0.12.2
+arch>=5.0.0
+cvxopt>=1.2.7
+
+# Reinforcement Learning
+gymnasium>=0.29.0
+
+# Web & Data processing
+aiohttp>=3.8.0
+websockets>=9.1
+# Note: asyncio is part of the Python standard library; the PyPI "asyncio" package is an obsolete backport and should not be installed
+beautifulsoup4>=4.9.3
+newspaper3k>=0.2.8
+tweepy>=4.5.0
+
+# Development & testing
+pytest>=7.0.0
+pytest-asyncio>=0.18.0
+pytest-cov>=3.0.0
+jupyter>=1.0.0
+notebook>=6.4.0
+ipython>=7.26.0
+ipywidgets>=7.6.5
+PyYAML>=5.4.1
+questionary>=1.10.0
setup.py
ADDED
@@ -0,0 +1,35 @@
+from setuptools import setup, find_packages
+
+with open("README.md", "r", encoding="utf-8") as fh:
+    long_description = fh.read()
+
+setup(
+    name="deepvest",
+    version="1.0.0",
+    author="Auteur",
+    author_email="email@example.com",
+    description="Assistant d'investissement personnalisé avec IA",
+    long_description=long_description,
+    long_description_content_type="text/markdown",
+    url="https://github.com/yourusername/deepvest",
+    packages=find_packages(),
+    classifiers=[
+        "Programming Language :: Python :: 3",
+        "License :: OSI Approved :: MIT License",
+        "Operating System :: OS Independent",
+    ],
+    python_requires=">=3.8",
+    install_requires=[
+        "numpy>=1.21.0",
+        "pandas>=1.3.0",
+        "scipy>=1.7.0",
+        "matplotlib>=3.4.3",
+        "scikit-learn>=0.24.2",
+        "torch>=1.9.0",
+        "streamlit>=1.8.0",
+        "yfinance>=0.1.63",
+        "transformers>=4.0.0",
+        "aiohttp>=3.8.0",
+        "plotly>=5.6.0",
+    ],
+)
src/.ipynb_checkpoints/__init__-checkpoint.py
ADDED
File without changes

src/__init__.py
ADDED
@@ -0,0 +1,90 @@
+# Version definition
+__version__ = '1.0.0'
+
+# Core imports - Using relative imports
+from .core.market_analyzer import (
+    MarketAnalyzer,
+    MarketAnalyzerImpl,
+    RiskAssessment
+)
+from .core.market_states import (
+    MarketConditions,
+    MarketRegime
+)
+from .core.portfolio_optimizer import (
+    PortfolioOptimizerImpl,
+    OptimizationConstraints
+)
+
+# Strategy imports
+from .strategies.alternative_assets import (
+    AlternativeAsset,
+    PrivateEquityInvestment,
+    RealEstateInvestment,
+    HedgeFundInvestment,
+    CryptoInvestment,
+    AlternativeAssetsAnalyzer,
+    AlternativeAssetsPortfolio
+)
+from .strategies.dynamic_allocator import (
+    MarketState,
+    DynamicAssetAllocationEnv,
+    DynamicAssetAllocator,
+    DynamicPortfolioManager,
+    DQNNetwork
+)
+from .strategies.alternative_assets_utils import create_alternative_assets_portfolio
+
+# Analysis imports
+from .analysis.backtest_engine import (
+    BacktestEngine,
+    BacktestConfig,
+    BacktestResult,
+    ValidationFramework
+)
+
+# Monitoring imports
+from .monitoring.realtime_monitor import (
+    RealTimeMonitor,
+    WebSocketDataHandler,
+    PortfolioAlert,
+    RebalanceSignal
+)
+
+__all__ = [
+    # Core components
+    'MarketAnalyzer',
+    'MarketAnalyzerImpl',
+    'MarketConditions',
+    'MarketRegime',
+    'RiskAssessment',
+    'PortfolioOptimizerImpl',
+    'OptimizationConstraints',
+
+    # Strategy components
+    'AlternativeAsset',
+    'PrivateEquityInvestment',
+    'RealEstateInvestment',
+    'HedgeFundInvestment',
+    'CryptoInvestment',
+    'AlternativeAssetsAnalyzer',
+    'AlternativeAssetsPortfolio',
+    'create_alternative_assets_portfolio',
+    'MarketState',
+    'DynamicAssetAllocationEnv',
+    'DynamicAssetAllocator',
+    'DynamicPortfolioManager',
+    'DQNNetwork',
+
+    # Analysis components
+    'BacktestEngine',
+    'BacktestConfig',
+    'BacktestResult',
+    'ValidationFramework',
+
+    # Monitoring components
+    'RealTimeMonitor',
+    'WebSocketDataHandler',
+    'PortfolioAlert',
+    'RebalanceSignal'
+]
src/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (1.47 kB). View file

src/__pycache__/__init__.cpython-312.pyc
ADDED
Binary file (1.54 kB). View file

src/analysis/.ipynb_checkpoints/__init__-checkpoint.py
ADDED
File without changes

src/analysis/.ipynb_checkpoints/backtest_engine-checkpoint.py
ADDED
File without changes

src/analysis/.ipynb_checkpoints/enhanced_backtest-checkpoint.py
ADDED
File without changes

src/analysis/.ipynb_checkpoints/goal_simulator-checkpoint.py
ADDED
File without changes

src/analysis/.ipynb_checkpoints/ml_analyzer-checkpoint.py
ADDED
File without changes

src/analysis/.ipynb_checkpoints/scenario_manager-checkpoint.py
ADDED
File without changes

src/analysis/.ipynb_checkpoints/strategy_optimizer-checkpoint.py
ADDED
File without changes
src/analysis/__init__.py
ADDED
@@ -0,0 +1,29 @@
+# src/analysis/__init__.py
+
+from .backtest_engine import (
+    BacktestEngine,
+    BacktestConfig,
+    BacktestResult,
+    ValidationFramework
+)
+
+from .ml_analyzer import (
+    MLEnhancedAnalyzer,
+    MarketRegimeClassifier,
+    SentimentAnalyzer,
+    MarketImpactPredictor,
+    AlternativeDataAnalyzer
+)
+
+__all__ = [
+    'BacktestEngine',
+    'BacktestConfig',
+    'BacktestResult',
+    'ValidationFramework',
+    'MLEnhancedAnalyzer',
+    'MarketRegimeClassifier',
+    'SentimentAnalyzer',
+    'MarketImpactPredictor',
+    'AlternativeDataAnalyzer'
+]
src/analysis/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (530 Bytes). View file

src/analysis/__pycache__/backtest_engine.cpython-310.pyc
ADDED
Binary file (62.3 kB). View file

src/analysis/__pycache__/enhanced_backtest.cpython-310.pyc
ADDED
Binary file (5.67 kB). View file

src/analysis/__pycache__/goal_simulator.cpython-310.pyc
ADDED
Binary file (6.32 kB). View file

src/analysis/__pycache__/ml_analyzer.cpython-310.pyc
ADDED
Binary file (27.8 kB). View file

src/analysis/__pycache__/scenario_manager.cpython-310.pyc
ADDED
Binary file (6.07 kB). View file

src/analysis/__pycache__/strategy_optimizer.cpython-310.pyc
ADDED
Binary file (7.46 kB). View file
src/analysis/backtest_engine.py
ADDED
@@ -0,0 +1,2267 @@
1 |
+
import numpy as np
|
2 |
+
import pandas as pd
|
3 |
+
from typing import List, Dict, Tuple, Optional, Any
|
4 |
+
from dataclasses import dataclass
|
5 |
+
from datetime import datetime, timedelta
|
6 |
+
import statsmodels.api as sm
|
7 |
+
from scipy import stats
|
8 |
+
from sklearn.model_selection import TimeSeriesSplit
|
9 |
+
from sklearn.mixture import GaussianMixture
|
10 |
+
import matplotlib.pyplot as plt
|
11 |
+
from concurrent.futures import ProcessPoolExecutor
|
12 |
+
import warnings
|
13 |
+
warnings.filterwarnings('ignore')
|
14 |
+
|
15 |
+
# Imports from other modules
|
16 |
+
from src.core import (
|
17 |
+
PortfolioOptimizerImpl,
|
18 |
+
OptimizationConstraints,
|
19 |
+
MarketAnalyzerImpl
|
20 |
+
)
|
21 |
+
from src.analysis.ml_analyzer import MLEnhancedAnalyzer
|
22 |
+
from src.core.types import MarketAnalyzerProtocol
|
23 |
+
from ..strategies.alternative_assets import AlternativeAssetsPortfolio
|
24 |
+
from ..core.risk_metrics import EnhancedRiskMetrics
|
25 |
+
from ..core.profiling import PersonalProfile
|
26 |
+
from ..core.event_types import MarketUpdate, PortfolioAlert
|
27 |
+
from ..core.types import MarketAnalyzerProtocol as MarketAnalyzer
|
28 |
+
|
29 |
+
@dataclass
|
30 |
+
class BacktestResult:
|
31 |
+
"""Store backtest results"""
|
32 |
+
returns: pd.Series
|
33 |
+
positions: pd.DataFrame
|
34 |
+
trades: pd.DataFrame
|
35 |
+
metrics: Dict[str, float]
|
36 |
+
risk_metrics: Dict[str, float]
|
37 |
+
transaction_costs: pd.Series
|
38 |
+
turnover: pd.Series
|
39 |
+
drawdowns: pd.Series
|
40 |
+
attribution: Dict[str, pd.Series]
|
41 |
+
stress_test_results: Dict[str, float]
|
42 |
+
|
43 |
+
def to_dict(self) -> Dict:
|
44 |
+
"""Convert results to dictionary format"""
|
45 |
+
return {
|
46 |
+
'returns': self.returns.to_dict(),
|
47 |
+
'positions': self.positions.to_dict(),
|
48 |
+
'trades': self.trades.to_dict(),
|
49 |
+
'metrics': self.metrics,
|
50 |
+
'risk_metrics': self.risk_metrics,
|
51 |
+
'transaction_costs': self.transaction_costs.to_dict(),
|
52 |
+
'turnover': self.turnover.to_dict(),
|
53 |
+
'drawdowns': self.drawdowns.to_dict(),
|
54 |
+
'attribution': {k: v.to_dict() for k, v in self.attribution.items()},
|
55 |
+
'stress_test_results': self.stress_test_results
|
56 |
+
}
|
57 |
+
|
58 |
+
@dataclass
|
59 |
+
class BacktestConfig:
|
60 |
+
"""Configuration for backtest"""
|
61 |
+
start_date: datetime
|
62 |
+
end_date: datetime
|
63 |
+
initial_capital: float
|
64 |
+
rebalance_frequency: str = "monthly"
|
65 |
+
transaction_cost_model: str = "percentage"
|
66 |
+
transaction_cost_params: Dict[str, float] = None
|
67 |
+
include_alternative_assets: bool = True
|
68 |
+
risk_free_rate: float = 0.02
|
69 |
+
benchmark_index: str = "SPY"
|
70 |
+
stress_periods: Dict[str, Tuple[datetime, datetime]] = None
|
71 |
+
|
72 |
+
def __post_init__(self):
|
73 |
+
"""Validate configuration after initialization"""
|
74 |
+
if self.transaction_cost_params is None:
|
75 |
+
self.transaction_cost_params = {'cost_rate': 0.001}
|
76 |
+
|
77 |
+
if self.stress_periods is None:
|
78 |
+
self.stress_periods = {
|
79 |
+
'covid_crash': (
|
80 |
+
datetime(2020, 2, 19),
|
81 |
+
datetime(2020, 3, 23)
|
82 |
+
),
|
83 |
+
'financial_crisis': (
|
84 |
+
datetime(2008, 9, 1),
|
85 |
+
datetime(2009, 3, 1)
|
86 |
+
)
|
87 |
+
}
|
88 |
+
|
89 |
+
class BacktestEngine:
|
90 |
+
"""Main backtesting engine with enhanced risk metrics and performance analysis"""
|
91 |
+
|
92 |
+
def __init__(self, portfolio_optimizer, market_analyzer, backtest_engine, **kwargs):
|
93 |
+
self.backtest_engine = backtest_engine
|
94 |
+
|
95 |
+
self.risk_metrics = EnhancedRiskMetrics(returns=None, prices=None)
|
96 |
+
if not isinstance(portfolio_optimizer, PortfolioOptimizerImpl):
|
97 |
+
raise TypeError("portfolio_optimizer must be instance of PortfolioOptimizerImpl")
|
98 |
+
if market_analyzer is None:
|
99 |
+
ml_analyzer = MLEnhancedAnalyzer()
|
100 |
+
self.market_analyzer = MarketAnalyzerImpl(ml_analyzer=ml_analyzer)
|
101 |
+
else:
|
102 |
+
self.market_analyzer = market_analyzer
|
103 |
+
self.alternative_assets_portfolio = alternative_assets_portfolio
|
104 |
+
self.risk_free_rate = risk_free_rate
|
105 |
+
|
106 |
+
# État interne du backtest
|
107 |
+
self.current_positions = None
|
108 |
+
self.transaction_costs = None
|
109 |
+
self.current_portfolio_value = None
|
110 |
+
self.rebalance_dates = []
|
111 |
+
|
112 |
+
# Cache pour les calculs
|
113 |
+
self._cache = {}
|
114 |
+
|
115 |
+
def _initialize_market_data(self, data: pd.DataFrame):
|
116 |
+
"""Initialize market data properly"""
|
117 |
+
try:
|
118 |
+
# Vérifier les données
|
119 |
+
if data is None or data.empty:
|
120 |
+
raise ValueError("No market data provided")
|
121 |
+
|
122 |
+
# Normaliser l'index temporel
|
123 |
+
data.index = pd.to_datetime(data.index).tz_localize(None)
|
124 |
+
|
125 |
+
# Créer une copie pour éviter les modifications indésirables
|
126 |
+
self.market_data = data.copy()
|
127 |
+
|
128 |
+
# Calculer les rendements
|
129 |
+
self.returns = self.market_data.pct_change().fillna(0)
|
130 |
+
|
131 |
+
# S'assurer que les données sont chronologiquement ordonnées
|
132 |
+
self.market_data = self.market_data.sort_index()
|
133 |
+
self.returns = self.returns.sort_index()
|
134 |
+
|
135 |
+
print("Market data initialized successfully")
|
136 |
+
return True
|
137 |
+
|
138 |
+
except Exception as e:
|
139 |
+
print(f"Error initializing market data: {e}")
|
140 |
+
return False
|
141 |
+
|
142 |
+
|
143 |
+
async def run_backtest(self,
|
144 |
+
config: BacktestConfig,
|
145 |
+
data: pd.DataFrame,
|
146 |
+
constraints: OptimizationConstraints,
|
147 |
+
alternative_data: Optional[pd.DataFrame] = None) -> BacktestResult:
|
148 |
+
|
149 |
+
# Normaliser les timezones
|
150 |
+
data.index = pd.to_datetime(data.index).tz_localize(None)
|
151 |
+
config.start_date = pd.to_datetime(config.start_date).tz_localize(None)
|
152 |
+
config.end_date = pd.to_datetime(config.end_date).tz_localize(None)
|
153 |
+
"""Execute backtest with given configuration"""
|
154 |
+
try:
|
155 |
+
# Validation and initialization of market data
|
156 |
+
self._validate_data(data, config)
|
157 |
+
self._initialize_market_data(data)
|
158 |
+
market_conditions = await self.market_analyzer.analyze_market_conditions(data)
|
159 |
+
|
160 |
+
# Initialize backtest state
|
161 |
+
self._initialize_backtest(config, data)
|
162 |
+
|
163 |
+
# Execute simulation - Passer tous les arguments nécessaires
|
164 |
+
results = self._run_simulation(
|
165 |
+
config=config,
|
166 |
+
constraints=constraints,
|
167 |
+
alternative_data=alternative_data,
|
168 |
+
data=self.market_data # Ajout de data ici
|
169 |
+
)
|
170 |
+
|
171 |
+
# Calculate metrics
|
172 |
+
metrics = self._calculate_performance_metrics(results)
|
173 |
+
risk_metrics = self._calculate_risk_metrics(results)
|
174 |
+
attribution = self._perform_attribution_analysis(results)
|
175 |
+
|
176 |
+
# Run stress tests
|
177 |
+
stress_results = self._run_stress_tests(results, config.stress_periods)
|
178 |
+
|
179 |
+
return BacktestResult(
|
180 |
+
returns=results['returns'],
|
181 |
+
positions=results['positions'],
|
182 |
+
trades=results['trades'],
|
183 |
+
metrics=metrics,
|
184 |
+
risk_metrics=risk_metrics,
|
185 |
+
transaction_costs=results['transaction_costs'],
|
186 |
+
turnover=results['turnover'],
|
187 |
+
drawdowns=self._calculate_drawdowns(results['returns']),
|
188 |
+
attribution=attribution,
|
189 |
+
stress_test_results=stress_results
|
190 |
+
)
|
191 |
+
|
192 |
+
except Exception as e:
|
193 |
+
print(f"Error during backtest execution: {e}")
|
194 |
+
raise
|
195 |
+
|
196 |
+
async def _run_simulation(self, config: BacktestConfig, data: pd.DataFrame, constraints: OptimizationConstraints, alternative_data: Optional[pd.DataFrame] = None) -> Dict[str, pd.DataFrame]:
|
197 |
+
"""Run the main simulation loop"""
|
198 |
+
try:
|
199 |
+
results = {
|
200 |
+
'positions': pd.DataFrame(index=data.index, columns=data.columns, dtype=float),
|
201 |
+
'trades': pd.DataFrame(index=data.index, columns=data.columns, dtype=float),
|
202 |
+
'returns': pd.Series(index=data.index, dtype=float),
|
203 |
+
'transaction_costs': pd.Series(index=data.index, dtype=float),
|
204 |
+
'turnover': pd.Series(index=data.index, dtype=float),
|
205 |
+
'portfolio_values': pd.Series(index=data.index, dtype=float)
|
206 |
+
}
|
207 |
+
|
208 |
+
portfolio_value = config.initial_capital
|
209 |
+
previous_weights = np.zeros(len(data.columns))
|
210 |
+
|
211 |
+
for date in data.index:
|
212 |
+
if self._should_rebalance(date, config.rebalance_frequency):
|
213 |
+
# Get current market data slice
|
214 |
+
current_data = data.loc[:date]
|
215 |
+
|
216 |
+
try:
|
217 |
+
# Analyze market conditions - MODIFICATION ICI
|
218 |
+
market_conditions = await self.market_analyzer.analyze_market_conditions(current_data)
|
219 |
+
except Exception as e:
|
220 |
+
print(f"Market analysis error: {str(e)}")
|
221 |
+
market_conditions = None
|
222 |
+
|
223 |
+
# Optimize portfolio
|
224 |
+
try:
|
225 |
+
new_weights = self.portfolio_optimizer.optimize_portfolio(constraints=constraints)
|
226 |
+
except Exception as e:
|
227 |
+
print(f"Optimization error: {str(e)}")
|
228 |
+
new_weights = previous_weights
|
229 |
+
|
230 |
+
# Calculate trades and costs
|
231 |
+
trades = self._calculate_trades(previous_weights, new_weights, portfolio_value)
|
232 |
+
costs = self._calculate_transaction_costs(trades, data.loc[date], config.transaction_cost_model, config.transaction_cost_params)
|
233 |
+
|
234 |
+
portfolio_value *= (1 - costs)
|
235 |
+
previous_weights = new_weights
|
236 |
+
|
237 |
+
results['trades'].loc[date] = trades
|
238 |
+
results['transaction_costs'].loc[date] = costs
|
239 |
+
results['turnover'].loc[date] = np.sum(np.abs(trades)) / (2 * portfolio_value)
|
240 |
+
|
241 |
+
# Calculate daily returns
|
242 |
+
daily_return = self._calculate_daily_returns(date, data, previous_weights)
|
243 |
+
portfolio_value *= (1 + daily_return)
|
244 |
+
|
245 |
+
results['returns'].loc[date] = daily_return
|
246 |
+
results['positions'].loc[date] = previous_weights
|
247 |
+
results['portfolio_values'].loc[date] = portfolio_value
|
248 |
+
|
249 |
+
return results
|
250 |
+
|
251 |
+
except Exception as e:
|
252 |
+
print(f"Error in simulation: {str(e)}")
|
253 |
+
raise
|
254 |
+
|
255 |
+
def _validate_data(self, data: pd.DataFrame, config: BacktestConfig) -> None:
|
256 |
+
"""Validate input data for backtesting"""
|
257 |
+
try:
|
258 |
+
# Vérifier si les données sont vides
|
259 |
+
if data is None or data.empty:
|
260 |
+
raise ValueError("Data cannot be empty")
|
261 |
+
|
262 |
+
# Vérifier et corriger l'ordre de l'index si nécessaire
|
263 |
+
if not data.index.is_monotonic_increasing:
|
264 |
+
print("Warning: Data index not monotonic, sorting data...")
|
265 |
+
data.sort_index(inplace=True)
|
266 |
+
|
267 |
+
# Convertir les dates en datetime
|
268 |
+
data.index = pd.to_datetime(data.index)
|
269 |
+
config_start = pd.to_datetime(config.start_date)
|
270 |
+
config_end = pd.to_datetime(config.end_date)
|
271 |
+
|
272 |
+
# Vérifier la couverture temporelle
|
273 |
+
if data.index[0] > config_start or data.index[-1] < config_end:
|
274 |
+
raise ValueError(
|
275 |
+
f"Data range {data.index[0]} to {data.index[-1]} "
|
276 |
+
f"does not cover backtest period {config_start} to {config_end}"
|
277 |
+
)
|
278 |
+
|
279 |
+
# Vérifier les valeurs manquantes
|
280 |
+
if data.isnull().any().any():
|
281 |
+
missing_cols = data.columns[data.isnull().any()].tolist()
|
282 |
+
raise ValueError(f"Missing values found in columns: {missing_cols}")
|
283 |
+
|
284 |
+
except Exception as e:
|
285 |
+
print(f"Error validating data: {str(e)}")
|
286 |
+
raise
|
287 |
+
|
288 |
+
def _initialize_backtest(self, config: BacktestConfig, data: pd.DataFrame) -> None:
|
289 |
+
"""Initialize backtest state"""
|
290 |
+
self.current_positions = pd.Series(0, index=data.columns)
|
291 |
+
self.transaction_costs = pd.Series(0, index=data.index)
|
292 |
+
self.current_portfolio_value = config.initial_capital
|
293 |
+
self._cache = {} # Reset cache
|
294 |
+
|
295 |
+
# Generate rebalancing dates
|
296 |
+
self.rebalance_dates = self._generate_rebalance_dates(
|
297 |
+
data.index, config.rebalance_frequency
|
298 |
+
)
|
299 |
+
|
300 |
+
|
301 |
+
    def _clean_results(self, results: Dict[str, pd.DataFrame]) -> Dict[str, pd.DataFrame]:
        """Clean the simulation results"""
        try:
            # Replace inf and -inf with NaN
            for key, value in results.items():
                if isinstance(value, (pd.DataFrame, pd.Series)):
                    results[key] = value.replace([np.inf, -np.inf], np.nan)

            # Forward fill the positions (fillna(method='ffill') is deprecated)
            results['positions'] = results['positions'].ffill()

            # Replace any remaining NaN with 0
            for key, value in results.items():
                if isinstance(value, (pd.DataFrame, pd.Series)):
                    results[key] = value.fillna(0)

            return results

        except Exception as e:
            print(f"Error cleaning results: {e}")
            return results

    def _get_sector_mapping(self, columns: pd.Index) -> Dict[str, List[str]]:
        """Get sector mapping for assets"""
        try:
            return {'default': columns.tolist()}
        except Exception as e:
            print(f"Error in sector mapping: {str(e)}")
            return {}

    def _calculate_value_exposure(self, positions: pd.DataFrame) -> pd.Series:
        """Calculate value factor exposure"""
        try:
            return positions.rolling(window=252).mean()
        except Exception as e:
            print(f"Error calculating value exposure: {str(e)}")
            return pd.Series(index=positions.index, dtype=float)

    def _calculate_growth_exposure(self, positions: pd.DataFrame) -> pd.Series:
        """Calculate growth factor exposure"""
        try:
            return positions.pct_change().rolling(window=252).mean()
        except Exception as e:
            print(f"Error calculating growth exposure: {str(e)}")
            return pd.Series(index=positions.index, dtype=float)

    def _calculate_momentum_exposure(self, positions: pd.DataFrame) -> pd.Series:
        """Calculate momentum factor exposure"""
        try:
            returns = positions.pct_change()
            return returns.rolling(window=252).mean()
        except Exception as e:
            print(f"Error calculating momentum exposure: {str(e)}")
            return pd.Series(index=positions.index, dtype=float)

    def _calculate_quality_exposure(self, positions: pd.DataFrame) -> pd.Series:
        """Calculate quality factor exposure (inverse of short-term volatility)"""
        try:
            returns = positions.pct_change()
            return 1 / returns.rolling(window=63).std()
        except Exception as e:
            print(f"Error calculating quality exposure: {str(e)}")
            return pd.Series(index=positions.index, dtype=float)

    def _calculate_size_exposure(self, positions: pd.DataFrame) -> pd.Series:
        """Calculate size factor exposure"""
        try:
            return positions.abs().sum(axis=1)
        except Exception as e:
            print(f"Error calculating size exposure: {str(e)}")
            return pd.Series(index=positions.index, dtype=float)

    def _calculate_daily_returns(self, date: datetime, market_returns: pd.DataFrame, positions: np.ndarray) -> float:
        """Calculate daily returns, properly handling numpy arrays"""
        try:
            # Convert the numpy array to a pandas Series with the proper index
            positions_series = pd.Series(positions, index=market_returns.columns)

            # Get the returns for the specific date
            if isinstance(market_returns, pd.DataFrame):
                daily_returns = market_returns.loc[date]
            else:
                return 0.0

            # Calculate the position-weighted return
            return (positions_series * daily_returns).sum()

        except Exception as e:
            print(f"Error calculating daily returns for {date}: {e}")
            return 0.0

    def _decompose_performance(self, returns: pd.Series, positions: pd.DataFrame) -> Dict[str, pd.Series]:
        """Decompose portfolio performance"""
        try:
            decomposition = {
                'market_effect': returns * positions.mean(),
                'selection_effect': returns * (positions - positions.mean()),
                'interaction_effect': returns * positions.diff()
            }
            return decomposition
        except Exception as e:
            print(f"Error decomposing performance: {str(e)}")
            return {}

    def _calculate_trades(self,
                          new_weights: np.ndarray,
                          portfolio_value: float,
                          prices: pd.Series) -> np.ndarray:
        """Calculate the trades required to reach the new weights, with advanced handling"""
        current_values = self.current_positions * portfolio_value
        target_values = new_weights * portfolio_value

        # Compute the trades in number of units
        trades = (target_values - current_values) / prices

        # Round to the nearest whole number of units
        trades = np.round(trades, decimals=0)

        # Recompute the weights after rounding
        actual_values = trades * prices
        actual_weights = actual_values / portfolio_value

        # Store the actual weights for later use
        self._cache['actual_weights'] = actual_weights

        return trades

    def _calculate_performance_metrics(self, results: Dict[str, pd.DataFrame]) -> Dict[str, float]:
        """Calculate comprehensive performance metrics"""
        try:
            # Extract and clean the returns
            returns = results['returns'].replace([np.inf, -np.inf], np.nan).dropna()

            if len(returns) == 0:
                raise ValueError("No valid returns data")

            # Compute the base metrics
            total_return = (1 + returns).prod() - 1

            metrics = {
                'total_return': total_return,
                'cagr': self._calculate_cagr(returns),
                'volatility': returns.std() * np.sqrt(252),
                'sharpe_ratio': self._calculate_sharpe_ratio(returns),
                'sortino_ratio': self._calculate_sortino_ratio(returns),
                'max_drawdown': self._calculate_max_drawdown(returns),
                'win_rate': len(returns[returns > 0]) / len(returns),
            }

            # Add advanced metrics only if they can be computed
            try:
                metrics.update({
                    'calmar_ratio': abs(metrics['cagr'] / metrics['max_drawdown']) if metrics['max_drawdown'] != 0 else np.inf,
                    'profit_factor': abs(returns[returns > 0].sum() / returns[returns < 0].sum()) if returns[returns < 0].sum() != 0 else np.inf,
                    'avg_win': returns[returns > 0].mean() if len(returns[returns > 0]) > 0 else 0,
                    'avg_loss': returns[returns < 0].mean() if len(returns[returns < 0]) > 0 else 0
                })
            except Exception as e:
                print(f"Warning: Could not calculate some advanced metrics: {e}")

            # Drop NaN and infinite values from the final results
            metrics = {k: v for k, v in metrics.items() if not np.isnan(v) and not np.isinf(v)}

            return metrics

        except Exception as e:
            print(f"Error calculating performance metrics: {e}")
            return {}

    def _calculate_cagr(self, returns: pd.Series) -> float:
        """Calculate Compound Annual Growth Rate"""
        if returns is None or len(returns) == 0:
            return 0.0

        total_return = (1 + returns).prod()
        n_years = len(returns) / 252  # 252 trading days per year

        if n_years > 0 and total_return > 0:
            return (total_return ** (1 / n_years)) - 1
        else:
            return 0.0

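A quick standalone check of the CAGR formula used above (illustrative only; the `cagr` helper and the constant-return series are hypothetical, assuming 252 trading days per year):

```python
import pandas as pd

def cagr(returns: pd.Series, periods_per_year: int = 252) -> float:
    """Compound Annual Growth Rate from a series of periodic returns."""
    total_return = (1 + returns).prod()
    n_years = len(returns) / periods_per_year
    if n_years > 0 and total_return > 0:
        return total_return ** (1 / n_years) - 1
    return 0.0

# One year of a constant 0.03% daily return compounds to (1.0003 ** 252) - 1,
# roughly 7.9% annualized; two years of the same return gives the same CAGR.
daily = pd.Series([0.0003] * 252)
print(round(cagr(daily), 4))
```

Annualizing through the `1 / n_years` exponent makes the result invariant to the sample length for a constant return, which is the property the backtest relies on.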
    def _calculate_sortino_ratio(self, returns: pd.Series) -> float:
        """Calculate Sortino ratio"""
        try:
            excess_returns = returns - self.risk_free_rate / 252
            negative_returns = excess_returns[excess_returns < 0]
            downside_std = negative_returns.std() * np.sqrt(252)

            if downside_std == 0:
                return 0.0

            return excess_returns.mean() * 252 / downside_std
        except Exception as e:
            print(f"Error calculating Sortino ratio: {e}")
            return 0.0

    def _calculate_var(self, returns: pd.Series, confidence: float = 0.95) -> float:
        """Calculate Value at Risk"""
        try:
            return np.percentile(returns, (1 - confidence) * 100)
        except Exception as e:
            print(f"Error calculating VaR: {e}")
            return 0.0

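The historical VaR above is just an empirical quantile of the return distribution; a minimal sketch on simulated data (the seed, mean, and volatility are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)  # simulated daily returns

# 95% historical VaR: the 5th percentile of observed returns,
# i.e. the loss threshold exceeded on roughly 5% of days
var_95 = np.percentile(returns, 5)
print(f"95% one-day VaR: {var_95:.4%}")
```

By construction about 5% of the observations fall below `var_95`, which is what distinguishes historical VaR from the Gaussian variant computed elsewhere in this module.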
    def _get_benchmark_returns(self, results: Dict[str, pd.DataFrame]) -> pd.Series:
        """Get benchmark returns series"""
        returns = results['returns']  # returns is a Series, not a DataFrame
        # Return an empty Series when no benchmark is configured
        return pd.Series(dtype=float)

    def _get_factor_returns(self) -> pd.DataFrame:
        """Retrieve the risk factors used for performance attribution"""
        try:
            # Analysis period
            dates = self.returns.index

            # Base factors
            factors = pd.DataFrame(index=dates)

            # Market Factor (Excess Return)
            factors['MKT'] = self.returns.mean(axis=1) - self.risk_free_rate / 252

            # Size Factor (SMB)
            if hasattr(self, 'market_data') and 'market_cap' in self.market_data:
                factors['SMB'] = self._calculate_size_factor()
            else:
                factors['SMB'] = pd.Series(0, index=dates)

            # Value Factor (HML)
            if hasattr(self, 'market_data') and 'book_to_market' in self.market_data:
                factors['HML'] = self._calculate_value_factor()
            else:
                factors['HML'] = pd.Series(0, index=dates)

            # Momentum Factor (MOM)
            factors['MOM'] = self._calculate_momentum_factor()

            # Volatility Factor (VOL)
            factors['VOL'] = self._calculate_volatility_factor()

            return factors

        except Exception as e:
            print(f"Error computing the factors: {e}")
            # Return a minimal DataFrame on error
            return pd.DataFrame({'MKT': self.returns.mean(axis=1)})

    def _calculate_risk_metrics(self, results: Dict[str, pd.DataFrame]) -> Dict[str, float]:
        """Calculate comprehensive risk metrics"""
        try:
            returns = results['returns'].replace([np.inf, -np.inf], np.nan).dropna()

            if len(returns) == 0:
                raise ValueError("No valid returns data")

            risk_metrics = {}

            # 1. Value at Risk metrics
            try:
                risk_metrics.update({
                    'var_historical_95': self.risk_metrics.calculate_historical_var(returns, 0.95),
                    'var_gaussian_95': self.risk_metrics.calculate_gaussian_var(returns, 0.95),
                    'cvar_95': self.risk_metrics.calculate_cvar(returns, 0.95)
                })
            except Exception as e:
                print(f"Error calculating VaR metrics: {e}")
                risk_metrics.update({
                    'var_historical_95': 0.0,
                    'var_gaussian_95': 0.0,
                    'cvar_95': 0.0
                })

            # 2. Volatility metrics
            try:
                risk_metrics.update({
                    'volatility_annual': returns.std() * np.sqrt(252),
                    'downside_deviation': self.risk_metrics.calculate_downside_deviation(returns),
                    'upside_volatility': self.risk_metrics.calculate_upside_volatility(returns),
                    'volatility_skew': self.risk_metrics.calculate_volatility_skew(returns)
                })
            except Exception as e:
                print(f"Error calculating volatility metrics: {e}")
                risk_metrics.update({
                    'volatility_annual': 0.0,
                    'downside_deviation': 0.0,
                    'upside_volatility': 0.0,
                    'volatility_skew': 0.0
                })

            # 3. Distribution metrics
            try:
                risk_metrics.update({
                    'skewness': returns.skew(),
                    'kurtosis': returns.kurtosis(),
                    'tail_ratio': self.risk_metrics.calculate_tail_ratio(returns),
                    'jar_bera_stat': self.risk_metrics.calculate_jarque_bera(returns)
                })
            except Exception as e:
                print(f"Error calculating distribution metrics: {e}")
                risk_metrics.update({
                    'skewness': 0.0,
                    'kurtosis': 0.0,
                    'tail_ratio': 1.0,
                    'jar_bera_stat': 0.0
                })

            # 4. Market metrics
            try:
                risk_metrics.update({
                    'beta': self.risk_metrics.calculate_beta(returns),
                    'alpha': self.risk_metrics.calculate_alpha(returns),
                    'tracking_error': self.risk_metrics.calculate_tracking_error(returns),
                    'r_squared': self.risk_metrics.calculate_r_squared(returns)
                })
            except Exception as e:
                print(f"Error calculating market metrics: {e}")
                risk_metrics.update({
                    'beta': 1.0,
                    'alpha': 0.0,
                    'tracking_error': 0.0,
                    'r_squared': 0.0
                })

            # 5. Advanced metrics
            try:
                risk_metrics.update({
                    'ulcer_index': self.risk_metrics.calculate_ulcer_index(returns),
                    'pain_index': self.risk_metrics.calculate_pain_index(returns),
                    'pain_ratio': self.risk_metrics.calculate_pain_ratio(returns),
                    'burke_ratio': self.risk_metrics.calculate_burke_ratio(returns)
                })
            except Exception as e:
                print(f"Error calculating advanced metrics: {e}")
                risk_metrics.update({
                    'ulcer_index': 0.0,
                    'pain_index': 0.0,
                    'pain_ratio': 0.0,
                    'burke_ratio': 0.0
                })

            # 6. Regime analysis
            try:
                regime_metrics = self._calculate_regime_metrics(returns)
                risk_metrics.update(regime_metrics)
            except Exception as e:
                print(f"Error calculating regime metrics: {e}")
                risk_metrics.update({
                    'regime_0': {'frequency': 0.0, 'avg_return': 0.0},
                    'regime_1': {'frequency': 0.0, 'avg_return': 0.0},
                    'regime_2': {'frequency': 0.0, 'avg_return': 0.0}
                })

            # Drop invalid values from the final results
            risk_metrics = {
                k: v for k, v in risk_metrics.items()
                if not (isinstance(v, float) and (np.isnan(v) or np.isinf(v)))
            }

            return risk_metrics

        except Exception as e:
            print(f"Error calculating risk metrics: {e}")
            return {}

    def _calculate_rolling_metrics(self, returns: pd.Series, window: int = 252) -> Dict[str, pd.Series]:
        """Calculate rolling performance metrics"""
        try:
            rolling_metrics = {
                'rolling_sharpe': self.risk_metrics.calculate_rolling_sharpe(window),
                'rolling_sortino': self.risk_metrics.calculate_rolling_sortino(window),
                'rolling_volatility': self.risk_metrics.calculate_rolling_volatility(window),
                'rolling_beta': self.risk_metrics.calculate_rolling_beta(window),
                'rolling_var': self.risk_metrics.calculate_rolling_var(window)
            }
            return rolling_metrics
        except Exception as e:
            print(f"Error calculating rolling metrics: {str(e)}")
            raise

    def _calculate_regime_metrics(self, returns: pd.Series) -> Dict[str, Dict[str, float]]:
        """Calculate metrics under different market regimes"""
        try:
            # Identify the market regimes
            regimes = self._identify_market_regimes(returns)
            regime_metrics = {}

            # Compute the metrics for each regime
            for regime in np.unique(regimes):
                regime_returns = returns[regimes == regime]
                regime_metrics[f'regime_{regime}'] = {
                    'frequency': len(regime_returns) / len(returns),
                    'avg_return': regime_returns.mean(),
                    'volatility': regime_returns.std() * np.sqrt(252),
                    'sharpe': self.risk_metrics.calculate_regime_sharpe(regime_returns),
                    'max_drawdown': self.risk_metrics.calculate_regime_drawdown(regime_returns),
                    'win_rate': len(regime_returns[regime_returns > 0]) / len(regime_returns)
                }

            return regime_metrics

        except Exception as e:
            print(f"Error calculating regime metrics: {str(e)}")
            raise

    def _calculate_recovery_time(self, returns: pd.Series) -> int:
        """Compute the recovery time after a drawdown"""
        if returns is None or len(returns) == 0:
            return 0

        cumulative = (1 + returns).cumprod()
        running_max = cumulative.expanding().max()
        drawdowns = cumulative / running_max - 1

        recovery_periods = 0
        in_drawdown = False

        for i, dd in enumerate(drawdowns):
            if not in_drawdown and dd < 0:
                in_drawdown = True
                start_idx = i
            elif in_drawdown and dd >= 0:
                in_drawdown = False
                recovery_periods = i - start_idx

        return recovery_periods

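A quick check of the drawdown-recovery scan above: a standalone copy of the logic (the free function `recovery_time` is hypothetical) applied to a loss followed by a gain that restores the high-water mark.

```python
import pandas as pd

def recovery_time(returns: pd.Series) -> int:
    """Periods between entering a drawdown and regaining the prior peak."""
    cumulative = (1 + returns).cumprod()
    running_max = cumulative.expanding().max()
    drawdowns = cumulative / running_max - 1

    recovery_periods, in_drawdown, start_idx = 0, False, 0
    for i, dd in enumerate(drawdowns):
        if not in_drawdown and dd < 0:
            in_drawdown, start_idx = True, i
        elif in_drawdown and dd >= 0:
            in_drawdown = False
            recovery_periods = i - start_idx
    return recovery_periods

# Drop 10%, stay flat twice, then +12% recovers the peak: three periods
print(recovery_time(pd.Series([0.0, -0.10, 0.0, 0.0, 0.12])))  # → 3
```

Note that when several drawdowns occur, the scan keeps only the last completed recovery, which matches the method's behavior.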
    def _identify_market_regimes(self, returns: pd.Series, n_regimes: int = 3) -> np.ndarray:
        """Cluster observations into market regimes with a Gaussian mixture model"""
        try:
            # Prepare the features for the GMM; the matrix is already 2D with one
            # row per observation, so no reshape is needed (reshaping to (-1, 1)
            # would quadruple the row count and misalign the labels)
            features = self._prepare_regime_features(returns)
            features = np.nan_to_num(features)

            gmm = GaussianMixture(n_components=n_regimes, random_state=42)
            regimes = gmm.fit_predict(features)

            return regimes
        except Exception as e:
            print(f"Error identifying market regimes: {e}")
            return np.zeros(len(returns))

    def _prepare_regime_features(self, returns: pd.Series) -> np.ndarray:
        """Prepare features for regime detection"""
        try:
            window = 21  # one-month window

            # Compute the rolling features
            mean = returns.rolling(window=window).mean().fillna(0)
            std = returns.rolling(window=window).std().fillna(0)
            skew = returns.rolling(window=window).skew().fillna(0)
            kurt = returns.rolling(window=window).kurt().fillna(0)

            # Combine the features into a 2D matrix
            features = np.column_stack([
                mean.values,
                std.values,
                skew.values,
                kurt.values
            ])

            return features

        except Exception as e:
            print(f"Error preparing regime features: {e}")
            return np.zeros((len(returns), 4))

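The regime-detection pair above (rolling-moment features fed to a Gaussian mixture) can be exercised end to end on synthetic data; a sketch assuming scikit-learn's `GaussianMixture` is available, with an invented calm/volatile return series:

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# A calm regime followed by a volatile regime
returns = pd.Series(np.concatenate([
    rng.normal(0.0005, 0.005, 250),
    rng.normal(-0.001, 0.03, 250),
]))

# Rolling-moment features, one row per observation (as in _prepare_regime_features)
window = 21
features = np.column_stack([
    returns.rolling(window).mean().fillna(0),
    returns.rolling(window).std().fillna(0),
    returns.rolling(window).skew().fillna(0),
    returns.rolling(window).kurt().fillna(0),
])

# fit_predict returns exactly one regime label per observation
gmm = GaussianMixture(n_components=2, random_state=42)
regimes = gmm.fit_predict(np.nan_to_num(features))
print(pd.Series(regimes).value_counts().to_dict())
```

Keeping the feature matrix shaped `(n_observations, 4)` is what lets the labels align with the return index when the caller slices `returns[regimes == regime]`.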
    def _perform_attribution_analysis(self, results: Dict[str, pd.DataFrame]) -> Dict[str, pd.Series]:
        """Perform detailed performance attribution analysis"""
        try:
            returns = results['returns']
            positions = results['positions']

            attribution = {
                # Factor attribution
                'factor_attribution': self._calculate_factor_attribution(returns, positions),

                # Sector attribution
                'sector_attribution': self._calculate_sector_attribution(returns, positions),

                # Style attribution
                'style_attribution': self._calculate_style_attribution(returns, positions),

                # Risk attribution
                'risk_attribution': self._calculate_risk_attribution(returns, positions),

                # Performance decomposition
                'performance_decomposition': self._decompose_performance(returns, positions)
            }

            return attribution

        except Exception as e:
            print(f"Error performing attribution analysis: {str(e)}")
            raise

    def _calculate_factor_attribution(self, returns: pd.Series, positions: pd.DataFrame) -> pd.Series:
        """
        Compute factor-based performance attribution

        Args:
            returns: Portfolio returns
            positions: Portfolio positions
        """
        try:
            # Build the factor returns
            factors = pd.DataFrame(index=returns.index)

            # Market Factor
            factors['MKT'] = returns.fillna(0).mean() - self.risk_free_rate / 252

            # Size Factor (SMB)
            factors['SMB'] = positions.fillna(0).rolling(window=21).mean().std()

            # Value Factor (HML)
            factors['HML'] = pd.Series(0, index=returns.index)  # placeholder

            # Momentum Factor (MOM)
            factors['MOM'] = returns.fillna(0).rolling(window=252).mean()

            # Volatility Factor (VOL)
            factors['VOL'] = returns.fillna(0).rolling(window=63).std()

            # Clean up aberrant values
            factors = factors.replace([np.inf, -np.inf], np.nan)
            factors = factors.fillna(0)
            returns = returns.fillna(0)

            # Regress the returns on the factors
            try:
                model = sm.OLS(returns, factors).fit()
                factor_contribution = pd.Series(
                    model.params * factors.mean(),
                    index=factors.columns
                )
            except Exception:
                factor_contribution = pd.Series(0, index=factors.columns)

            return factor_contribution

        except Exception as e:
            print(f"Error calculating factor attribution: {str(e)}")
            return pd.Series(dtype=float)

    def _calculate_sector_attribution(self, returns: pd.Series, positions: pd.DataFrame) -> pd.DataFrame:
        """Calculate sector-based performance attribution"""
        try:
            # Load the sector mappings
            sector_mapping = self._get_sector_mapping(positions.columns)
            sector_returns = {}

            # Compute the contribution of each sector
            for sector, assets in sector_mapping.items():
                sector_positions = positions[assets].sum(axis=1)
                sector_ret = (returns * sector_positions).sum()
                sector_returns[sector] = {
                    'total_return': sector_ret,
                    'average_weight': sector_positions.mean(),
                    'contribution': sector_ret * sector_positions.mean()
                }

            return pd.DataFrame(sector_returns).T

        except Exception as e:
            print(f"Error calculating sector attribution: {str(e)}")
            raise

    def _calculate_style_attribution(self, returns: pd.Series, positions: pd.DataFrame) -> pd.Series:
        """Calculate investment style attribution (value, growth, momentum, etc.)"""
        try:
            # Define the style factors
            style_factors = {
                'value': self._calculate_value_exposure(positions),
                'growth': self._calculate_growth_exposure(positions),
                'momentum': self._calculate_momentum_exposure(positions),
                'quality': self._calculate_quality_exposure(positions),
                'size': self._calculate_size_exposure(positions)
            }

            # Compute the attribution for each style
            style_attribution = {}
            for style, exposure in style_factors.items():
                style_return = (returns * exposure).sum()
                style_attribution[style] = style_return

            return pd.Series(style_attribution)

        except Exception as e:
            print(f"Error calculating style attribution: {str(e)}")
            raise

    def _calculate_risk_attribution(self, returns: pd.Series, positions: pd.DataFrame) -> pd.Series:
        """Calculate risk-based performance attribution"""
        try:
            # Compute the risk contribution of each position
            vol = returns.std() * np.sqrt(252)
            pos_vol = positions.std() * np.sqrt(252)
            risk_contribution = pos_vol * positions.corr() * vol

            return pd.Series(risk_contribution.sum(), index=positions.columns)

        except Exception as e:
            print(f"Error calculating risk attribution: {str(e)}")
            return pd.Series(dtype=float)

    def _calculate_transaction_costs(self,
                                     trades: np.ndarray,
                                     prices: pd.Series,
                                     cost_model: str,
                                     cost_params: Dict[str, float]) -> float:
        """Calculate transaction costs based on the specified model"""
        if cost_model == "percentage":
            return np.sum(np.abs(trades * prices)) * cost_params['cost_rate']
        elif cost_model == "fixed":
            # Charge the fixed fee for every executed (non-zero) trade, sells included
            return np.sum(trades != 0) * cost_params['fixed_cost']
        else:
            raise ValueError(f"Unknown transaction cost model: {cost_model}")

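The two cost models reduce to simple formulas; a standalone illustration with invented trades, prices, and rates:

```python
import numpy as np
import pandas as pd

trades = np.array([10, -5, 0, 3])             # units bought (+) or sold (-)
prices = pd.Series([100.0, 50.0, 20.0, 10.0])

# Percentage model: cost proportional to the traded notional
# |10*100| + |-5*50| + 0 + |3*10| = 1280, at 10 bps -> 1.28
pct_cost = np.sum(np.abs(trades * prices)) * 0.001

# Fixed model: a flat fee per executed (non-zero) trade, buys and sells alike
# 3 non-zero trades at 1.0 per order -> 3.0
fixed_cost = np.sum(trades != 0) * 1.0

print(pct_cost, fixed_cost)
```

Counting `trades != 0` rather than `trades > 0` is what makes the fixed model charge for sells as well as buys.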
    def _calculate_sharpe_ratio(self, returns: pd.Series) -> float:
        """Calculate Sharpe ratio"""
        try:
            excess_returns = returns - self.risk_free_rate / 252  # convert to a daily rate
            if excess_returns.std() == 0:
                return 0.0
            return np.sqrt(252) * excess_returns.mean() / excess_returns.std()
        except Exception as e:
            print(f"Error calculating Sharpe ratio: {e}")
            return 0.0

    def _calculate_liquidity_score(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, float]:
        """
        Compute the portfolio liquidity score relative to the investor profile

        Args:
            results: Backtest results
            profile: Investor profile

        Returns:
            Dict containing the liquidity metrics
        """
        try:
            # Compute the average daily liquidity
            daily_volume = results.positions * self.current_portfolio_value

            # Compute the liquidity ratio (how many days to liquidate)
            liquidation_days = daily_volume / (results.positions * 0.1)  # assume at most 10% of daily volume

            # Estimate the liquidity cost
            liquidity_cost = daily_volume * 0.0025  # estimated bid-ask spread

            # Assess the fit with the profile's liquidity needs
            required_liquidity = profile.annual_income / 12  # monthly needs

            liquidity_score = {
                'average_days_to_liquidate': liquidation_days.mean().mean(),
                'liquidity_cost_percent': (liquidity_cost.sum() / self.current_portfolio_value),
                'liquidity_coverage_ratio': daily_volume.mean().sum() / required_liquidity,
                'emergency_buffer_ratio': results.positions['cash'].iloc[-1] if 'cash' in results.positions else 0,
                'overall_liquidity_score': self._calculate_overall_liquidity_score(
                    liquidation_days.mean().mean(),
                    required_liquidity,
                    profile.risk_tolerance
                )
            }

            return liquidity_score

        except Exception as e:
            print(f"Error computing the liquidity score: {e}")
            return {
                'average_days_to_liquidate': 0,
                'liquidity_cost_percent': 0,
                'liquidity_coverage_ratio': 0,
                'emergency_buffer_ratio': 0,
                'overall_liquidity_score': 0
            }

    def _calculate_overall_liquidity_score(self,
                                           days_to_liquidate: float,
                                           required_liquidity: float,
                                           risk_tolerance: int) -> float:
        """
        Compute the overall liquidity score

        Args:
            days_to_liquidate: Average number of days needed to liquidate
            required_liquidity: Required liquidity
            risk_tolerance: Risk tolerance of the profile (1-5)

        Returns:
            Score between 0 and 1
        """
        # Adjust the thresholds according to the risk tolerance
        max_acceptable_days = {
            1: 2,    # Very conservative
            2: 5,    # Conservative
            3: 10,   # Moderate
            4: 15,   # Dynamic
            5: 20    # Aggressive
        }[risk_tolerance]

        # Compute the normalized score
        days_score = max(0, 1 - (days_to_liquidate / max_acceptable_days))

        # Add other factors if needed
        # ...

        return days_score

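The normalization above maps days-to-liquidate onto [0, 1] against a risk-tolerance-dependent ceiling; a standalone sketch (the free function `overall_liquidity_score` is hypothetical, reusing the same 1-5 tolerance table):

```python
def overall_liquidity_score(days_to_liquidate: float, risk_tolerance: int) -> float:
    """Score in [0, 1]: 1 = instantly liquid, 0 = at or over the acceptable ceiling."""
    # Ceiling in days, from very conservative (1) to aggressive (5)
    max_acceptable_days = {1: 2, 2: 5, 3: 10, 4: 15, 5: 20}[risk_tolerance]
    return max(0.0, 1 - days_to_liquidate / max_acceptable_days)

# A moderate profile (3) tolerates 10 days: liquidating in 5 days scores 0.5
print(overall_liquidity_score(5, 3))   # → 0.5
print(overall_liquidity_score(25, 5))  # → 0.0 (over the ceiling)
```

The `max(0.0, …)` clamp keeps portfolios slower than the ceiling at a score of 0 instead of going negative.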
    def _run_stress_tests(self, results: Dict[str, pd.DataFrame], stress_periods: Dict[str, Tuple[datetime, datetime]]) -> Dict[str, float]:
        """Run stress tests with proper timezone handling"""
        try:
            returns = results['returns']

            # Normalize the time index
            if returns.index.tz is not None:
                returns.index = returns.index.tz_localize(None)

            stress_results = {}

            for period_name, (start, end) in stress_periods.items():
                try:
                    # Convert and normalize the period dates
                    start = pd.to_datetime(start).tz_localize(None)
                    end = pd.to_datetime(end).tz_localize(None)

                    if start > returns.index[-1] or end < returns.index[0]:
                        print(f"No data for stress period {period_name}")
                        continue

                    start = max(start, returns.index[0])
                    end = min(end, returns.index[-1])

                    period_returns = returns[start:end]

                    if len(period_returns) > 0:
                        stress_results[period_name] = {
                            'total_return': (1 + period_returns).prod() - 1,
                            'max_drawdown': self._calculate_max_drawdown(period_returns),
                            'volatility': period_returns.std() * np.sqrt(252),
                            'worst_day': period_returns.min(),
                            'recovery_time': self._calculate_recovery_time(period_returns)
                        }

                except Exception as e:
                    print(f"Error in stress period {period_name}: {e}")

            return stress_results

        except Exception as e:
            print(f"Error in stress tests: {e}")
            return {}

    def _run_historical_stress_tests(self, returns: pd.Series, stress_periods: Dict[str, Tuple[datetime, datetime]]) -> Dict[str, Dict[str, float]]:
        """Run historical stress tests"""
        historical_results = {}

        # First check whether we have any data
        if returns is None or len(returns) == 0:
            print("Warning: No return data available for stress tests")
            return historical_results

        # Get the available date range
        data_start = returns.index.min()
        data_end = returns.index.max()

        for period_name, (start, end) in stress_periods.items():
            try:
                # Convert the dates and align time zones with the data index
                # (localizing only naive timestamps avoids tz_localize errors)
                start = pd.to_datetime(start)
                end = pd.to_datetime(end)
                if returns.index.tz is not None:
                    if start.tzinfo is None:
                        start = start.tz_localize(returns.index.tz)
                    if end.tzinfo is None:
                        end = end.tz_localize(returns.index.tz)

                # Check whether the period falls within our data range
                if start > data_end or end < data_start:
                    print(f"Warning: Stress period {period_name} ({start} to {end}) outside available data range ({data_start} to {data_end})")
                    continue

                # Adjust the period if needed
                adjusted_start = max(start, data_start)
                adjusted_end = min(end, data_end)

                period_returns = returns.loc[adjusted_start:adjusted_end]

                # Check whether we have enough data
                if len(period_returns) < 5:  # arbitrary minimum of 5 data points
                    print(f"Warning: Insufficient data for stress period {period_name}")
                    continue

                historical_results[period_name] = {
                    'total_return': (1 + period_returns).prod() - 1,
                    'max_drawdown': self.risk_metrics.calculate_max_drawdown(period_returns),
                    'volatility': period_returns.std() * np.sqrt(252),
                    'var_95': np.percentile(period_returns, 5),
                    'worst_day': period_returns.min(),
                    'recovery_days': self._calculate_recovery_time(period_returns)
                }

                # Check the validity of the results
                for key, value in historical_results[period_name].items():
                    if np.isnan(value) or np.isinf(value):
                        historical_results[period_name][key] = 0
                        print(f"Warning: Invalid {key} value for period {period_name}")

            except Exception as e:
                print(f"Error in stress period {period_name}: {str(e)}")
                historical_results[period_name] = {
                    'total_return': 0,
                    'max_drawdown': 0,
                    'volatility': 0,
                    'var_95': 0,
                    'worst_day': 0,
                    'recovery_days': 0
                }

        if not historical_results:
            print("Warning: No valid stress test results generated")

        return historical_results

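The historical stress-test routine above reduces to slicing the return series to the overlap between a stress window and the available data, then computing summary statistics on that slice. A minimal standalone sketch of that core step (the function name `stress_window_stats` and the toy data are illustrative, not part of the module):

```python
import numpy as np
import pandas as pd

def stress_window_stats(returns: pd.Series, start, end) -> dict:
    """Clip [start, end] to the data range, slice, and summarize."""
    start = max(pd.to_datetime(start), returns.index.min())
    end = min(pd.to_datetime(end), returns.index.max())
    window = returns.loc[start:end]
    return {
        'total_return': (1 + window).prod() - 1,  # compounded return over the window
        'volatility': window.std() * np.sqrt(252),  # annualized
        'var_95': np.percentile(window, 5),
        'worst_day': window.min(),
    }

idx = pd.date_range('2020-02-01', periods=5, freq='D')
rets = pd.Series([0.01, -0.10, -0.05, 0.02, 0.03], index=idx)
# Requested window starts before the data; it is clipped to 2020-02-01..2020-02-03
stats = stress_window_stats(rets, '2020-01-15', '2020-02-03')
```

The clipping step is what lets a canonical stress window (e.g. a 2008 date range) be applied safely to a shorter backtest sample.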
    def _run_correlation_stress_tests(self, positions: pd.DataFrame) -> Dict[str, Dict[str, float]]:
        """
        Run correlation stress tests on portfolio positions

        Args:
            positions: Portfolio positions over time

        Returns:
            Dictionary of correlation stress test results
        """
        try:
            # Compute the base correlation matrix
            base_corr = positions.corr()

            # Different correlation stress scenarios
            stress_scenarios = {
                'high_correlation': {
                    'adjustment': 0.3,   # raise correlations by 0.3
                    'floor': 0.3,        # minimum correlation
                    'cap': 0.95          # maximum correlation
                },
                'correlation_breakdown': {
                    'adjustment': -0.5,  # lower correlations by 0.5
                    'floor': -0.8,       # minimum correlation
                    'cap': 0.2           # maximum correlation
                }
            }

            results = {}

            for scenario, params in stress_scenarios.items():
                # Adjust the correlation matrix (work on a NumPy array so that
                # np.fill_diagonal can write in place)
                stressed_corr = np.clip(base_corr.values + params['adjustment'],
                                        params['floor'], params['cap'])
                np.fill_diagonal(stressed_corr, 1.0)

                # Compute portfolio risk under the stressed correlations,
                # scaling the correlation matrix into a covariance matrix
                weights = positions.iloc[-1].values  # use the latest weights
                vols = self.returns.std().values * np.sqrt(252)
                stressed_cov = stressed_corr * np.outer(vols, vols)
                portfolio_risk = np.sqrt(weights @ stressed_cov @ weights)

                results[scenario] = {
                    'portfolio_risk': portfolio_risk,
                    'diversification_score': 1 - portfolio_risk / (weights * vols).sum(),
                    'max_correlation': stressed_corr.max(),
                    'min_correlation': stressed_corr.min()
                }

            return results

        except Exception as e:
            print(f"Error in correlation stress tests: {str(e)}")
            return {
                'high_correlation': {
                    'portfolio_risk': 0.0,
                    'diversification_score': 0.0,
                    'max_correlation': 0.0,
                    'min_correlation': 0.0
                }
            }

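The correlation stress above boils down to an additive shift of the correlation matrix, clipped to bounds, with the diagonal restored, and portfolio volatility recomputed from the implied covariance. A self-contained sketch of that mechanism (the toy weights and volatilities are made up for illustration):

```python
import numpy as np

def stressed_portfolio_vol(corr, vols, weights, adjustment, floor, cap):
    """Shift correlations additively, clip to [floor, cap], restore the
    unit diagonal, and return portfolio volatility under the stressed
    covariance (corr scaled by the outer product of volatilities)."""
    stressed = np.clip(corr + adjustment, floor, cap)
    np.fill_diagonal(stressed, 1.0)
    cov = stressed * np.outer(vols, vols)
    return float(np.sqrt(weights @ cov @ weights))

corr = np.array([[1.0, 0.2], [0.2, 1.0]])
vols = np.array([0.10, 0.20])      # annualized volatilities
weights = np.array([0.5, 0.5])

base = stressed_portfolio_vol(corr, vols, weights, 0.0, -1.0, 1.0)
crisis = stressed_portfolio_vol(corr, vols, weights, 0.3, 0.3, 0.95)
```

Raising correlations increases portfolio volatility, which is why the 'high_correlation' scenario is the relevant crisis case: diversification benefits shrink exactly when they are needed most.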
    def _run_hypothetical_stress_tests(self, returns: pd.Series) -> Dict[str, Dict[str, float]]:
        """Run hypothetical stress scenarios"""
        scenarios = {
            'market_crash': {'shock': -0.20, 'duration': 22},  # -20% over one month
            'volatility_spike': {'vol_multiplier': 3, 'duration': 10},
            'correlation_breakdown': {'correlation_shock': 0.8, 'duration': 15},
            'liquidity_crisis': {'bid_ask_multiplier': 5, 'duration': 30}
        }

        results = {}
        for scenario_name, params in scenarios.items():
            scenario_returns = self._simulate_scenario(returns, params)
            results[scenario_name] = {
                'max_loss': scenario_returns.min(),
                'var_95': np.percentile(scenario_returns, 5),
                'expected_shortfall': scenario_returns[scenario_returns < np.percentile(scenario_returns, 5)].mean(),
                'recovery_time': self._estimate_recovery_time(scenario_returns)
            }

        return results

    def _run_monte_carlo_stress_tests(self, returns: pd.Series, n_simulations: int = 1000) -> Dict[str, float]:
        """Run Monte Carlo stress tests"""
        try:
            # Fit a Gaussian mixture model to capture fat tails
            gmm = GaussianMixture(n_components=2, random_state=42)
            gmm.fit(returns.values.reshape(-1, 1))

            # Generate scenarios
            simulated_returns = []
            for _ in range(n_simulations):
                scenario = gmm.sample(n_samples=len(returns))[0].flatten()
                simulated_returns.append(scenario)

            simulated_returns = np.array(simulated_returns)

            # Compute the statistics
            results = {
                'var_99': np.percentile(simulated_returns, 1),
                'es_99': np.mean(simulated_returns[simulated_returns < np.percentile(simulated_returns, 1)]),
                'max_drawdown_99': np.percentile([self._calculate_max_drawdown(sim) for sim in simulated_returns], 99),
                'worst_case_loss': simulated_returns.min()
            }

            return results

        except Exception as e:
            print(f"Error running Monte Carlo stress tests: {str(e)}")
            raise

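The Monte Carlo routine relies on sampling from a fitted two-component Gaussian mixture. The sampling step itself can be sketched in plain NumPy, with fixed, hand-picked mixture parameters standing in for what `GaussianMixture.fit` would estimate (a calm regime and a fat-tailed stress regime; the numbers are purely illustrative):

```python
import numpy as np

def sample_mixture(n, weights, means, stds, rng):
    """Draw n returns from a Gaussian mixture: pick a regime for each
    observation, then draw from that regime's normal distribution."""
    components = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[components], np.asarray(stds)[components])

rng = np.random.default_rng(42)
# 95% calm regime, 5% stress regime with much higher dispersion
sims = np.array([sample_mixture(252, [0.95, 0.05], [0.0005, -0.002],
                                [0.01, 0.04], rng) for _ in range(1000)])
var_99 = np.percentile(sims, 1)
es_99 = sims[sims < var_99].mean()
```

The mixture's rare high-dispersion component is what produces heavier tails than a single normal, so the simulated 1% VaR and expected shortfall are more pessimistic than a Gaussian fit would give.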
    def _validate_returns(self, returns: pd.Series) -> pd.Series:
        """Validate and clean return data"""
        if returns is None or len(returns) == 0:
            return pd.Series(dtype=float)

        # Convert to a Series if needed
        if isinstance(returns, np.ndarray):
            returns = pd.Series(returns)

        # Clean invalid values
        returns = returns.replace([np.inf, -np.inf], np.nan).dropna()

        return returns

    def generate_report(self, results: BacktestResult, save_path: Optional[str] = None) -> Dict[str, Any]:
        """Generate comprehensive backtest report"""
        try:
            report = {
                'summary': self._generate_summary_metrics(results),
                'detailed_analysis': self._generate_detailed_analysis(results),
                'risk_analysis': self._generate_risk_analysis(results),
                'attribution': self._generate_attribution_analysis(results),
                'stress_tests': self._generate_stress_test_analysis(results)
            }

            # Generate the visualizations
            figures = self._generate_report_figures(results)

            if save_path:
                self._save_report(report, figures, save_path)

            return report

        except Exception as e:
            print(f"Error generating report: {str(e)}")
            raise

    def _generate_profile_performance_analysis(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, Any]:
        """Generate profile-specific performance analysis"""
        return self._calculate_profile_specific_metrics(results, profile)

    def _generate_profile_risk_analysis(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, float]:
        """Generate profile-specific risk analysis"""
        return {
            'risk_tolerance_alignment': self._calculate_risk_alignment(results.returns, profile),
            'risk_adjusted_returns': results.metrics.get('sharpe_ratio', 0)
        }

    def _generate_goal_tracking_analysis(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, float]:
        """Generate goal tracking analysis"""
        try:
            goal_tracking = {}
            current_value = results.positions.sum().sum()
            initial_value = self.initial_capital

            for goal in profile.investment_goals:
                if isinstance(goal, str):
                    # If the goal is just a string
                    progress = (current_value / initial_value) - 1
                    goal_tracking[goal] = progress
                elif isinstance(goal, dict):
                    # If the goal is a dictionary with details
                    target = goal.get('target_value', initial_value * 1.5)
                    progress = current_value / target
                    goal_tracking[goal['name']] = progress

            return goal_tracking

        except Exception as e:
            print(f"Error in goal tracking analysis: {e}")
            return {}

    def _generate_recommendations(self, results: BacktestResult, profile: PersonalProfile) -> List[str]:
        """
        Generate recommendations based on the backtest results and the investor profile

        Args:
            results: Backtest results
            profile: Investor profile

        Returns:
            List of recommendations
        """
        recommendations = []

        # 1. Risk alignment check
        realized_vol = results.returns.std() * np.sqrt(252)
        target_vol = {
            1: 0.05, 2: 0.08, 3: 0.12, 4: 0.18, 5: 0.25
        }[profile.risk_tolerance]

        if realized_vol > target_vol * 1.2:
            recommendations.append("The portfolio carries more risk than your profile allows. Consider reducing exposure to risky assets.")
        elif realized_vol < target_vol * 0.8:
            recommendations.append("The portfolio is more conservative than your profile. A measured increase in risk could improve returns.")

        # 2. Drawdown check
        max_dd = results.risk_metrics.get('max_drawdown', 0)
        max_acceptable_dd = -0.05 * profile.risk_tolerance

        if abs(max_dd) > abs(max_acceptable_dd):
            recommendations.append("Maximum losses exceed the recommended threshold. Consider additional protection strategies.")

        # 3. Diversification check
        current_weights = results.positions.iloc[-1]
        if current_weights.max() > 0.3:
            recommendations.append("Portfolio concentration is high. Better diversification is recommended.")

        # 4. Performance vs. objectives check
        total_return = results.metrics.get('total_return', 0)
        required_return = 0.07 + 0.02 * profile.risk_tolerance

        if total_return < required_return:
            recommendations.append("Performance is below target. A review of the investment strategy may be needed.")

        # 5. Liquidity check
        if hasattr(profile, 'liquidity_needs') and profile.liquidity_needs > 0:
            liquidity_score = self._calculate_liquidity_score(results, profile)
            if liquidity_score['overall_liquidity_score'] < 0.7:
                recommendations.append("The liquidity level may be insufficient for your needs. Consider more liquid assets.")

        # 6. Recommendations based on the investment horizon
        if profile.investment_horizon > 10:
            recommendations.append("Your long-term horizon allows for greater exposure to growth assets.")
        elif profile.investment_horizon < 5:
            recommendations.append("Given your short-term horizon, favor capital preservation.")

        return recommendations

    def _get_target_allocation(self, profile: PersonalProfile) -> Dict[str, float]:
        """Get target asset allocation based on profile"""
        try:
            # Base allocation according to the risk profile
            risk_level = profile.risk_tolerance
            if risk_level == 1:    # Very conservative
                return {'bonds': 0.70, 'stocks': 0.20, 'cash': 0.10}
            elif risk_level == 2:  # Conservative
                return {'bonds': 0.60, 'stocks': 0.30, 'cash': 0.10}
            elif risk_level == 3:  # Moderate
                return {'bonds': 0.40, 'stocks': 0.50, 'cash': 0.10}
            elif risk_level == 4:  # Dynamic
                return {'bonds': 0.20, 'stocks': 0.70, 'cash': 0.10}
            else:                  # Aggressive
                return {'bonds': 0.10, 'stocks': 0.80, 'cash': 0.10}
        except Exception as e:
            print(f"Error getting target allocation: {e}")
            return {'stocks': 1.0}  # Default allocation

    def _calculate_allocation_drift(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, float]:
        """Calculate drift from target allocation"""
        try:
            target = self._get_target_allocation(profile)
            current = results.positions.iloc[-1].to_dict()

            return {
                asset: current.get(asset, 0) - target.get(asset, 0)
                for asset in set(list(target.keys()) + list(current.keys()))
            }
        except Exception as e:
            print(f"Error calculating allocation drift: {e}")
            return {}

    def _get_rebalancing_frequency(self, profile: PersonalProfile) -> str:
        """Get appropriate rebalancing frequency"""
        try:
            if profile.risk_tolerance <= 2:
                return "quarterly"
            elif profile.risk_tolerance <= 4:
                return "monthly"
            else:
                return "weekly"
        except Exception as e:
            print(f"Error getting rebalancing frequency: {e}")
            return "monthly"

    def _get_rebalancing_triggers(self, profile: PersonalProfile) -> Dict[str, float]:
        """Get rebalancing trigger thresholds"""
        try:
            base_threshold = 0.05  # 5% base threshold
            risk_adjustment = (profile.risk_tolerance - 3) * 0.01  # risk-based adjustment

            return {
                'asset_drift': base_threshold + risk_adjustment,
                'volatility_change': 0.20,
                'correlation_change': 0.30
            }
        except Exception as e:
            print(f"Error getting rebalancing triggers: {e}")
            return {'asset_drift': 0.05}

    def _calculate_stop_loss_levels(self, profile: PersonalProfile) -> Dict[str, float]:
        """Calculate stop loss levels"""
        try:
            base_stop = 0.15  # 15% base stop
            risk_adjustment = (profile.risk_tolerance - 3) * 0.05

            return {
                'portfolio': base_stop + risk_adjustment,
                'individual_positions': (base_stop + risk_adjustment) * 1.5
            }
        except Exception as e:
            print(f"Error calculating stop loss levels: {e}")
            return {'portfolio': 0.15}

    def _get_hedging_strategy(self, profile: PersonalProfile) -> Dict[str, Any]:
        """Define hedging strategy"""
        try:
            if profile.risk_tolerance <= 2:
                return {
                    'use_hedging': True,
                    'hedge_ratio': 0.3,
                    'instruments': ['options', 'inverse_etfs']
                }
            elif profile.risk_tolerance <= 4:
                return {
                    'use_hedging': True,
                    'hedge_ratio': 0.2,
                    'instruments': ['options']
                }
            else:
                return {
                    'use_hedging': False,
                    'hedge_ratio': 0.0,
                    'instruments': []
                }
        except Exception as e:
            print(f"Error getting hedging strategy: {e}")
            return {'use_hedging': False}

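The stop-loss rule above is a simple linear function of risk tolerance: a 15% base, shifted by 5 points per risk level away from the midpoint of 3, with position-level stops 1.5 times wider than the portfolio stop. A quick standalone check of that arithmetic:

```python
def stop_loss_levels(risk_tolerance: int) -> dict:
    """Portfolio stop: 15% base +/- 5% per risk level from the midpoint (3);
    individual positions get 1.5x the portfolio stop."""
    stop = 0.15 + (risk_tolerance - 3) * 0.05
    return {'portfolio': stop, 'individual_positions': stop * 1.5}

conservative = stop_loss_levels(1)   # tightest stop: 5% of portfolio value
aggressive = stop_loss_levels(5)     # widest stop: 25% of portfolio value
```

The design choice is that more risk-tolerant profiles tolerate deeper losses before a forced exit, while conservative profiles are stopped out early.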
    def _get_factor_returns(self) -> pd.DataFrame:
        """Fetch factor returns for the attribution analysis"""
        try:
            # Build a simple factor base when no external data is available
            dates = self.returns.index
            n_periods = len(dates)

            factors = pd.DataFrame(index=dates)
            factors['Market'] = self.returns.mean(axis=1)
            factors['Size'] = np.random.normal(0, 0.01, n_periods)  # simple simulation
            factors['Value'] = np.random.normal(0, 0.01, n_periods)
            factors['Momentum'] = np.random.normal(0, 0.01, n_periods)

            return factors

        except Exception as e:
            print(f"Error computing factor returns: {e}")
            return pd.DataFrame(index=self.returns.index)

    def _get_risk_management_rules(self, profile: PersonalProfile) -> Dict[str, Any]:
        """Get risk management rules based on the profile"""
        try:
            return {
                'stop_loss': self._calculate_stop_loss_levels(profile),
                'hedging_strategy': self._get_hedging_strategy(profile),
                'rebalancing_triggers': self._get_rebalancing_triggers(profile),
                'risk_limits': {
                    'max_drawdown': -0.05 * profile.risk_tolerance,
                    'max_volatility': 0.05 * profile.risk_tolerance,
                    'max_var': -0.02 * profile.risk_tolerance,
                    'max_position_size': min(0.2 + 0.05 * profile.risk_tolerance, 0.4)
                },
                'diversification_rules': {
                    'min_assets': max(5, 10 - profile.risk_tolerance),
                    'max_sector_exposure': min(0.3 + 0.05 * profile.risk_tolerance, 0.5),
                    'min_asset_classes': max(2, 4 - profile.risk_tolerance)
                }
            }
        except Exception as e:
            print(f"Error getting risk management rules: {e}")
            # Return default rules on error
            return {
                'stop_loss': {'portfolio': 0.15, 'individual_positions': 0.25},
                'risk_limits': {
                    'max_drawdown': -0.15,
                    'max_volatility': 0.20,
                    'max_var': -0.10,
                    'max_position_size': 0.30
                }
            }

    def _calculate_diversification_metrics(self, results: BacktestResult) -> Dict[str, float]:
        """Calculate diversification metrics"""
        try:
            positions = results.positions.iloc[-1]
            returns = results.returns

            return {
                'herfindahl_index': np.sum(positions ** 2),
                'effective_n': 1 / np.sum(positions ** 2),
                'correlation_score': 1 - abs(returns.corr().mean().mean())
            }
        except Exception as e:
            print(f"Error calculating diversification metrics: {e}")
            return {}

    def _generate_summary_metrics(self, results: BacktestResult) -> Dict[str, Any]:
        """Generate summary metrics for the report"""
        return {
            'performance_summary': {
                'total_return': f"{results.metrics['total_return']:.2%}",
                'annualized_return': f"{results.metrics['cagr']:.2%}",
                'sharpe_ratio': f"{results.metrics['sharpe_ratio']:.2f}",
                'max_drawdown': f"{results.metrics['max_drawdown']:.2%}",
                'volatility': f"{results.metrics['volatility']:.2%}"
            },
            'risk_summary': {
                'var_95': f"{results.risk_metrics['var_historical_95']:.2%}",
                'beta': f"{results.risk_metrics['beta']:.2f}",
                'correlation': f"{results.risk_metrics['r_squared']:.2f}",
                'tracking_error': f"{results.risk_metrics['tracking_error']:.2%}"
            },
            'trading_summary': {
                'turnover': f"{results.turnover.mean():.2%}",
                'transaction_costs': f"{results.transaction_costs.sum():.2%}",
                'win_rate': f"{results.metrics['win_rate']:.2%}"
            }
        }

    def _simulate_scenario(self, returns: pd.Series, scenario_params: Dict) -> pd.Series:
        """Simulate a scenario based on the given parameters"""
        try:
            simulated_returns = returns.copy()

            if 'shock' in scenario_params:
                shock_value = scenario_params['shock']
                duration = scenario_params.get('duration', 22)  # default: one month
                # Spread the total shock additively over the stress window so that
                # e.g. a -20% crash over 22 days lowers each day's return by 20%/22
                simulated_returns.iloc[-duration:] += shock_value / duration

            elif 'vol_multiplier' in scenario_params:
                vol_mult = scenario_params['vol_multiplier']
                simulated_returns *= vol_mult

            return simulated_returns

        except Exception as e:
            print(f"Error in scenario simulation: {e}")
            return returns

    def _generate_report_figures(self, results: BacktestResult) -> Dict[str, plt.Figure]:
        """Generate visualization figures for the report"""
        figures = {}

        # Cumulative performance
        fig_perf = plt.figure(figsize=(12, 6))
        cum_returns = (1 + results.returns).cumprod()
        plt.plot(cum_returns.index, cum_returns.values)
        plt.title('Cumulative Performance')
        figures['performance'] = fig_perf

        # Drawdowns
        fig_dd = plt.figure(figsize=(12, 6))
        plt.plot(results.drawdowns.index, results.drawdowns.values)
        plt.title('Drawdown Analysis')
        figures['drawdowns'] = fig_dd

        # Allocation over time
        fig_alloc = plt.figure(figsize=(12, 6))
        results.positions.plot(kind='area', stacked=True)
        plt.title('Portfolio Allocation Over Time')
        figures['allocation'] = fig_alloc

        # Rolling risk metrics
        fig_risk = plt.figure(figsize=(12, 6))
        risk_metrics = pd.DataFrame({
            'Rolling Volatility': results.risk_metrics['rolling_volatility'],
            'Rolling VaR': results.risk_metrics['rolling_var'],
            'Rolling Beta': results.risk_metrics['rolling_beta']
        })
        risk_metrics.plot()
        plt.title('Rolling Risk Metrics')
        figures['risk'] = fig_risk

        return figures

    def _save_report(self, report: Dict[str, Any], figures: Dict[str, plt.Figure], save_path: str):
        """Save report and figures to files"""
        try:
            # Create the folder if needed
            os.makedirs(save_path, exist_ok=True)

            # Save the report as JSON
            with open(os.path.join(save_path, 'report.json'), 'w') as f:
                json.dump(report, f, indent=4)

            # Save the figures
            for name, fig in figures.items():
                fig.savefig(os.path.join(save_path, f'{name}.png'))

        except Exception as e:
            print(f"Error saving report: {str(e)}")
            raise

    def run_profile_based_backtest(self,
                                   profile: PersonalProfile,
                                   data: pd.DataFrame,
                                   alternative_data: Optional[pd.DataFrame] = None) -> BacktestResult:
        """
        Run backtest adapted to investor profile
        """
        try:
            # Create the configuration based on the profile
            config = self._create_profile_based_config(profile)

            # Create the constraints based on the profile
            constraints = self._create_profile_based_constraints(profile, len(data.columns))

            # Run the backtest
            results = self.run_backtest(config, data, constraints, alternative_data)

            # Add the profile-specific metrics
            results.metrics.update(self._calculate_profile_specific_metrics(results, profile))

            return results

        except Exception as e:
            print(f"Error in profile-based backtest: {str(e)}")
            raise

    def _create_profile_based_config(self, profile: PersonalProfile) -> BacktestConfig:
        """Create backtest configuration based on investor profile"""
        rebalance_freq = {
            1: "quarterly",  # Very low risk
            2: "monthly",    # Low risk
            3: "monthly",    # Medium risk
            4: "weekly",     # High risk
            5: "weekly"      # Very high risk
        }

        return BacktestConfig(
            start_date=datetime.now() - timedelta(days=365*2),
            end_date=datetime.now(),
            initial_capital=profile.total_assets - profile.total_debt,
            rebalance_frequency=rebalance_freq[profile.risk_tolerance],
            transaction_cost_model="percentage",
            transaction_cost_params={'cost_rate': 0.001},
            include_alternative_assets=profile.risk_tolerance > 2
        )

    def _create_profile_based_constraints(self,
                                          profile: PersonalProfile,
                                          n_assets: int) -> OptimizationConstraints:
        """Create optimization constraints based on investor profile"""
        # Base constraints according to the risk level
        risk_based_constraints = {
            1: {'max_weight': 0.15, 'min_bond_alloc': 0.60},
            2: {'max_weight': 0.20, 'min_bond_alloc': 0.40},
            3: {'max_weight': 0.25, 'min_bond_alloc': 0.20},
            4: {'max_weight': 0.30, 'min_bond_alloc': 0.10},
            5: {'max_weight': 0.35, 'min_bond_alloc': 0.00}
        }

        constraints = risk_based_constraints[profile.risk_tolerance]

        return OptimizationConstraints(
            min_weights=np.zeros(n_assets),
            max_weights=np.ones(n_assets) * constraints['max_weight'],
            asset_classes={
                'bonds': list(range(n_assets // 3)),
                'stocks': list(range(n_assets // 3, n_assets))
            },
            min_class_weights={'bonds': constraints['min_bond_alloc']},
            max_class_weights={'stocks': 1 - constraints['min_bond_alloc']},
            max_turnover=0.2 if profile.risk_tolerance <= 3 else 0.3
        )

    def _calculate_profile_specific_metrics(self,
                                            results: BacktestResult,
                                            profile: PersonalProfile) -> Dict[str, float]:
        """Calculate additional metrics specific to investor profile"""
        returns = results.returns

        # Profile-specific metrics
        profile_metrics = {
            'risk_tolerance_alignment': self._calculate_risk_alignment(returns, profile),
            'income_generation': self._calculate_income_metrics(returns, profile),
            'goal_achievement': self._calculate_goal_progress(results, profile),
            'liquidity_adequacy': self._calculate_liquidity_score(results, profile)
        }

        return profile_metrics

    def _calculate_risk_alignment(self, returns: pd.Series, profile: PersonalProfile) -> float:
        """Compute the alignment between realized risk and the risk profile"""
        realized_vol = returns.std() * np.sqrt(252)
        target_vol = {
            1: 0.05,  # Very low risk
            2: 0.08,  # Low risk
            3: 0.12,  # Medium risk
            4: 0.18,  # High risk
            5: 0.25   # Very high risk
        }[profile.risk_tolerance]

        return 1 - min(abs(realized_vol - target_vol) / target_vol, 1)

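The alignment score above maps the relative gap between realized and target volatility into [0, 1], with 1 meaning a perfect match and 0 meaning the gap is at least as large as the target itself. A standalone illustration of the formula:

```python
def risk_alignment(realized_vol: float, target_vol: float) -> float:
    """1 when realized volatility matches the target exactly,
    falling linearly to 0 once the relative gap reaches 100%."""
    return 1 - min(abs(realized_vol - target_vol) / target_vol, 1)

perfect = risk_alignment(0.12, 0.12)       # exact match -> 1.0
moderate_gap = risk_alignment(0.10, 0.12)  # 1 - 0.02/0.12, about 0.833
huge_gap = risk_alignment(0.60, 0.12)      # gap > 100% of target -> 0.0
```

The `min(..., 1)` cap keeps the score bounded below by 0, so a wildly mismatched portfolio cannot produce a negative alignment.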
    def _calculate_income_metrics(self, returns: pd.Series, profile: PersonalProfile) -> Dict[str, float]:
        """Compute income-related portfolio metrics"""
        if not hasattr(self, 'current_portfolio_value'):
            self.current_portfolio_value = profile.total_assets - profile.total_debt

        annual_return = (1 + returns).prod() ** (252 / len(returns)) - 1

        return {
            'income_to_need_ratio': (self.current_portfolio_value * annual_return) / profile.annual_income,
            'sustainable_withdrawal_rate': min(annual_return * 0.7, 0.04),
            'income_stability': self._calculate_income_stability(returns)
        }

    def _calculate_goal_progress(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, float]:
        """Assess progress toward investment goals"""
        try:
            current_value = results.positions.iloc[-1].sum()
            initial_value = results.positions.iloc[0].sum()

            if initial_value == 0:
                initial_value = self.current_portfolio_value  # use the initial portfolio value

            progress = {}
            for goal in profile.investment_goals:
                goal_progress = {
                    'current_value': current_value,
                    'progress_ratio': (current_value / initial_value - 1) if initial_value != 0 else 0,
                    'time_to_goal': self._estimate_time_to_goal(
                        current_value,
                        target_value=initial_value * 1.5,  # example target
                        returns=results.returns
                    )
                }
                progress[goal] = goal_progress

            return progress

        except Exception as e:
            print(f"Error calculating goal progress: {str(e)}")
            return {}

    def _calculate_income_stability(self, returns: pd.Series) -> float:
        """Compute the stability of the generated income"""
        monthly_returns = returns.resample('M').sum()
        return 1 - monthly_returns.std() / abs(monthly_returns.mean())

    def _calculate_drawdowns(self, returns: pd.Series) -> pd.Series:
        """Calculate drawdown series"""
        if isinstance(returns, np.ndarray):
            returns = pd.Series(returns)
        cumulative_returns = (1 + returns).cumprod()
        running_max = cumulative_returns.expanding().max()
        drawdowns = cumulative_returns / running_max - 1
        return drawdowns

    def _calculate_max_drawdown(self, returns: pd.Series) -> float:
        """Calculate maximum drawdown"""
        if isinstance(returns, np.ndarray):
            returns = pd.Series(returns)
        drawdowns = self._calculate_drawdowns(returns)
        return drawdowns.min()

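The drawdown series above is cumulative wealth divided by its running peak, minus one; the maximum drawdown is the minimum of that series. A self-contained example with hand-checkable numbers:

```python
import numpy as np
import pandas as pd

def drawdowns(returns: pd.Series) -> pd.Series:
    """Drawdown at each date: cumulative wealth / running peak - 1."""
    wealth = (1 + returns).cumprod()
    return wealth / wealth.expanding().max() - 1

rets = pd.Series([0.10, -0.50, 0.20])
# wealth path: 1.10, 0.55, 0.66; the peak stays at 1.10 throughout,
# so the drawdowns are 0, -50%, -40%
dd = drawdowns(rets)
max_dd = dd.min()
```

Note that the final drawdown (-40%) is smaller in magnitude than the worst one (-50%): a partial recovery narrows the drawdown without resetting the peak.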
    def _estimate_time_to_goal(self, current_value: float, target_value: float, returns: pd.Series) -> float:
        """Estimate the time (in years) needed to reach a goal"""
        annual_return = (1 + returns).prod() ** (252 / len(returns)) - 1
        if annual_return <= 0:
            return float('inf')
        return np.log(target_value / current_value) / np.log(1 + annual_return)

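`_estimate_time_to_goal` inverts compound growth: with annual return r, the horizon to grow from V to T solves V(1+r)^t = T, giving t = ln(T/V)/ln(1+r). A standalone check of the formula (the 7% rate is purely illustrative):

```python
import math

def years_to_goal(current: float, target: float, annual_return: float) -> float:
    """Solve current * (1 + r)**t = target for t; unreachable if r <= 0."""
    if annual_return <= 0:
        return math.inf
    return math.log(target / current) / math.log(1 + annual_return)

t = years_to_goal(100_000, 200_000, 0.07)  # doubling at 7%: roughly 10.24 years
```

This also recovers the familiar "rule of 72" intuition: doubling time at 7% is close to 72/7 ≈ 10.3 years.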
    def generate_profile_based_report(self,
                                      results: BacktestResult,
                                      profile: PersonalProfile) -> Dict[str, Any]:
        """Generate comprehensive report including profile-specific analysis"""
        try:
            # Compute the report elements
            goal_tracking = {}
            for goal in profile.investment_goals:
                goal_name = goal if isinstance(goal, str) else goal.get('name')
                progress = self._calculate_goal_progress(results)
                goal_tracking[goal_name] = progress

            report = {
                'profile_summary': {
                    'risk_profile': f"Level {profile.risk_tolerance}/5",
                    'horizon': f"{profile.investment_horizon} years",
                    'knowledge': f"Level {profile.financial_knowledge}/5",
                    'stability': f"Level {profile.income_stability}/5"
                },
                'investment_strategy': {
                    'current_allocation': results.positions.iloc[-1].to_dict(),
                    'rebalancing_frequency': self._get_rebalancing_frequency(profile),
                    'risk_management': self._get_risk_management_rules(profile)
                },
                'goal_tracking': goal_tracking,
                'risk_analysis': {
                    'realized_volatility': results.risk_metrics.get('volatility', 0),
                    'max_drawdown': results.risk_metrics.get('max_drawdown', 0),
                    'var_95': results.risk_metrics.get('var_95', 0)
                },
                'recommendations': self._generate_recommendations(results, profile)
            }

            return report

        except Exception as e:
            print(f"Error generating profile-based report: {str(e)}")
            raise

    def _calculate_goal_progress(self, results: BacktestResult) -> float:
        """Calculate cumulative progress towards the investment goals."""
        if not hasattr(results, 'returns'):
            return 0.0

        cumulative_return = (1 + results.returns).prod() - 1
        return cumulative_return

    def _generate_profile_summary(self, profile: PersonalProfile) -> pd.DataFrame:
        """Generate summary of investor profile"""
        # Build a separate series for each section
        risk_profile = pd.Series({
            'Niveau de tolérance': profile.risk_tolerance,
            'Horizon d\'investissement': profile.investment_horizon,
            'Connaissance financière': profile.financial_knowledge
        })

        financial_situation = pd.Series({
            'Revenu annuel': profile.annual_income,
            'Actifs totaux': profile.total_assets,
            'Dette totale': profile.total_debt,
            'Valeur nette': profile.total_assets - profile.total_debt
        })

        # Combine the sections into a single DataFrame
        summary_df = pd.DataFrame({
            'Profil de Risque': risk_profile,
            'Situation Financière': financial_situation
        })

        # Append the formatted investment goals
        if hasattr(profile, 'investment_goals'):
            summary_df['Objectifs'] = pd.Series(profile.investment_goals)

        # Append the constraints when present
        if hasattr(profile, 'constraints'):
            summary_df['Contraintes'] = pd.Series(profile.constraints)

        return summary_df

    def _generate_strategy_summary(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, Any]:
        """Generate summary of investment strategy"""
        return {
            'asset_allocation': {
                'target_allocation': self._get_target_allocation(profile),
                'current_allocation': results.positions.iloc[-1].to_dict(),
                'allocation_drift': self._calculate_allocation_drift(results, profile)
            },
            'rebalancing_strategy': {
                'frequency': self._get_rebalancing_frequency(profile),
                'triggers': self._get_rebalancing_triggers(profile),
                'last_rebalance': self.rebalance_dates[-1].strftime('%Y-%m-%d')
            },
            'risk_management': {
                'stop_loss_levels': self._calculate_stop_loss_levels(profile),
                'hedging_strategy': self._get_hedging_strategy(profile),
                'diversification_metrics': self._calculate_diversification_metrics(results)
            }
        }

    def _generate_profile_figures(self, results: BacktestResult, profile: PersonalProfile) -> Dict[str, plt.Figure]:
        """Generate profile-specific visualizations"""
        figures = {}

        # Goal progress tracking
        fig_goals = plt.figure(figsize=(12, 6))
        self._plot_goal_progress(results, profile, fig_goals)
        figures['goal_progress'] = fig_goals

        # Risk profile alignment
        fig_risk = plt.figure(figsize=(12, 6))
        self._plot_risk_alignment(results, profile, fig_risk)
        figures['risk_alignment'] = fig_risk

        # Income generation
        fig_income = plt.figure(figsize=(12, 6))
        self._plot_income_analysis(results, profile, fig_income)
        figures['income_analysis'] = fig_income

        # Portfolio evolution vs objectives
        fig_evolution = plt.figure(figsize=(12, 6))
        self._plot_portfolio_evolution(results, profile, fig_evolution)
        figures['portfolio_evolution'] = fig_evolution

        return figures

    def _plot_goal_progress(self, results: BacktestResult, profile: PersonalProfile, fig: plt.Figure):
        """Plot progress towards investment goals"""
        ax = fig.add_subplot(111)

        goal_progress = self._calculate_goal_progress(results, profile)
        goals = list(goal_progress.keys())
        progress_values = [progress['progress_ratio'] for progress in goal_progress.values()]

        ax.barh(goals, progress_values)
        ax.axvline(x=0, color='k', linestyle='-', alpha=0.3)
        ax.set_title('Progress Towards Investment Goals')
        ax.set_xlabel('Progress Ratio')

        # Annotate each bar with the remaining time to goal
        for i, goal in enumerate(goals):
            time_to_goal = goal_progress[goal]['time_to_goal']
            if time_to_goal != float('inf'):
                ax.text(progress_values[i], i, f'{time_to_goal:.1f} years left',
                        va='center')

    def _plot_risk_alignment(self, results: BacktestResult, profile: PersonalProfile, fig: plt.Figure):
        """Plot risk alignment analysis"""
        realized_vol = results.returns.std() * np.sqrt(252)
        target_vol = {
            1: 0.05, 2: 0.08, 3: 0.12, 4: 0.18, 5: 0.25
        }[profile.risk_tolerance]

        ax = fig.add_subplot(111)

        # Plot realized vs target volatility
        dates = results.returns.index
        rolling_vol = results.returns.rolling(window=63).std() * np.sqrt(252)

        ax.plot(dates, rolling_vol, label='Realized Volatility')
        ax.axhline(y=target_vol, color='r', linestyle='--', label='Target Volatility')

        ax.set_title('Risk Profile Alignment Over Time')
        ax.set_xlabel('Date')
        ax.set_ylabel('Annualized Volatility')
        ax.legend()

    def _plot_income_analysis(self, results: BacktestResult, profile: PersonalProfile, fig: plt.Figure):
        """Plot income generation analysis"""
        ax = fig.add_subplot(111)

        # Calculate monthly income
        monthly_income = results.returns * self.current_portfolio_value
        monthly_income = monthly_income.resample('M').sum()

        # Plot income generation
        ax.plot(monthly_income.index, monthly_income.values, label='Portfolio Income')
        ax.axhline(y=profile.annual_income / 12, color='r', linestyle='--',
                   label='Monthly Income Target')

        ax.set_title('Monthly Income Generation vs Target')
        ax.set_xlabel('Date')
        ax.set_ylabel('Monthly Income')
        ax.legend()

    def _plot_portfolio_evolution(self, results: BacktestResult, profile: PersonalProfile, fig: plt.Figure):
        """Plot portfolio evolution against objectives"""
        ax = fig.add_subplot(111)

        portfolio_values = (1 + results.returns).cumprod() * self.current_portfolio_value

        # Plot actual portfolio value
        ax.plot(portfolio_values.index, portfolio_values.values, label='Portfolio Value')

        # Plot goal targets
        for goal in profile.investment_goals:
            if isinstance(goal, dict) and 'target_value' in goal:
                ax.axhline(y=goal['target_value'], linestyle='--',
                           label=f"Goal: {goal['name']}")

        ax.set_title('Portfolio Evolution vs Goals')
        ax.set_xlabel('Date')
        ax.set_ylabel('Portfolio Value')
        ax.legend()

def display_results(report: Dict, results: BacktestResult, profile: PersonalProfile):
    plt.figure(figsize=(15, 10))

    # Portfolio performance
    plt.subplot(221)
    if hasattr(results, 'returns') and isinstance(results.returns, pd.Series):
        cumulative_returns = (1 + results.returns).cumprod()
        plt.plot(cumulative_returns.index, cumulative_returns.values)
        plt.title('Performance Cumulative')
        plt.grid(True)

    # Asset allocation
    plt.subplot(222)
    if hasattr(results, 'positions') and isinstance(results.positions, pd.DataFrame):
        if not results.positions.empty:
            results.positions.plot(kind='area', stacked=True, ax=plt.gca())
            plt.title('Allocation des Actifs')
            plt.grid(True)

    # Goal progress
    plt.subplot(223)
    if 'goal_tracking' in report:
        goal_progress = report['goal_tracking']
        if isinstance(goal_progress, dict):
            # Extract the progress values
            goals = []
            values = []
            for goal, data in goal_progress.items():
                if isinstance(data, dict) and 'progress_ratio' in data:
                    goals.append(goal)
                    values.append(data['progress_ratio'])
                elif isinstance(data, (int, float)):
                    goals.append(goal)
                    values.append(data)

            if goals and values:
                # Draw the bar chart
                y_pos = np.arange(len(goals))
                plt.bar(y_pos, values)
                plt.xticks(y_pos, goals, rotation=45, ha='right')
                plt.title('Progression vers les Objectifs')
                plt.grid(True)

    # Key metrics
    plt.subplot(224)
    if hasattr(results, 'metrics'):
        key_metrics = ['sharpe_ratio', 'sortino_ratio', 'max_drawdown']
        metrics_values = [results.metrics.get(m, 0) for m in key_metrics]
        plt.bar(range(len(key_metrics)), metrics_values)
        plt.xticks(range(len(key_metrics)), key_metrics, rotation=45)
        plt.title('Métriques Clés')
        plt.grid(True)

    plt.tight_layout()

    # Print summary statistics
    print("\n=== Profil Investisseur ===")
    if isinstance(report.get('profile_summary'), pd.DataFrame):
        print(report['profile_summary'])

    print("\n=== Statistiques Clés ===")
    if hasattr(results, 'metrics'):
        for metric, value in results.metrics.items():
            if isinstance(value, (int, float)) and not isinstance(value, bool):
                try:
                    print(f"{metric}: {value:.2%}")
                except (ValueError, TypeError):
                    print(f"{metric}: {value}")

    plt.show()

def _plot_portfolio_performance(results: BacktestResult):
    """Create the performance charts"""
    plt.figure(figsize=(12, 6))

    # Cumulative performance
    cumulative_returns = (1 + results.returns).cumprod()
    plt.plot(cumulative_returns.index, cumulative_returns.values)
    plt.title('Performance Cumulative')
    plt.xlabel('Date')
    plt.ylabel('Valeur du Portefeuille')
    plt.grid(True)
    plt.show()

    # Drawdowns
    plt.figure(figsize=(12, 6))
    plt.plot(results.drawdowns.index, results.drawdowns.values)
    plt.title('Drawdowns')
    plt.xlabel('Date')
    plt.ylabel('Drawdown')
    plt.grid(True)
    plt.show()

def _plot_risk_metrics(results: BacktestResult):
    """Create the risk-metric charts"""
    # Rolling volatility
    if 'rolling_volatility' in results.risk_metrics:
        plt.figure(figsize=(12, 6))
        plt.plot(results.returns.index, results.risk_metrics['rolling_volatility'])
        plt.title('Volatilité Glissante')
        plt.xlabel('Date')
        plt.ylabel('Volatilité')
        plt.grid(True)
        plt.show()

    # Asset allocation
    plt.figure(figsize=(12, 6))
    results.positions.plot(kind='area', stacked=True)
    plt.title('Allocation des Actifs')
    plt.xlabel('Date')
    plt.ylabel('Poids (%)')
    plt.grid(True)
    plt.show()

    def _should_rebalance(self, date: datetime, config: Optional[str] = None) -> bool:
        """Determine if portfolio should be rebalanced at given date

        Args:
            date: Current date to check for rebalancing
            config: Optional rebalancing frequency config

        Returns:
            bool: True if rebalancing is needed, False otherwise
        """
        if self.rebalance_dates is None:
            return False

        # Use the provided configuration, falling back to monthly by default
        frequency = config if config is not None else "monthly"

        # Check whether the date matches the rebalancing frequency
        if frequency == "daily":
            return True
        elif frequency == "weekly":
            return date.weekday() == 0  # Monday
        elif frequency == "monthly":
            return date.day == 1
        elif frequency == "quarterly":
            return date.day == 1 and date.month in [1, 4, 7, 10]

        return date in self.rebalance_dates

    @staticmethod
    def _estimate_recovery_time(returns: pd.Series) -> int:
        """Estimate time to recover from drawdown"""
        cum_returns = (1 + returns).cumprod()
        running_max = cum_returns.expanding().max()
        drawdowns = cum_returns / running_max - 1

        recovery_periods = []
        in_drawdown = False
        start_idx = 0

        for i, dd in enumerate(drawdowns):
            if not in_drawdown and dd < 0:
                in_drawdown = True
                start_idx = i
            elif in_drawdown and dd >= 0:
                in_drawdown = False
                recovery_periods.append(i - start_idx)

        return np.mean(recovery_periods) if recovery_periods else 0

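The scan above counts periods between entering a drawdown and returning to the prior peak. A self-contained trace on synthetic returns (a loss at t=1, fully recovered by t=3, so one recovery of 2 periods):

```python
import numpy as np
import pandas as pd

# Synthetic returns: a -10% loss at t=1, recovered by t=3
returns = pd.Series([0.00, -0.10, 0.05, 0.07, 0.01])
cumulative = (1 + returns).cumprod()
running_max = cumulative.expanding().max()
drawdowns = cumulative / running_max - 1

# Same scan as _estimate_recovery_time: count periods from entry to exit
recovery_periods = []
in_drawdown = False
start_idx = 0
for i, dd in enumerate(drawdowns):
    if not in_drawdown and dd < 0:
        in_drawdown, start_idx = True, i
    elif in_drawdown and dd >= 0:
        in_drawdown = False
        recovery_periods.append(i - start_idx)

mean_recovery = np.mean(recovery_periods) if recovery_periods else 0  # 2.0
```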
    @staticmethod
    def _generate_rebalance_dates(dates: pd.DatetimeIndex, frequency: str) -> List[datetime]:
        """Generate rebalancing dates based on frequency"""
        if frequency == "daily":
            return list(dates)
        elif frequency == "weekly":
            return list(dates[dates.weekday == 0])
        elif frequency == "monthly":
            return list(dates[dates.day == 1])
        elif frequency == "quarterly":
            return list(dates[(dates.month % 3 == 1) & (dates.day == 1)])
        else:
            raise ValueError(f"Unsupported rebalancing frequency: {frequency}")

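The boolean-mask filters above can be checked directly on a calendar year of daily dates: the monthly mask keeps the first day of each month, and the quarterly mask additionally requires January, April, July, or October (months where `month % 3 == 1`):

```python
import pandas as pd

dates = pd.date_range("2024-01-01", "2024-12-31", freq="D")

monthly = list(dates[dates.day == 1])                               # 12 dates
quarterly = list(dates[(dates.month % 3 == 1) & (dates.day == 1)])  # Jan/Apr/Jul/Oct 1st
```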
class ValidationFramework:
    """Framework for comprehensive strategy validation"""

    def __init__(self, backtest_engine: BacktestEngine):
        self.backtest_engine = backtest_engine

    def validate_strategy(self,
                          config: BacktestConfig,
                          data: pd.DataFrame,
                          constraints: OptimizationConstraints) -> Dict[str, Any]:
        """
        Perform comprehensive strategy validation.
        """
        validation_results = {}

        # 1. In-sample backtesting
        insample_results = self.backtest_engine.run_backtest(
            config, data, constraints
        )
        validation_results['insample_results'] = insample_results

        # 2. Walk-forward analysis
        walkforward_results = self.backtest_engine.run_walk_forward_analysis(
            config, data, constraints
        )
        validation_results['walkforward_results'] = walkforward_results

        # 3. Monte Carlo simulation
        monte_carlo_results = self.backtest_engine.run_monte_carlo_simulation(
            config, data, constraints
        )
        validation_results['monte_carlo_results'] = monte_carlo_results

        return validation_results

__all__ = [
    'BacktestEngine',
    'BacktestConfig',
    'BacktestResult',
    'ValidationFramework'
]
src/analysis/enhanced_backtest.py
ADDED
@@ -0,0 +1,228 @@
# src/analysis/enhanced_backtest.py

import pandas as pd
import numpy as np
from typing import Dict, List, Optional
from datetime import datetime, timedelta
from dataclasses import dataclass

from src.models.signals import StrategySignal

@dataclass
class EnhancedBacktestResult:
    """Detailed backtest results"""
    returns: pd.Series
    positions: pd.DataFrame
    trades: pd.DataFrame
    signals: pd.DataFrame
    performance_metrics: Dict[str, float]
    risk_metrics: Dict[str, float]
    alternative_contribution: Dict[str, float]
    signal_quality: Dict[str, float]

class EnhancedBacktestEngine:
    """Advanced backtest engine for the enhanced strategy"""

    def __init__(self,
                 enhanced_strategy,
                 risk_manager,
                 data_fetcher,
                 initial_capital: float = 1_000_000):
        self.strategy = enhanced_strategy
        self.risk_manager = risk_manager
        self.data_fetcher = data_fetcher
        self.initial_capital = initial_capital

    async def run_enhanced_backtest(self,
                                    symbols: List[str],
                                    start_date: datetime,
                                    end_date: datetime,
                                    use_alternative_data: bool = True) -> EnhancedBacktestResult:
        """Run a full backtest with alternative data"""
        try:
            # Fetch the market data
            market_data = await self.data_fetcher.fetch_market_data(symbols, start_date, end_date)

            # Initialize the result structures
            results = self._initialize_results(market_data.index, symbols)
            portfolio_value = self.initial_capital
            current_positions = {symbol: 0 for symbol in symbols}

            # Main backtest loop
            for date in market_data.index:
                try:
                    # 1. Fetch the alternative data for this date
                    if use_alternative_data:
                        alternative_data = await self._get_historical_alternative_data(
                            symbols, date
                        )
                    else:
                        alternative_data = {}

                    # 2. Generate the trading signals
                    signals = await self.strategy.generate_trade_signals(
                        portfolio=current_positions,
                        market_data=market_data.loc[:date],
                        alternative_data=alternative_data
                    )

                    # 3. Execute the trades
                    trades = self._execute_signals(
                        signals,
                        current_positions,
                        portfolio_value,
                        market_data.loc[date]
                    )

                    # 4. Update the portfolio
                    portfolio_value, current_positions = self._update_portfolio(
                        portfolio_value,
                        current_positions,
                        trades,
                        market_data.loc[date]
                    )

                    # 5. Record the results
                    self._record_results(
                        results,
                        date,
                        portfolio_value,
                        current_positions,
                        trades,
                        signals
                    )

                except Exception as e:
                    print(f"Error during backtest at date {date}: {e}")
                    continue

            # Compute the final metrics
            return self._calculate_final_metrics(results, market_data)

        except Exception as e:
            print(f"Error in backtest: {e}")
            raise

    def _initialize_results(self, dates: pd.DatetimeIndex, symbols: List[str]) -> Dict:
        """Initialize the data structures for the results"""
        return {
            'portfolio_value': pd.Series(index=dates, dtype=float),
            'positions': pd.DataFrame(index=dates, columns=symbols, dtype=float),
            'trades': pd.DataFrame(columns=['symbol', 'type', 'size', 'price', 'cost']),
            'signals': pd.DataFrame(columns=['symbol', 'direction', 'confidence', 'size']),
            'alternative_signals': pd.DataFrame(columns=['symbol', 'signal_type', 'value'])
        }

    async def _get_historical_alternative_data(self,
                                               symbols: List[str],
                                               date: datetime) -> Dict:
        """Fetch the historical alternative data"""
        try:
            # Satellite data
            satellite_data = await self.data_fetcher.fetch_historical_satellite_data(
                symbols, date
            )

            # Social media data
            social_data = await self.data_fetcher.fetch_historical_social_data(
                symbols, date
            )

            # Web traffic data
            traffic_data = await self.data_fetcher.fetch_historical_traffic_data(
                symbols, date
            )

            return {
                'satellite': satellite_data,
                'social_media': social_data,
                'web_traffic': traffic_data
            }

        except Exception as e:
            print(f"Error fetching alternative data: {e}")
            return {}

    def _execute_signals(self,
                         signals: List[StrategySignal],
                         current_positions: Dict[str, float],
                         portfolio_value: float,
                         market_data: pd.Series) -> List[Dict]:
        """Execute the trading signals"""
        trades = []

        for signal in signals:
            try:
                if signal.direction == 'buy':
                    size = self._calculate_buy_size(
                        signal, portfolio_value, current_positions
                    )
                    if size > 0:
                        trades.append({
                            'symbol': signal.symbol,
                            'type': 'buy',
                            'size': size,
                            'price': market_data[signal.symbol],
                            'cost': size * market_data[signal.symbol] * 0.001  # 0.1% transaction cost
                        })

                elif signal.direction == 'sell':
                    size = self._calculate_sell_size(
                        signal, current_positions
                    )
                    if size > 0:
                        trades.append({
                            'symbol': signal.symbol,
                            'type': 'sell',
                            'size': size,
                            'price': market_data[signal.symbol],
                            'cost': size * market_data[signal.symbol] * 0.001
                        })

            except Exception as e:
                print(f"Error executing signal {signal.symbol}: {e}")
                continue

        return trades

    def _calculate_final_metrics(self, results: Dict, market_data: pd.DataFrame) -> EnhancedBacktestResult:
        """Compute the final backtest metrics"""
        try:
            # Compute the returns
            portfolio_returns = results['portfolio_value'].pct_change().dropna()

            # Performance metrics
            performance_metrics = {
                'total_return': (results['portfolio_value'].iloc[-1] / self.initial_capital) - 1,
                'annual_return': self._calculate_annual_return(portfolio_returns),
                'sharpe_ratio': self._calculate_sharpe_ratio(portfolio_returns),
                'max_drawdown': self._calculate_max_drawdown(portfolio_returns)
            }

            # Risk metrics
            risk_metrics = self.risk_manager.calculate_risk_metrics(portfolio_returns)

            # Contribution of the alternative data
            alternative_contribution = self._calculate_alternative_contribution(
                results['signals'], results['alternative_signals']
            )

            # Signal quality
            signal_quality = self._evaluate_signal_quality(
                results['signals'], portfolio_returns
            )

            return EnhancedBacktestResult(
                returns=portfolio_returns,
                positions=results['positions'],
                trades=results['trades'],
                signals=results['signals'],
                performance_metrics=performance_metrics,
                risk_metrics=risk_metrics,
                alternative_contribution=alternative_contribution,
                signal_quality=signal_quality
            )

        except Exception as e:
            print(f"Error computing final metrics: {e}")
            raise
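The `total_return`, `annual_return`, and Sharpe-style ratio collected in `performance_metrics` can be sketched on synthetic daily returns (the helper names `_calculate_annual_return` and `_calculate_sharpe_ratio` are the engine's own; the formulas below are one common convention, assumed here with a zero risk-free rate):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
daily = pd.Series(rng.normal(0.0004, 0.01, 252))  # one synthetic year of daily returns

total_return = (1 + daily).prod() - 1
annual_return = (1 + daily).prod() ** (252 / len(daily)) - 1  # equals total_return for 252 days
sharpe = daily.mean() / daily.std() * np.sqrt(252)            # annualized, zero risk-free rate
```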
src/analysis/goal_simulator.py
ADDED
@@ -0,0 +1,219 @@
# src/analysis/goal_simulator.py

import numpy as np
import pandas as pd
from typing import Dict, List, Optional
from dataclasses import dataclass
from datetime import datetime
from ..core.profiling import PersonalProfile

@dataclass
class GoalSimulationResult:
    """Result of simulating a single goal"""
    success_probability: float
    required_monthly_investment: float
    risk_assessment: Dict[str, float]
    recommended_adjustments: List[str]
    timeline: pd.DataFrame
    stress_test_results: Dict[str, float]

@dataclass
class InvestmentGoal:
    """Definition of an investment goal"""
    name: str
    target_amount: float
    timeframe: int  # in years
    priority: int
    current_progress: float
    required_return: float

class GoalSimulator:
    """Financial goal simulator"""

    def __init__(self, market_analyzer, risk_manager):
        self.market_analyzer = market_analyzer
        self.risk_manager = risk_manager
        self.monte_carlo_sims = 1000

    async def simulate_goal_achievement(self,
                                        goal: InvestmentGoal,
                                        current_portfolio: Dict,
                                        personal_profile: PersonalProfile) -> GoalSimulationResult:
        """Simulate the achievement of a financial goal"""
        try:
            # Generate the market scenarios
            scenarios = self._generate_market_scenarios(
                timeframe=goal.timeframe,
                current_portfolio=current_portfolio,
                profile=personal_profile
            )

            # Compute the probability of success
            success_prob = self._calculate_success_probability(
                scenarios=scenarios,
                goal=goal
            )

            # Compute the required monthly investment
            required_investment = self._calculate_required_investment(
                goal=goal,
                scenarios=scenarios,
                success_target=0.8  # 80% probability of success
            )

            # Risk analysis
            risk_assessment = self._assess_goal_risks(
                goal=goal,
                scenarios=scenarios,
                personal_profile=personal_profile
            )

            # Adjustment recommendations
            adjustments = self._generate_recommendations(
                goal=goal,
                success_prob=success_prob,
                risk_assessment=risk_assessment,
                personal_profile=personal_profile
            )

            return GoalSimulationResult(
                success_probability=success_prob,
                required_monthly_investment=required_investment,
                risk_assessment=risk_assessment,
                recommended_adjustments=adjustments,
                timeline=self._generate_timeline(scenarios, goal),
                stress_test_results=self._run_stress_tests(scenarios, goal)
            )

        except Exception as e:
            print(f"Error in goal simulation: {e}")
            raise

    def _generate_market_scenarios(self,
                                   timeframe: int,
                                   current_portfolio: Dict,
                                   profile: PersonalProfile) -> np.ndarray:
        """Generate market scenarios via Monte Carlo"""
        try:
            # Simulation parameters
            n_months = timeframe * 12
            n_sims = self.monte_carlo_sims

            # Generate the monthly returns
            monthly_returns = np.random.normal(
                loc=0.005,   # ~6% annualized
                scale=0.02,  # monthly volatility
                size=(n_sims, n_months)
            )

            # Adjust the returns to the risk profile
            adjusted_returns = self._adjust_returns_for_risk_profile(
                monthly_returns, profile
            )

            # Generate the portfolio trajectories
            initial_value = sum(current_portfolio.values())
            scenarios = np.zeros((n_sims, n_months))

            for i in range(n_sims):
                scenarios[i] = initial_value * np.cumprod(1 + adjusted_returns[i])

            return scenarios

        except Exception as e:
            print(f"Error generating scenarios: {e}")
            raise

    def _calculate_success_probability(self,
                                       scenarios: np.ndarray,
                                       goal: InvestmentGoal) -> float:
        """Compute the probability of reaching the goal"""
        try:
            # Final values of the scenarios
            final_values = scenarios[:, -1]

            # Number of scenarios reaching the target
            success_count = np.sum(final_values >= goal.target_amount)

            return success_count / len(scenarios)

        except Exception as e:
            print(f"Error computing the success probability: {e}")
            return 0.0

def _calculate_required_investment(self,
|
145 |
+
goal: InvestmentGoal,
|
146 |
+
scenarios: np.ndarray,
|
147 |
+
success_target: float) -> float:
|
148 |
+
"""Calcule l'investissement mensuel nécessaire"""
|
149 |
+
try:
|
150 |
+
current_gap = goal.target_amount - goal.current_progress
|
151 |
+
months_remaining = goal.timeframe * 12
|
152 |
+
|
153 |
+
# Calculer le taux de rendement moyen des scénarios
|
154 |
+
avg_return = np.mean(np.diff(scenarios) / scenarios[:, :-1])
|
155 |
+
|
156 |
+
# Formule PMT modifiée pour tenir compte du risque
|
157 |
+
r = (1 + avg_return) ** (1/12) - 1 # Taux mensuel
|
158 |
+
required_monthly = (current_gap * r) / ((1 + r) ** months_remaining - 1)
|
159 |
+
|
160 |
+
# Ajouter une marge de sécurité
|
161 |
+
safety_margin = 1.1 # 10% de marge
|
162 |
+
return required_monthly * safety_margin
|
163 |
+
|
164 |
+
except Exception as e:
|
165 |
+
print(f"Erreur dans le calcul de l'investissement requis: {e}")
|
166 |
+
return 0.0
|
167 |
+
|
168 |
+
def _assess_goal_risks(self,
|
169 |
+
goal: InvestmentGoal,
|
170 |
+
scenarios: np.ndarray,
|
171 |
+
personal_profile: PersonalProfile) -> Dict[str, float]:
|
172 |
+
"""Évalue les risques liés à l'objectif"""
|
173 |
+
try:
|
174 |
+
return {
|
175 |
+
'shortfall_risk': self._calculate_shortfall_risk(scenarios, goal),
|
176 |
+
'time_risk': self._calculate_time_risk(scenarios, goal),
|
177 |
+
'investment_risk': self._calculate_investment_risk(goal, personal_profile),
|
178 |
+
'liquidity_risk': self._calculate_liquidity_risk(goal, personal_profile)
|
179 |
+
}
|
180 |
+
except Exception as e:
|
181 |
+
print(f"Erreur dans l'évaluation des risques: {e}")
|
182 |
+
return {}
|
183 |
+
|
184 |
+
def _generate_recommendations(self,
|
185 |
+
goal: InvestmentGoal,
|
186 |
+
success_prob: float,
|
187 |
+
risk_assessment: Dict[str, float],
|
188 |
+
personal_profile: PersonalProfile) -> List[str]:
|
189 |
+
"""Génère des recommandations d'ajustement"""
|
190 |
+
recommendations = []
|
191 |
+
|
192 |
+
# Analyse du taux de succès
|
193 |
+
if success_prob < 0.6:
|
194 |
+
recommendations.append(
|
195 |
+
"La probabilité d'atteinte de l'objectif est faible. Considérez : "
|
196 |
+
"1. Augmenter l'épargne mensuelle "
|
197 |
+
"2. Allonger l'horizon d'investissement "
|
198 |
+
"3. Ajuster l'objectif à la baisse"
|
199 |
+
)
|
200 |
+
|
201 |
+
# Analyse des risques
|
202 |
+
for risk_type, risk_level in risk_assessment.items():
|
203 |
+
if risk_level > 0.7:
|
204 |
+
recommendations.append(
|
205 |
+
f"Risque {risk_type} élevé. Suggestion : "
|
206 |
+
f"{self._get_risk_mitigation_strategy(risk_type)}"
|
207 |
+
)
|
208 |
+
|
209 |
+
return recommendations
|
210 |
+
|
211 |
+
def _get_risk_mitigation_strategy(self, risk_type: str) -> str:
|
212 |
+
"""Retourne une stratégie de mitigation pour chaque type de risque"""
|
213 |
+
strategies = {
|
214 |
+
'shortfall_risk': "Augmenter l'épargne ou réduire l'objectif",
|
215 |
+
'time_risk': "Prévoir une période tampon supplémentaire",
|
216 |
+
'investment_risk': "Diversifier davantage le portefeuille",
|
217 |
+
'liquidity_risk': "Maintenir une réserve de liquidité suffisante"
|
218 |
+
}
|
219 |
+
return strategies.get(risk_type, "Revoir la stratégie globale")
|
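The scenario-generation and success-probability steps above can be exercised standalone. The sketch below mirrors that logic with NumPy only; the horizon, return/volatility parameters, and target amount are illustrative assumptions, not values taken from the module:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not the module's configuration)
n_sims, n_months = 5000, 120      # 10-year horizon
initial_value = 10_000.0
target_amount = 25_000.0

# i.i.d. normal monthly returns, then cumulative-product trajectories
monthly_returns = rng.normal(loc=0.005, scale=0.02, size=(n_sims, n_months))
paths = initial_value * np.cumprod(1 + monthly_returns, axis=1)

# Share of scenarios whose final value reaches the target
success_prob = float(np.mean(paths[:, -1] >= target_amount))
print(round(success_prob, 3))
```

Vectorizing the cumulative product over the whole `(n_sims, n_months)` array avoids the per-scenario Python loop used in `_generate_market_scenarios` while producing the same trajectories.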
src/analysis/ml_analyzer.py
ADDED
@@ -0,0 +1,994 @@
# src/analysis/ml_analyzer.py

import numpy as np
import pandas as pd
from typing import Dict, List, Optional, Any
from datetime import datetime
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor
from sklearn.mixture import GaussianMixture  # used by the regime classifier below
from sklearn.preprocessing import StandardScaler
import torch
import torch.nn as nn
import logging

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from ..core.types import MLAnalyzerProtocol


class AlternativeDataAnalyzer:
    """Analyze alternative data sources for market insights"""

    def __init__(self):
        self.social_sentiment_model = RandomForestClassifier(n_estimators=100)
        self.web_traffic_model = RandomForestClassifier(n_estimators=50)

    def analyze_social_sentiment(self, social_data: List[Dict]) -> float:
        """Analyze social media sentiment"""
        features = self._extract_social_features(social_data)
        return float(self.social_sentiment_model.predict(features)[0])

    def analyze_web_traffic(self, traffic_data: pd.DataFrame) -> Dict[str, float]:
        """Analyze web traffic patterns"""
        features = self._prepare_traffic_features(traffic_data)
        predictions = self.web_traffic_model.predict(features)
        return {
            'growth_score': float(np.mean(predictions)),
            'trend_strength': float(np.std(predictions))
        }
class MarketImpactPredictor:
    """Predicts market impact of trades using machine learning"""

    def __init__(self):
        self.impact_model = GradientBoostingRegressor(
            n_estimators=100,
            learning_rate=0.1,
            max_depth=3,
            random_state=42
        )
        self.scaler = StandardScaler()
        self.feature_columns = [
            'trade_size_norm',
            'volume_profile',
            'volatility',
            'bid_ask_spread',
            'market_regime',
            'time_of_day'
        ]

    def predict_trade_impact(self,
                             trade_size: float,
                             market_data: pd.DataFrame,
                             market_conditions: Optional[Dict] = None) -> Dict[str, float]:
        """
        Predict market impact of a trade

        Args:
            trade_size: Size of the trade
            market_data: Market data (prices, volumes, etc.)
            market_conditions: Additional market condition indicators

        Returns:
            Dictionary with impact predictions and metrics
        """
        try:
            # Extract features
            features = self._extract_impact_features(
                trade_size,
                market_data,
                market_conditions
            )

            # Scale features
            scaled_features = self.scaler.transform(features)

            # Predict impact
            base_impact = float(self.impact_model.predict(scaled_features)[0])

            # Adjust for market conditions
            adjusted_impact = self._adjust_for_market_conditions(
                base_impact,
                market_conditions
            )

            # Calculate confidence and risk metrics
            confidence = self._calculate_prediction_confidence(features)
            risk_metrics = self._calculate_impact_risk_metrics(
                adjusted_impact,
                market_data
            )

            return {
                'predicted_impact': adjusted_impact,
                'base_impact': base_impact,
                'confidence': confidence,
                'risk_metrics': risk_metrics,
                'recommended_sizing': self._calculate_recommended_sizing(
                    trade_size,
                    adjusted_impact,
                    market_data
                )
            }

        except Exception as e:
            print(f"Error predicting market impact: {e}")
            return {
                'predicted_impact': self._calculate_simple_impact(
                    trade_size,
                    market_data
                ),
                'confidence': 0.0,
                'risk_metrics': {}
            }

    def _extract_impact_features(self,
                                 trade_size: float,
                                 market_data: pd.DataFrame,
                                 market_conditions: Optional[Dict]) -> np.ndarray:
        """Extract features for impact prediction"""
        try:
            features = []

            # Normalize trade size by average daily volume
            avg_volume = market_data['volume'].mean()
            trade_size_norm = trade_size / avg_volume if avg_volume > 0 else 0
            features.append(trade_size_norm)

            # Volume profile
            volume_profile = self._calculate_volume_profile(market_data)
            features.append(volume_profile)

            # Market volatility
            volatility = market_data['close'].pct_change().std()
            features.append(volatility)

            # Bid-ask spread
            spread = self._calculate_bid_ask_spread(market_data)
            features.append(spread)

            # Market regime indicator
            regime_indicator = self._get_regime_indicator(market_conditions)
            features.append(regime_indicator)

            # Time of day effect
            time_feature = self._calculate_time_feature(market_data)
            features.append(time_feature)

            return np.array(features).reshape(1, -1)

        except Exception as e:
            print(f"Error extracting impact features: {e}")
            return np.zeros((1, len(self.feature_columns)))

    def _adjust_for_market_conditions(self,
                                      base_impact: float,
                                      market_conditions: Optional[Dict]) -> float:
        """Adjust impact prediction based on market conditions"""
        try:
            if not market_conditions:
                return base_impact

            # Volatility adjustment
            volatility_factor = market_conditions.get('volatility', 1.0)

            # Liquidity adjustment
            liquidity_factor = 1 / market_conditions.get('liquidity', 1.0)

            # Regime adjustment
            regime_factor = self._get_regime_factor(market_conditions)

            # Combined adjustment
            adjustment = volatility_factor * liquidity_factor * regime_factor

            return base_impact * adjustment

        except Exception as e:
            print(f"Error adjusting for market conditions: {e}")
            return base_impact

    def _calculate_prediction_confidence(self, features: np.ndarray) -> float:
        """Calculate confidence score for impact prediction"""
        try:
            # Base confidence
            confidence = 0.8

            # Adjust based on feature quality
            for feature in features[0]:
                if np.isnan(feature) or np.isinf(feature):
                    confidence *= 0.8

            # Adjust based on feature values
            trade_size_norm = features[0][0]
            if trade_size_norm > 0.1:  # Large trades
                confidence *= 0.9

            return float(confidence)

        except Exception as e:
            print(f"Error calculating prediction confidence: {e}")
            return 0.5

    def _calculate_impact_risk_metrics(self,
                                       impact: float,
                                       market_data: pd.DataFrame) -> Dict[str, float]:
        """Calculate risk metrics for impact prediction"""
        try:
            metrics = {}

            # Calculate VaR of impact
            metrics['impact_var_95'] = impact * 1.645  # Assuming normal distribution

            # Calculate potential slippage
            metrics['max_slippage'] = impact * 2

            # Timing risk
            metrics['timing_risk'] = self._calculate_timing_risk(market_data)

            return metrics

        except Exception as e:
            print(f"Error calculating impact risk metrics: {e}")
            return {}

    def _calculate_simple_impact(self,
                                 trade_size: float,
                                 market_data: pd.DataFrame) -> float:
        """Calculate simple market impact estimate"""
        try:
            avg_volume = market_data['volume'].mean()
            participation_rate = trade_size / avg_volume if avg_volume > 0 else 0

            # Square root model
            return 0.1 * np.sqrt(participation_rate)

        except Exception as e:
            print(f"Error calculating simple impact: {e}")
            return 0.01  # Default 1% impact

    def _calculate_recommended_sizing(self,
                                      trade_size: float,
                                      predicted_impact: float,
                                      market_data: pd.DataFrame) -> Dict[str, float]:
        """Calculate recommended trade sizing"""
        try:
            avg_volume = market_data['volume'].mean()

            # Calculate different size recommendations
            aggressive = min(trade_size, avg_volume * 0.1)
            normal = min(trade_size, avg_volume * 0.05)
            passive = min(trade_size, avg_volume * 0.02)

            return {
                'aggressive_size': float(aggressive),
                'normal_size': float(normal),
                'passive_size': float(passive),
                'optimal_size': float(normal)  # Could be adjusted based on urgency
            }

        except Exception as e:
            print(f"Error calculating recommended sizing: {e}")
            return {'optimal_size': trade_size * 0.05}

    @staticmethod
    def _calculate_volume_profile(market_data: pd.DataFrame) -> float:
        """Calculate volume profile metric"""
        try:
            recent_volume = market_data['volume'].tail(20).mean()
            hist_volume = market_data['volume'].mean()
            return recent_volume / hist_volume if hist_volume > 0 else 1.0
        except Exception:
            return 1.0

    @staticmethod
    def _calculate_bid_ask_spread(market_data: pd.DataFrame) -> float:
        """Calculate average bid-ask spread"""
        try:
            if 'bid' in market_data and 'ask' in market_data:
                spread = (market_data['ask'] - market_data['bid']) / market_data['ask']
                return float(spread.mean())
            return 0.001  # Default 10 bp spread
        except Exception:
            return 0.001

    @staticmethod
    def _get_regime_indicator(market_conditions: Optional[Dict]) -> float:
        """Get numerical indicator for market regime"""
        try:
            if not market_conditions:
                return 0.0
            regime = market_conditions.get('regime', 'normal')
            regime_map = {
                'crisis': -1.0,
                'stress': -0.5,
                'normal': 0.0,
                'calm': 0.5,
                'highly_liquid': 1.0
            }
            return regime_map.get(regime, 0.0)
        except Exception:
            return 0.0

    @staticmethod
    def _calculate_time_feature(market_data: pd.DataFrame) -> float:
        """Calculate time of day feature"""
        try:
            if isinstance(market_data.index, pd.DatetimeIndex):
                hour = market_data.index[-1].hour
                # Normalize to 0-1 range, with peak at market open/close
                return np.sin(np.pi * hour / 12)
            return 0.0
        except Exception:
            return 0.0

    @staticmethod
    def _get_regime_factor(market_conditions: Optional[Dict]) -> float:
        """Get adjustment factor for market regime"""
        try:
            if not market_conditions:
                return 1.0
            regime = market_conditions.get('regime', 'normal')
            regime_factors = {
                'crisis': 2.0,
                'stress': 1.5,
                'normal': 1.0,
                'calm': 0.8,
                'highly_liquid': 0.5
            }
            return regime_factors.get(regime, 1.0)
        except Exception:
            return 1.0

    @staticmethod
    def _calculate_timing_risk(market_data: pd.DataFrame) -> float:
        """Calculate timing risk metric"""
        try:
            returns = market_data['close'].pct_change()
            volatility = returns.std()
            return float(volatility * np.sqrt(252))
        except Exception:
            return 0.2  # Default 20% annualized vol
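The square-root fallback used by `_calculate_simple_impact` is easy to check in isolation. A minimal sketch, with a hypothetical constant-volume frame (the 0.1 coefficient matches the fallback; the volumes are made up):

```python
import numpy as np
import pandas as pd

def simple_impact(trade_size: float, market_data: pd.DataFrame) -> float:
    """Square-root impact model: impact = 0.1 * sqrt(participation rate)."""
    avg_volume = market_data['volume'].mean()
    participation = trade_size / avg_volume if avg_volume > 0 else 0.0
    return 0.1 * np.sqrt(participation)

# Hypothetical market: 1M shares of average daily volume
market = pd.DataFrame({'volume': [1_000_000] * 20})

print(simple_impact(10_000, market))  # 1% participation -> 0.1 * 0.1 = 0.01
```

Note the concave shape: quadrupling the trade size (to 4% participation) only doubles the estimated impact, which is the defining property of square-root impact models.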
class SentimentAnalyzer:
    """Advanced sentiment analyzer for market and news data"""

    def __init__(self):
        self.sentiment_model = None
        self.text_vectorizer = None
        self.market_indicators = [
            'price_momentum',
            'volume_change',
            'volatility',
            'rsi',
            'macd',
            'social_sentiment'
        ]

    def analyze_sentiment(self,
                          market_data: pd.DataFrame,
                          news_data: Optional[List[str]] = None,
                          social_data: Optional[Dict[str, Any]] = None) -> Dict[str, float]:
        """
        Analyze overall market sentiment using multiple data sources.

        Args:
            market_data: Market price and volume data
            news_data: List of news articles/headlines
            social_data: Social media sentiment data

        Returns:
            Dictionary of sentiment scores
        """
        try:
            # Market technical sentiment
            market_sentiment = self._analyze_market_sentiment(market_data)

            # News sentiment
            news_sentiment = self._analyze_news_sentiment(news_data) if news_data else 0.5

            # Social media sentiment
            social_sentiment = self._analyze_social_sentiment(social_data) if social_data else 0.5

            # Combine all sentiment signals
            composite_sentiment = self._calculate_composite_sentiment(
                market_sentiment,
                news_sentiment,
                social_sentiment
            )

            return {
                'composite_sentiment': composite_sentiment,
                'market_sentiment': market_sentiment,
                'news_sentiment': news_sentiment,
                'social_sentiment': social_sentiment,
                'sentiment_indicators': self._calculate_sentiment_indicators(market_data),
                'confidence_score': self._calculate_confidence_score(market_data)
            }

        except Exception as e:
            print(f"Error in sentiment analysis: {e}")
            return {
                'composite_sentiment': 0.5,
                'market_sentiment': 0.5,
                'news_sentiment': 0.5,
                'social_sentiment': 0.5,
                'sentiment_indicators': {},
                'confidence_score': 0.0
            }

    def _analyze_market_sentiment(self, market_data: pd.DataFrame) -> float:
        """Analyze sentiment from market data"""
        try:
            if market_data.empty:
                return 0.5

            sentiment_scores = []

            # Price momentum
            returns = market_data['close'].pct_change()
            momentum = returns.rolling(window=20).mean().iloc[-1]
            sentiment_scores.append(self._normalize_score(momentum, -0.02, 0.02))

            # Volume trend
            volume_change = market_data['volume'].pct_change()
            vol_trend = volume_change.rolling(window=20).mean().iloc[-1]
            sentiment_scores.append(self._normalize_score(vol_trend, -0.1, 0.1))

            # RSI
            rsi = self._calculate_rsi(market_data)
            sentiment_scores.append(self._normalize_score(rsi, 30, 70))

            # Weighted average
            weights = [0.5, 0.3, 0.2]
            return sum(score * weight for score, weight
                       in zip(sentiment_scores, weights))

        except Exception as e:
            print(f"Error in market sentiment analysis: {e}")
            return 0.5

    def _analyze_news_sentiment(self, news_data: List[str]) -> float:
        """Analyze sentiment from news articles"""
        try:
            if not news_data:
                return 0.5

            # Simple keyword-based sentiment (placeholder)
            positive_words = {'growth', 'profit', 'success', 'gain', 'rally'}
            negative_words = {'loss', 'risk', 'decline', 'crash', 'fear'}

            total_score = 0
            for text in news_data:
                words = set(text.lower().split())
                positive_count = len(words.intersection(positive_words))
                negative_count = len(words.intersection(negative_words))

                if positive_count + negative_count > 0:
                    score = positive_count / (positive_count + negative_count)
                else:
                    score = 0.5

                total_score += score

            return total_score / len(news_data)

        except Exception as e:
            print(f"Error in news sentiment analysis: {e}")
            return 0.5

    def _analyze_social_sentiment(self, social_data: Dict[str, Any]) -> float:
        """Analyze sentiment from social media data"""
        try:
            if not social_data:
                return 0.5

            sentiment_scores = []

            # Process different social media sources
            for source, data in social_data.items():
                if source == 'twitter':
                    score = self._analyze_twitter_sentiment(data)
                elif source == 'reddit':
                    score = self._analyze_reddit_sentiment(data)
                else:
                    score = 0.5

                sentiment_scores.append(score)

            return np.mean(sentiment_scores) if sentiment_scores else 0.5

        except Exception as e:
            print(f"Error in social sentiment analysis: {e}")
            return 0.5

    def _calculate_composite_sentiment(self,
                                       market_sentiment: float,
                                       news_sentiment: float,
                                       social_sentiment: float) -> float:
        """Calculate weighted composite sentiment"""
        weights = {
            'market': 0.5,
            'news': 0.3,
            'social': 0.2
        }

        composite = (
            weights['market'] * market_sentiment +
            weights['news'] * news_sentiment +
            weights['social'] * social_sentiment
        )

        return min(max(composite, 0), 1)  # Ensure between 0 and 1

    def _calculate_sentiment_indicators(self, market_data: pd.DataFrame) -> Dict[str, float]:
        """Calculate various sentiment indicators"""
        try:
            indicators = {}

            for indicator in self.market_indicators:
                if indicator == 'price_momentum':
                    value = self._calculate_momentum(market_data)
                elif indicator == 'volume_change':
                    value = self._calculate_volume_change(market_data)
                elif indicator == 'volatility':
                    value = self._calculate_volatility(market_data)
                else:
                    value = 0.0

                indicators[indicator] = value

            return indicators

        except Exception as e:
            print(f"Error calculating sentiment indicators: {e}")
            return {}

    @staticmethod
    def _normalize_score(value: float, min_val: float, max_val: float) -> float:
        """Normalize a value to a 0-1 range"""
        try:
            return min(max((value - min_val) / (max_val - min_val), 0), 1)
        except Exception:
            return 0.5

    @staticmethod
    def _calculate_rsi(data: pd.DataFrame, periods: int = 14) -> float:
        """Calculate RSI indicator"""
        try:
            delta = data['close'].diff()
            gain = (delta.where(delta > 0, 0)).rolling(window=periods).mean()
            loss = (-delta.where(delta < 0, 0)).rolling(window=periods).mean()
            rs = gain / loss
            return float(100 - (100 / (1 + rs.iloc[-1])))
        except Exception:
            return 50.0

    def _calculate_confidence_score(self, market_data: pd.DataFrame) -> float:
        """Calculate confidence score for sentiment analysis"""
        try:
            # Volatility-based confidence
            volatility = self._calculate_volatility(market_data)
            volume = self._calculate_volume_change(market_data)

            # Lower confidence in high volatility periods
            confidence = 1 - min(volatility, 0.5)

            # Higher confidence with higher volume
            volume_factor = min(max(volume, 0), 1)
            confidence *= (0.7 + 0.3 * volume_factor)

            return float(confidence)

        except Exception as e:
            print(f"Error calculating confidence score: {e}")
            return 0.0

    @staticmethod
    def _calculate_momentum(data: pd.DataFrame, window: int = 20) -> float:
        """Calculate price momentum"""
        try:
            returns = data['close'].pct_change()
            return float(returns.rolling(window=window).mean().iloc[-1])
        except Exception:
            return 0.0

    @staticmethod
    def _calculate_volume_change(data: pd.DataFrame, window: int = 20) -> float:
        """Calculate volume trend"""
        try:
            volume_change = data['volume'].pct_change()
            return float(volume_change.rolling(window=window).mean().iloc[-1])
        except Exception:
            return 0.0

    @staticmethod
    def _calculate_volatility(data: pd.DataFrame, window: int = 20) -> float:
        """Calculate price volatility"""
        try:
            returns = data['close'].pct_change()
            return float(returns.rolling(window=window).std().iloc[-1])
        except Exception:
            return 0.0
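The normalization and 50/30/20 blending used by `SentimentAnalyzer` reduce to two small pure functions, which makes the scoring easy to sanity-check outside the class. A standalone sketch of the same arithmetic:

```python
def normalize_score(value: float, min_val: float, max_val: float) -> float:
    """Clamp a raw indicator into [0, 1], as in SentimentAnalyzer._normalize_score."""
    return min(max((value - min_val) / (max_val - min_val), 0.0), 1.0)

def composite_sentiment(market: float, news: float, social: float) -> float:
    """50/30/20 weighted blend of the three sentiment sources, clamped to [0, 1]."""
    composite = 0.5 * market + 0.3 * news + 0.2 * social
    return min(max(composite, 0.0), 1.0)

print(normalize_score(50, 30, 70))          # neutral RSI of 50 -> 0.5
print(composite_sentiment(0.6, 0.5, 0.5))   # -> 0.55
```

Because each input is already in [0, 1] and the weights sum to 1, the final clamp only matters if a caller passes an out-of-range score.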
class MarketRegimeClassifier:
|
612 |
+
"""Classifier for market regimes using machine learning"""
|
613 |
+
|
614 |
+
def __init__(self):
|
615 |
+
self.model = GaussianMixture(n_components=3, random_state=42)
|
616 |
+
self.scaler = StandardScaler()
|
+        self.regimes = ['bear', 'neutral', 'bull']
+
+    def extract_features(self, market_data: pd.DataFrame) -> np.ndarray:
+        """Extract relevant features for regime classification"""
+        features = []
+
+        # Price-based features
+        returns = market_data['close'].pct_change()
+        features.extend([
+            returns.mean(),                              # Mean return
+            returns.std(),                               # Volatility
+            returns.skew(),                              # Asymmetry
+            returns.kurt(),                              # Kurtosis
+            self._calculate_momentum(market_data),       # Momentum
+            self._calculate_rsi(market_data),            # RSI
+            self._calculate_volatility_regime(returns)   # Volatility regime
+        ])
+
+        return np.array(features).reshape(1, -1)
+
+    def predict_regime(self, market_data: pd.DataFrame) -> str:
+        """Predict current market regime"""
+        features = self.extract_features(market_data)
+        scaled_features = self.scaler.transform(features)
+        regime_idx = self.model.predict(scaled_features)[0]
+        return self.regimes[regime_idx]
+
+    def _calculate_momentum(self, data: pd.DataFrame, window: int = 63) -> float:
+        """Calculate price momentum"""
+        returns = data['close'].pct_change()
+        return returns.rolling(window=window).mean().iloc[-1]
+
+    def _calculate_rsi(self, data: pd.DataFrame, periods: int = 14) -> float:
+        """Calculate RSI indicator"""
+        delta = data['close'].pct_change()
+        gain = (delta.where(delta > 0, 0)).rolling(window=periods).mean()
+        loss = (-delta.where(delta < 0, 0)).rolling(window=periods).mean()
+        rs = gain / loss
+        return 100 - (100 / (1 + rs.iloc[-1]))
+
+    def _calculate_volatility_regime(self, returns: pd.Series, window: int = 21) -> float:
+        """Calculate volatility regime indicator"""
+        vol = returns.rolling(window=window).std() * np.sqrt(252)
+        return vol.iloc[-1]
+
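The RSI formula used in `_calculate_rsi` can be sanity-checked in isolation. A minimal sketch, where `rsi` is a standalone copy of the method's body and the price series is synthetic:

```python
import numpy as np
import pandas as pd

def rsi(close: pd.Series, periods: int = 14) -> float:
    """Standalone copy of the RSI computation from MarketRegimeClassifier."""
    delta = close.pct_change()
    gain = delta.where(delta > 0, 0).rolling(window=periods).mean()
    loss = (-delta.where(delta < 0, 0)).rolling(window=periods).mean()
    rs = gain / loss
    return float(100 - (100 / (1 + rs.iloc[-1])))

# A strictly rising series has no losing days, so the average loss is zero,
# rs goes to infinity, and the RSI saturates at 100.
rising = pd.Series(np.linspace(100, 120, 30))
value = rsi(rising)  # saturates at 100.0
```

This also illustrates why production code should guard the `gain / loss` division: a window with no losses yields an infinite `rs`, which the formula maps cleanly to 100 but may emit a division warning.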
+class MLEnhancedAnalyzer:
+    """Enhanced market analyzer with machine learning capabilities"""
+
+    def __init__(self):
+        self.logger = self._setup_logger()
+        self.regime_classifier = MarketRegimeClassifier()
+        self.sentiment_analyzer = self._initialize_sentiment_model()
+
+    def _setup_logger(self) -> logging.Logger:
+        """Configure the logging system"""
+        logger = logging.getLogger('Deepvest')
+        logger.setLevel(logging.INFO)
+        handler = logging.StreamHandler()
+        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+        handler.setFormatter(formatter)
+        logger.addHandler(handler)
+        return logger
+
+    def _initialize_sentiment_model(self):
+        # Note: the mixture model must be fitted before predict_proba is called.
+        return GaussianMixture(n_components=2, random_state=42)
+
+    def predict_market_regime(self, market_data: pd.DataFrame) -> str:
+        """Predict market regime"""
+        return self.regime_classifier.predict_regime(market_data)
+
+    def analyze_market_sentiment(self,
+                                 market_data: pd.DataFrame,
+                                 news_data: Optional[List[str]] = None) -> float:
+        """Analyze market sentiment"""
+        market_features = self._extract_market_features(market_data)
+        sentiment_score = self.sentiment_analyzer.predict_proba(market_features)
+        if news_data:
+            news_sentiment = self._analyze_news_sentiment(news_data)
+            sentiment_score = 0.7 * sentiment_score + 0.3 * news_sentiment
+        return float(sentiment_score.mean())
+
+    def _extract_market_features(self, market_data: pd.DataFrame) -> np.ndarray:
+        """Extract features for sentiment analysis"""
+        if market_data.empty:
+            return np.array([[0]])
+
+        features = []
+        returns = market_data['close'].pct_change().dropna()
+
+        # Basic features
+        features.extend([
+            returns.mean(),
+            returns.std(),
+            returns.skew(),
+            returns.kurt()
+        ])
+
+        # Technical features
+        features.extend([
+            self._calculate_macd(market_data),
+            self._calculate_rsi(market_data),
+            self._calculate_bollinger_bands(market_data)
+        ])
+
+        return np.array(features).reshape(1, -1)
+
+    @staticmethod
+    def _calculate_macd(data: pd.DataFrame) -> float:
+        """Calculate MACD indicator"""
+        if 'close' not in data.columns or len(data) < 26:
+            return 0.0
+        exp1 = data['close'].ewm(span=12, adjust=False).mean()
+        exp2 = data['close'].ewm(span=26, adjust=False).mean()
+        macd = exp1 - exp2
+        return float(macd.iloc[-1])
+
+    @staticmethod
+    def _calculate_rsi(data: pd.DataFrame, periods: int = 14) -> float:
+        """Calculate RSI indicator"""
+        if 'close' not in data.columns or len(data) < periods + 1:
+            return 50.0
+        delta = data['close'].pct_change()
+        gain = (delta.where(delta > 0, 0)).rolling(window=periods).mean()
+        loss = (-delta.where(delta < 0, 0)).rolling(window=periods).mean()
+        rs = gain / loss
+        return float(100 - (100 / (1 + rs.iloc[-1])))
+
+    @staticmethod
+    def _calculate_bollinger_bands(data: pd.DataFrame) -> float:
+        """Calculate Bollinger Bands position"""
+        if 'close' not in data.columns or len(data) < 20:
+            return 0.0
+        ma = data['close'].rolling(window=20).mean()
+        std = data['close'].rolling(window=20).std()
+        upper = ma + (std * 2)
+        lower = ma - (std * 2)
+        position = (data['close'] - lower) / (upper - lower)
+        return float(position.iloc[-1])
+
+    def _analyze_news_sentiment(self, news_data: List[str]) -> float:
+        """Simple sentiment analysis of news data"""
+        # Placeholder - would normally use an NLP model
+        return 0.5
+
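The Bollinger position computed in `_calculate_bollinger_bands` maps the last close into the band between the rolling mean ±2σ (0 at the lower band, 1 at the upper). A small sketch with a standalone copy of the formula on illustrative synthetic data:

```python
import pandas as pd

def bollinger_position(close: pd.Series, window: int = 20) -> float:
    """Position of the last close within the +/- 2 std Bollinger bands."""
    ma = close.rolling(window=window).mean()
    std = close.rolling(window=window).std()
    upper = ma + 2 * std
    lower = ma - 2 * std
    return float((close.iloc[-1] - lower.iloc[-1]) / (upper.iloc[-1] - lower.iloc[-1]))

# A price oscillating around 100 should land somewhere strictly inside the bands.
prices = pd.Series([100, 102, 98, 101, 99] * 5)
pos = bollinger_position(prices)
```

Values outside [0, 1] are possible for extreme moves (the close can pierce a band); callers that treat the result as a probability should clip it.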
+class MLEnhancedAnalyzer(MLAnalyzerProtocol):
+    """ML-enhanced market analysis implementation"""
+
+    def __init__(self):
+        # Models for the different analysis tasks
+        self.regime_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
+        self.risk_predictor = GradientBoostingRegressor(n_estimators=100, random_state=42)
+        self.scaler = StandardScaler()
+
+        # Sentiment model
+        self.sentiment_tokenizer = AutoTokenizer.from_pretrained("nickmuchi/finbert-tone-finetuned-finance-text-classification")
+        self.sentiment_model = AutoModelForSequenceClassification.from_pretrained("nickmuchi/finbert-tone-finetuned-finance-text-classification")
+
+    async def analyze_market_conditions(self,
+                                        market_data: pd.DataFrame,
+                                        news_data: Optional[List[str]] = None,
+                                        alternative_data: Optional[Dict] = None) -> Dict[str, Any]:
+        """Complete ML-based analysis of market conditions"""
+        try:
+            # Prepare the features
+            features = self._prepare_features(market_data)
+            scaled_features = self.scaler.transform(features)
+
+            # Market regime analysis
+            regime = self._predict_market_regime(scaled_features)
+
+            # Risk analysis
+            risk_metrics = self._analyze_risk_factors(market_data)
+
+            # Technical analysis
+            technical_metrics = self._analyze_technical_factors(market_data)
+
+            # Sentiment analysis, if news data is available
+            sentiment = {}
+            if news_data:
+                sentiment = await self._analyze_sentiment(news_data)
+
+            # Alternative data analysis, if available
+            alt_insights = {}
+            if alternative_data:
+                alt_insights = self._analyze_alternative_data(alternative_data)
+
+            return {
+                'regime': regime,
+                'risk_metrics': risk_metrics,
+                'technical_metrics': technical_metrics,
+                'sentiment': sentiment,
+                'alternative_insights': alt_insights,
+                'timestamp': datetime.now()
+            }
+
+        except Exception as e:
+            print(f"Error in ML market analysis: {str(e)}")
+            return self._get_default_analysis()
+
+    def _prepare_features(self, market_data: pd.DataFrame) -> np.ndarray:
+        """Prepare the features for ML analysis"""
+        try:
+            # Compute returns if necessary (use the close column, not the whole frame)
+            if 'returns' in market_data.columns:
+                returns = market_data['returns'].dropna()
+            elif 'close' in market_data.columns:
+                returns = market_data['close'].pct_change().dropna()
+            else:
+                returns = market_data.iloc[:, 0].pct_change().dropna()
+
+            features = []
+
+            # Return-based features
+            features.extend([
+                returns.mean(),
+                returns.std(),
+                returns.skew(),
+                returns.kurtosis(),
+                (returns < 0).mean()
+            ])
+
+            # Volume-based features, if available
+            if 'volume' in market_data.columns:
+                volume = market_data['volume']
+                features.extend([
+                    volume.mean(),
+                    volume.std() / volume.mean(),
+                    (volume.pct_change() > 0).mean()
+                ])
+
+            # Technical features, if available
+            if 'close' in market_data.columns:
+                close_prices = market_data['close']
+                features.extend(self._calculate_technical_features(close_prices))
+
+            return np.array(features).reshape(1, -1)
+
+        except Exception as e:
+            print(f"Error preparing features: {str(e)}")
+            return np.zeros((1, 10))  # Default features
+
+    def _predict_market_regime(self, features: np.ndarray) -> str:
+        """Predict the current market regime"""
+        try:
+            prediction = self.regime_classifier.predict(features)[0]
+            regimes = ['BULL_MARKET', 'BEAR_MARKET', 'HIGH_VOLATILITY', 'LOW_VOLATILITY']
+            return regimes[prediction] if isinstance(prediction, (int, np.integer)) else 'LOW_VOLATILITY'
+        except Exception as e:
+            print(f"Error predicting regime: {str(e)}")
+            return 'LOW_VOLATILITY'
+
+    def _analyze_risk_factors(self, market_data: pd.DataFrame) -> Dict[str, float]:
+        """Analyze risk factors"""
+        try:
+            returns = market_data['close'].pct_change().dropna()
+
+            risk_metrics = {
+                'volatility': float(returns.std() * np.sqrt(252)),
+                'tail_risk': self._calculate_tail_risk(returns),
+                'var_95': float(np.percentile(returns, 5)),
+                'skewness': float(returns.skew()),
+                'kurtosis': float(returns.kurtosis())
+            }
+
+            return risk_metrics
+
+        except Exception as e:
+            print(f"Error analyzing risk factors: {str(e)}")
+            return {
+                'volatility': 0.15,
+                'tail_risk': 0.05,
+                'var_95': -0.02,
+                'skewness': 0.0,
+                'kurtosis': 3.0
+            }
+
+    async def _analyze_sentiment(self, news_data: List[str]) -> Dict[str, float]:
+        """Analyze the sentiment of news items"""
+        try:
+            sentiments = []
+            for text in news_data:
+                # Tokenization and preparation
+                inputs = self.sentiment_tokenizer(text, return_tensors="pt", padding=True, truncation=True)
+
+                # Run the model
+                with torch.no_grad():
+                    outputs = self.sentiment_model(**inputs)
+                    logits = outputs.logits
+                    probs = torch.softmax(logits, dim=1)
+                    sentiments.append(probs.tolist()[0])
+
+            # Aggregate the sentiments
+            avg_sentiment = np.mean(sentiments, axis=0)
+            return {
+                'positive': float(avg_sentiment[2]),  # Index 2: positive sentiment
+                'neutral': float(avg_sentiment[1]),   # Index 1: neutral sentiment
+                'negative': float(avg_sentiment[0])   # Index 0: negative sentiment
+            }
+
+        except Exception as e:
+            print(f"Error analyzing sentiment: {str(e)}")
+            return {'positive': 0.33, 'neutral': 0.34, 'negative': 0.33}
+
+    def _analyze_technical_factors(self, market_data: pd.DataFrame) -> Dict[str, float]:
+        """Analyze technical factors"""
+        try:
+            close_prices = market_data['close']
+
+            return {
+                'rsi': self._calculate_rsi(close_prices),
+                'macd': self._calculate_macd(close_prices),
+                'momentum': self._calculate_momentum(close_prices),
+                'trend_strength': self._calculate_trend_strength(close_prices)
+            }
+
+        except Exception as e:
+            print(f"Error analyzing technical factors: {str(e)}")
+            return {
+                'rsi': 50.0,
+                'macd': 0.0,
+                'momentum': 0.0,
+                'trend_strength': 0.0
+            }
+
+    def _analyze_alternative_data(self, alternative_data: Dict) -> Dict[str, Any]:
+        """Analyze alternative data"""
+        insights = {}
+
+        try:
+            # Social media data, if available
+            if 'social_media' in alternative_data:
+                insights['social'] = self._analyze_social_data(alternative_data['social_media'])
+
+            # Web traffic data, if available
+            if 'web_traffic' in alternative_data:
+                insights['traffic'] = self._analyze_web_traffic(alternative_data['web_traffic'])
+
+            # Satellite data, if available
+            if 'satellite' in alternative_data:
+                insights['satellite'] = self._analyze_satellite_data(alternative_data['satellite'])
+
+        except Exception as e:
+            print(f"Error analyzing alternative data: {str(e)}")
+
+        return insights
+
+    def _calculate_technical_features(self, close_prices: pd.Series) -> List[float]:
+        """Technical features derived from close prices"""
+        return [
+            self._calculate_rsi(close_prices),
+            self._calculate_macd(close_prices)
+        ]
+
+    @staticmethod
+    def _calculate_tail_risk(returns: pd.Series, quantile: float = 0.05) -> float:
+        """Tail risk as expected shortfall: mean of the worst 5% of returns"""
+        try:
+            threshold = np.percentile(returns, quantile * 100)
+            return float(abs(returns[returns <= threshold].mean()))
+        except Exception:
+            return 0.05
+
+    @staticmethod
+    def _calculate_rsi(prices: pd.Series, periods: int = 14) -> float:
+        """Compute the RSI"""
+        try:
+            delta = prices.diff()
+            gain = (delta.where(delta > 0, 0)).rolling(window=periods).mean()
+            loss = (-delta.where(delta < 0, 0)).rolling(window=periods).mean()
+            rs = gain / loss
+            return float(100 - (100 / (1 + rs.iloc[-1])))
+        except Exception:
+            return 50.0
+
+    @staticmethod
+    def _calculate_macd(prices: pd.Series) -> float:
+        """Compute the MACD"""
+        try:
+            exp1 = prices.ewm(span=12, adjust=False).mean()
+            exp2 = prices.ewm(span=26, adjust=False).mean()
+            macd = exp1 - exp2
+            signal = macd.ewm(span=9, adjust=False).mean()
+            return float(macd.iloc[-1] - signal.iloc[-1])
+        except Exception:
+            return 0.0
+
+    @staticmethod
+    def _calculate_momentum(prices: pd.Series, window: int = 63) -> float:
+        """Compute price momentum (rolling mean of returns)"""
+        try:
+            return float(prices.pct_change().rolling(window=window).mean().iloc[-1])
+        except Exception:
+            return 0.0
+
+    @staticmethod
+    def _calculate_trend_strength(prices: pd.Series, window: int = 63) -> float:
+        """Simple trend-strength proxy: mean return scaled by its volatility"""
+        try:
+            returns = prices.pct_change().dropna().iloc[-window:]
+            return float(abs(returns.mean()) / returns.std())
+        except Exception:
+            return 0.0
+
+    @staticmethod
+    def _get_default_analysis() -> Dict[str, Any]:
+        """Return a default analysis in case of error"""
+        return {
+            'regime': 'LOW_VOLATILITY',
+            'risk_metrics': {
+                'volatility': 0.15,
+                'tail_risk': 0.05,
+                'var_95': -0.02
+            },
+            'technical_metrics': {
+                'rsi': 50.0,
+                'macd': 0.0,
+                'momentum': 0.0
+            },
+            'sentiment': {
+                'positive': 0.33,
+                'neutral': 0.34,
+                'negative': 0.33
+            },
+            'alternative_insights': {},
+            'timestamp': datetime.now()
+        }
+
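The risk block in `_analyze_risk_factors` reduces to a few vectorized pandas/NumPy calls. A self-contained sketch of the annualized volatility and 1-day 95% VaR on synthetic daily returns (the distribution parameters are illustrative):

```python
import numpy as np
import pandas as pd

# Synthetic daily returns: slight positive drift, ~1% daily volatility
rng = np.random.default_rng(42)
returns = pd.Series(rng.normal(0.0005, 0.01, 1000))

# Annualized volatility: daily std scaled by sqrt(252 trading days)
volatility = float(returns.std() * np.sqrt(252))

# 1-day 95% VaR: the 5th percentile of the return distribution (a negative number)
var_95 = float(np.percentile(returns, 5))
```

With a ~1% daily standard deviation this lands near 16% annualized volatility and a VaR around -1.6%, which is the order of magnitude the default fallback values in the class assume.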
src/analysis/scenario_manager.py
ADDED
@@ -0,0 +1,205 @@
+# src/analysis/scenario_manager.py
+
+import numpy as np
+import pandas as pd
+from typing import Dict, List, Optional
+from dataclasses import dataclass
+from datetime import datetime, timedelta
+from enum import Enum
+
+class MarketScenario(Enum):
+    """Market scenario types"""
+    BULL_MARKET = "bull_market"
+    BEAR_MARKET = "bear_market"
+    HIGH_VOLATILITY = "high_volatility"
+    LOW_VOLATILITY = "low_volatility"
+    CRISIS = "crisis"
+    RECOVERY = "recovery"
+    SIDEWAYS = "sideways"
+    HIGH_INFLATION = "high_inflation"
+    STAGFLATION = "stagflation"
+
+@dataclass
+class ScenarioConfig:
+    """Configuration of a scenario"""
+    scenario_type: MarketScenario
+    duration: int  # in days
+    severity: float  # 0 to 1
+    parameters: Dict[str, float]
+    macro_conditions: Dict[str, float]
+
+class ScenarioManager:
+    """Market scenario manager"""
+
+    def __init__(self, data_fetcher, market_analyzer, risk_manager):
+        self.data_fetcher = data_fetcher
+        self.market_analyzer = market_analyzer
+        self.risk_manager = risk_manager
+
+    def generate_scenario(self, base_data: pd.DataFrame, config: ScenarioConfig) -> pd.DataFrame:
+        """Generate a specific market scenario"""
+        try:
+            # Clone the base data
+            scenario_data = base_data.copy()
+
+            # Apply the scenario-specific transformations
+            if config.scenario_type == MarketScenario.BULL_MARKET:
+                scenario_data = self._generate_bull_market(scenario_data, config)
+            elif config.scenario_type == MarketScenario.BEAR_MARKET:
+                scenario_data = self._generate_bear_market(scenario_data, config)
+            elif config.scenario_type == MarketScenario.CRISIS:
+                scenario_data = self._generate_crisis(scenario_data, config)
+            # ... other scenarios
+
+            # Add the macro-economic conditions
+            scenario_data = self._add_macro_conditions(scenario_data, config.macro_conditions)
+
+            return scenario_data
+
+        except Exception as e:
+            print(f"Error generating scenario: {e}")
+            return base_data
+
+    async def test_strategy_under_scenarios(self,
+                                            strategy,
+                                            scenarios: List[ScenarioConfig],
+                                            base_portfolio: Dict[str, float]) -> Dict:
+        """Test the strategy under different scenarios"""
+        results = {}
+
+        for scenario_config in scenarios:
+            try:
+                # Generate the scenario data
+                scenario_data = self.generate_scenario(
+                    base_data=await self._get_base_data(),
+                    config=scenario_config
+                )
+
+                # Test the strategy
+                scenario_results = await self._backtest_scenario(
+                    strategy=strategy,
+                    scenario_data=scenario_data,
+                    base_portfolio=base_portfolio,
+                    config=scenario_config
+                )
+
+                results[scenario_config.scenario_type.value] = scenario_results
+
+            except Exception as e:
+                print(f"Error testing scenario {scenario_config.scenario_type}: {e}")
+
+        return results
+
+    def generate_stress_scenarios(self) -> List[ScenarioConfig]:
+        """Generate a series of stress scenarios"""
+        return [
+            ScenarioConfig(
+                scenario_type=MarketScenario.CRISIS,
+                duration=63,  # ~3 months
+                severity=0.8,
+                parameters={
+                    'volatility_increase': 2.5,
+                    'correlation_increase': 0.4,
+                    'liquidity_decrease': 0.6
+                },
+                macro_conditions={
+                    'gdp_growth': -0.03,
+                    'inflation_rate': 0.06,
+                    'interest_rate': 0.04
+                }
+            ),
+            # ... other stress scenarios
+        ]
+
+    def _generate_bull_market(self, data: pd.DataFrame, config: ScenarioConfig) -> pd.DataFrame:
+        """Generate a bull-market scenario"""
+        try:
+            # Compute the base returns (fill the leading NaN so cumprod starts cleanly)
+            returns = data.pct_change().fillna(0)
+
+            # Shift returns upward for the bull market
+            bull_adjustment = 0.0002 * config.severity  # up to +2bp per day on average
+            adjusted_returns = returns + bull_adjustment
+
+            # Dampen the volatility
+            vol_adjustment = 0.8
+            adjusted_returns = adjusted_returns * vol_adjustment
+
+            # Rebuild the prices
+            return (1 + adjusted_returns).cumprod() * data.iloc[0]
+
+        except Exception as e:
+            print(f"Error generating bull market: {e}")
+            return data
+
+    def _generate_crisis(self, data: pd.DataFrame, config: ScenarioConfig) -> pd.DataFrame:
+        """Generate a crisis scenario"""
+        try:
+            # Phase 1: sharp initial drop
+            initial_drop = -0.15 * config.severity
+            data_crisis = data.copy()
+            drop_period = int(config.duration * 0.3)
+
+            # Generate the drop (row-wise factors, aligned on the index axis)
+            drop_factors = (1 + np.linspace(0, initial_drop, drop_period)).cumprod()
+            data_crisis.iloc[:drop_period] = data_crisis.iloc[:drop_period].mul(drop_factors, axis=0)
+
+            # Phase 2: high volatility
+            vol_period = data_crisis.iloc[drop_period:]
+            increased_vol = vol_period.pct_change().fillna(0) * (1 + config.parameters['volatility_increase'])
+            data_crisis.iloc[drop_period:] = (1 + increased_vol).cumprod() * vol_period.iloc[0]
+
+            return data_crisis
+
+        except Exception as e:
+            print(f"Error generating crisis: {e}")
+            return data
+
+    def _add_macro_conditions(self, data: pd.DataFrame, macro_conditions: Dict[str, float]) -> pd.DataFrame:
+        """Add the macro-economic conditions to the data"""
+        data_with_macro = data.copy()
+
+        for condition, value in macro_conditions.items():
+            data_with_macro[f'macro_{condition}'] = value
+
+        return data_with_macro
+
+    async def _backtest_scenario(self,
+                                 strategy,
+                                 scenario_data: pd.DataFrame,
+                                 base_portfolio: Dict[str, float],
+                                 config: ScenarioConfig) -> Dict:
+        """Run a backtest on a specific scenario"""
+        try:
+            # Adjust the risk manager parameters for the scenario
+            self.risk_manager.adjust_for_scenario(config)
+
+            # Run the backtest
+            results = await strategy.backtest(
+                data=scenario_data,
+                initial_portfolio=base_portfolio,
+                risk_manager=self.risk_manager
+            )
+
+            # Analyze the results
+            analysis = self._analyze_scenario_results(results, config)
+
+            return {
+                'config': config,
+                'results': results,
+                'analysis': analysis
+            }
+
+        except Exception as e:
+            print(f"Error backtesting scenario: {e}")
+            return {}
+
+    def _analyze_scenario_results(self, results: Dict, config: ScenarioConfig) -> Dict:
+        """Analyze the results of a scenario"""
+        return {
+            'max_drawdown': self._calculate_max_drawdown(results['returns']),
+            'recovery_time': self._calculate_recovery_time(results['returns']),
+            'risk_adjusted_return': self._calculate_risk_adjusted_return(results['returns']),
+            'strategy_adaptation': self._evaluate_strategy_adaptation(results, config),
+            'risk_management_effectiveness': self._evaluate_risk_management(results)
+        }
+
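The price-path transformation in `_generate_bull_market` can be checked in isolation. A minimal sketch, where `bull_adjust` is a standalone copy of the logic and the flat price series is purely illustrative:

```python
import pandas as pd

def bull_adjust(prices: pd.Series, severity: float) -> pd.Series:
    """Add a small daily drift, dampen volatility, then rebuild the price path."""
    returns = prices.pct_change().fillna(0)
    adjusted = (returns + 0.0002 * severity) * 0.8   # drift up, volatility x0.8
    return (1 + adjusted).cumprod() * prices.iloc[0]

# On a flat year of prices, only the drift acts: the path drifts steadily upward.
flat = pd.Series([100.0] * 252)
bull = bull_adjust(flat, severity=1.0)
```

On flat input the compounded drift of 1.6bp/day over 252 days lifts the final price to roughly 104, which makes the magnitude of the `severity` knob easy to reason about.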
src/analysis/strategy_optimizer.py
ADDED
@@ -0,0 +1,230 @@
+# src/analysis/strategy_optimizer.py
+
+import asyncio
+import numpy as np
+import pandas as pd
+from typing import Dict, List, Tuple
+from dataclasses import dataclass
+import optuna
+import matplotlib.pyplot as plt
+import seaborn as sns
+from concurrent.futures import ProcessPoolExecutor
+from datetime import datetime, timedelta
+
+from src.models.backtest_results import EnhancedBacktestResult
+
+
+@dataclass
+class OptimizationResult:
+    """Results of the strategy optimization"""
+    best_params: Dict[str, float]
+    performance_metrics: Dict[str, float]
+    optimization_path: pd.DataFrame
+    parameter_importance: Dict[str, float]
+
+class StrategyOptimizer:
+    """Strategy optimizer built on Optuna"""
+
+    def __init__(self, enhanced_backtest_engine, n_trials: int = 100):
+        self.backtest_engine = enhanced_backtest_engine
+        self.n_trials = n_trials
+        self.study = None
+
+    async def optimize_strategy(self,
+                                symbols: List[str],
+                                start_date: pd.Timestamp,
+                                end_date: pd.Timestamp) -> OptimizationResult:
+        """Optimize the strategy parameters"""
+
+        # Create the Optuna study
+        self.study = optuna.create_study(
+            direction="maximize",
+            study_name="strategy_optimization",
+            sampler=optuna.samplers.TPESampler(seed=42)
+        )
+
+        # Run the optimization. Optuna objectives are synchronous, so each
+        # trial drives the async objective to completion on a worker thread.
+        await asyncio.to_thread(
+            self.study.optimize,
+            lambda trial: asyncio.run(self._objective(trial, symbols, start_date, end_date)),
+            n_trials=self.n_trials,
+            show_progress_bar=True
+        )
+
+        # Analyze the results
+        optimization_results = self._analyze_optimization_results()
+
+        # Visualize the results
+        self._plot_optimization_results()
+
+        return optimization_results
+
+    async def _objective(self,
+                         trial: optuna.Trial,
+                         symbols: List[str],
+                         start_date: pd.Timestamp,
+                         end_date: pd.Timestamp) -> float:
+        """Objective function for the optimization"""
+
+        # Parameters to optimize
+        params = {
+            'technical_weight': trial.suggest_float('technical_weight', 0.2, 0.6),
+            'alternative_weight': trial.suggest_float('alternative_weight', 0.1, 0.4),
+            'social_weight': trial.suggest_float('social_weight', 0.1, 0.4),
+            'signal_threshold': trial.suggest_float('signal_threshold', 0.3, 0.7),
+            'position_size_factor': trial.suggest_float('position_size_factor', 0.5, 2.0),
+            'stop_loss': trial.suggest_float('stop_loss', 0.02, 0.10),
+            'take_profit': trial.suggest_float('take_profit', 0.03, 0.15)
+        }
+
+        # Run the backtest with the current parameters
+        results = await self.backtest_engine.run_enhanced_backtest(
+            symbols=symbols,
+            start_date=start_date,
+            end_date=end_date,
+            strategy_params=params
+        )
+
+        # Compute the optimization score
+        optimization_score = self._calculate_optimization_score(results)
+
+        return optimization_score
+
+    def _calculate_optimization_score(self, results: Dict) -> float:
+        """Compute the score for the optimization"""
+        # Extract the metrics
+        sharpe_ratio = results.performance_metrics['sharpe_ratio']
+        max_drawdown = abs(results.performance_metrics['max_drawdown'])
+        return_risk_ratio = results.performance_metrics['annual_return'] / max_drawdown
+
+        # Weighted combination of the metrics
+        score = (0.4 * sharpe_ratio +
+                 0.3 * return_risk_ratio +
+                 0.3 * (1 / (1 + max_drawdown)))
+
+        return score
+
+    def _analyze_optimization_results(self) -> OptimizationResult:
+        """Analyze the optimization results"""
+        # Best parameters
+        best_params = self.study.best_params
+
+        # Optimization path
+        optimization_path = pd.DataFrame(
+            [t.params for t in self.study.trials],
+            index=[t.number for t in self.study.trials]
+        )
+
+        # Parameter importance
+        parameter_importance = optuna.importance.get_param_importances(self.study)
+
+        return OptimizationResult(
+            best_params=best_params,
+            performance_metrics={'optimization_score': self.study.best_value},
+            optimization_path=optimization_path,
+            parameter_importance=parameter_importance
+        )
+
+    def _plot_optimization_results(self):
+        """Visualize the optimization results"""
+        # Style configuration
+        plt.style.use('seaborn')
+        fig = plt.figure(figsize=(15, 10))
+
+        # 1. Optimization history
+        plt.subplot(221)
+        optuna.visualization.matplotlib.plot_optimization_history(self.study)
+        plt.title("Progression de l'optimisation")
+
+        # 2. Parameter importance
+        plt.subplot(222)
+        optuna.visualization.matplotlib.plot_param_importances(self.study)
+        plt.title('Importance des paramètres')
+
+        # 3. Parameter correlations
+        plt.subplot(223)
+        optuna.visualization.matplotlib.plot_parallel_coordinate(self.study)
+        plt.title('Corrélations des paramètres')
+
+        # 4. Distribution of the best parameters
+        plt.subplot(224)
+        data = pd.DataFrame(
+            [t.params for t in self.study.trials],
+            columns=self.study.best_params.keys()
+        )
+        sns.boxplot(data=data)
+        plt.xticks(rotation=45)
+        plt.title('Distribution des paramètres')
+
+        plt.tight_layout()
+        plt.show()
+
+# Detailed visualization of backtest results
+class BacktestVisualizer:
+    """Detailed visualization of backtest results"""
+
+    @staticmethod
+    def create_performance_dashboard(results: EnhancedBacktestResult):
+        """Create a complete performance dashboard"""
+        plt.style.use('seaborn')
+        fig = plt.figure(figsize=(20, 12))
+
+        # 1. Portfolio growth curve
+        plt.subplot(331)
+        cumulative_returns = (1 + results.returns).cumprod()
+        plt.plot(cumulative_returns.index, cumulative_returns.values)
+        plt.title('Performance du portefeuille')
+
+        # 2. Drawdowns
+        plt.subplot(332)
+        drawdowns = BacktestVisualizer._calculate_drawdowns(results.returns)
+        plt.fill_between(drawdowns.index, drawdowns.values, 0, color='red', alpha=0.3)
+        plt.title('Drawdowns')
+
+        # 3. Return distribution
+        plt.subplot(333)
+        sns.histplot(results.returns, kde=True)
+        plt.title('Distribution des rendements')
+
+        # 4. Heat map of positions
+        plt.subplot(334)
+        sns.heatmap(results.positions.T, cmap='RdYlGn', center=0)
+        plt.title('Évolution des positions')
+
+        # 5. Contribution per signal
+        plt.subplot(335)
+        signal_returns = BacktestVisualizer._calculate_signal_returns(results)
+        sns.barplot(data=signal_returns)
+        plt.title('Performance par type de signal')
+
+        # 6. Risk metrics
+        plt.subplot(336)
+        risk_metrics = pd.Series(results.risk_metrics)
+        sns.barplot(x=risk_metrics.index, y=risk_metrics.values)
+        plt.xticks(rotation=45)
+        plt.title('Métriques de risque')
+
+        # 7. Impact of alternative data
+        plt.subplot(337)
+        alt_contribution = pd.Series(results.alternative_contribution)
+        sns.barplot(x=alt_contribution.index, y=alt_contribution.values)
+        plt.xticks(rotation=45)
+        plt.title('Contribution des données alternatives')
+
+        plt.tight_layout()
+        plt.show()
+
+    @staticmethod
+    def _calculate_drawdowns(returns: pd.Series) -> pd.Series:
+        """Compute the drawdowns"""
+        cumulative = (1 + returns).cumprod()
+        running_max = cumulative.expanding().max()
+        drawdowns = cumulative / running_max - 1
+        return drawdowns
+
+    @staticmethod
+    def _calculate_signal_returns(results: EnhancedBacktestResult) -> pd.DataFrame:
+        """Compute the mean return per signal type"""
+        mean_returns = {}
+        for signal_type in results.signals['direction'].unique():
+            mask = results.signals['direction'] == signal_type
+            mean_returns[signal_type] = results.returns[mask].mean()
+        return pd.DataFrame(mean_returns, index=['mean_return'])
+
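`_calculate_drawdowns` uses the standard running-peak formulation: drawdown is cumulative wealth relative to its historical maximum, minus one. A small sketch verifying it on a hand-built two-period return series (the numbers are illustrative):

```python
import pandas as pd

def calculate_drawdowns(returns: pd.Series) -> pd.Series:
    """Drawdown = cumulative wealth over its running maximum, minus 1."""
    cumulative = (1 + returns).cumprod()
    running_max = cumulative.expanding().max()
    return cumulative / running_max - 1

# +10% then -50%: wealth goes 1.00 -> 1.10 -> 0.55, a 50% drawdown from the peak.
dd = calculate_drawdowns(pd.Series([0.10, -0.50]))
```

The first element is 0 by construction (the first observation is its own peak), and the most negative value of this series is what `_calculate_optimization_score` consumes as `max_drawdown`.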
src/core/.ipynb_checkpoints/__init__-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/base-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/config-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/data_fetcher-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/data_generator-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/data_provider-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/event_types-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/life_events_manager-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/market_analyzer-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/market_states-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/portfolio_optimizer-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/profile_types-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/profiling-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/risk_manager-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/risk_metrics-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/technical_indicators-checkpoint.py
ADDED
File without changes
src/core/.ipynb_checkpoints/types-checkpoint.py
ADDED
File without changes