markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
![image](./images/pandas.png)
Pandas is the package of choice for working with structured data. Pandas is built on two closely related structures: the Series and the DataFrame. Both let you handle data as indexed tables. Pandas classes are built on NumPy classes, so NumPy's universal functions can be used on Pandas objects. | # import pandas with:
import pandas as pd
import numpy as np
%matplotlib inline | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Pandas Series
- Series are indexed; this is their advantage over NumPy arrays
- The `.values` and `.index` attributes show the different parts of a Series
- A Series is defined with `pd.Series([,], index=['','',])`
- An element can be accessed with `ma_serie['France']`
- Conditions can also be applied:
```python
ma_serie[ma_serie>5000000]
```
```python
'France' in ma_serie
```
- Series objects can be turned into dictionaries with `.to_dict()` (see the short sketch after this exercise)

**Exercise:** Define a Series holding the population of 5 countries, then display the countries whose population is > 50,000,000. | ser_pop = pd.Series([70,8,300,1200],index=["France","Suisse","USA","Chine"])
ser_pop
# extract a value using a key
ser_pop["France"]
# a position can also be used with .iloc[]
ser_pop.iloc[0]
# apply the condition inside []
ser_pop[ser_pop>50] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
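A short sketch of the remaining Series operations mentioned above (membership test and `.to_dict()`), reusing the `ser_pop` object defined in this exercise:

```python
# membership test against the index
"France" in ser_pop        # True

# convert the Series into a plain dictionary
ser_pop.to_dict()          # {'France': 70, 'Suisse': 8, 'USA': 300, 'Chine': 1200}
```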
Other operations on Series objects
- To set the name of the Series, use `.name`
- To set the title of the observation column, use `.index.name`

**Exercise:** Set the names of the object and of the country column for the previous Series | ser_pop.name = "Populations"
ser_pop.index.name = "Pays"
ser_pop | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Missing data
In pandas, missing values are identified using NumPy's `np.nan`. Other helper functions are available, such as: | pd.Series([2,np.nan,4],index=['a','b','c'])
pd.isna(pd.Series([2,np.nan,4],index=['a','b','c']))
pd.notna(pd.Series([2,np.nan,4],index=['a','b','c'])) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Dates with pandas
- Python has a datetime module that makes it easy to handle dates
- Pandas lets you apply date operations to Series and DataFrames
- The Python date format is `YYYY-MM-DD HH:MM:SS`
- Dates can be generated with `pd.date_range()`, with different frequencies via `freq=`
- These dates can be used as the index of a DataFrame or of a Series
- The frequency can be changed with `.asfreq()`
- To turn a string into a date, use `pd.to_datetime()` with the option `dayfirst=True` for French-style dates
- A format such as `%Y%m%d` can also be given to speed up parsing
(see the short sketch after this cell for `pd.to_datetime()` and `.asfreq()`)

**Exercise:** Create a Series with dates starting on October 3, 2017, day by day up to today. Display the result in a plot (use the `.plot()` method). | dates = pd.date_range("2017-10-03", "2020-02-27",freq="W")
valeurs = np.random.random(size=len(dates))
ma_serie=pd.Series(valeurs, index =dates)
ma_serie.plot()
len(dates) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
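A minimal sketch of the two date tools mentioned above that the exercise does not use, `pd.to_datetime()` with `dayfirst=True` and `.asfreq()` (the date string is made up for the example):

```python
# parse a French-style date string (day first)
pd.to_datetime("03/10/2017", dayfirst=True)   # Timestamp('2017-10-03 00:00:00')

# upsample the weekly series built above to a daily frequency (missing days become NaN)
ma_serie.asfreq("D").head()
```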
The DataFrame
- DataFrames are very flexible objects that can be built in different ways
- They can be built from copy/pasted data, directly from the Internet, or by entering values manually
- DataFrames are close to dictionaries, and these objects can be built with `DataFrame(dico)`
- Many more details on creating DataFrames can be found on this site:

Building a DataFrame
A DataFrame can simply be built with the pd.DataFrame() class from various structures: | frame1=pd.DataFrame(np.random.randn(10).reshape(5,2),
index=["obs_"+str(i) for i in range(5)],
columns=["col_"+str(i) for i in range(2)])
frame1 | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Operations on DataFrames
The column names can be displayed with: | print(frame1.columns) | Index(['col_0', 'col_1'], dtype='object')
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
A column can be accessed with:
- `frame1.col_0`: beware of column names containing spaces...
- `frame1['col_0']`
A cell can be accessed with (see the short sketch after this cell):
- `frame1.loc['obs_1','col_0']`: uses the index and the column names
- `frame1.iloc[1,0]`: uses positions within the DataFrame

Display and summary options
To display the first 3 rows, use: | frame1.head(3) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
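A quick sketch of the two cell-access styles described above, using the `frame1` DataFrame built earlier (its index labels are `obs_0` to `obs_4`):

```python
# label-based access: row 'obs_1', column 'col_0'
frame1.loc["obs_1", "col_0"]

# position-based access: second row, first column (the same cell)
frame1.iloc[1, 0]

# a whole column
frame1["col_0"]
```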
To display a summary of the DataFrame: | frame1.info() | <class 'pandas.core.frame.DataFrame'>
Index: 5 entries, obs_0 to obs_4
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 5 non-null float64
1 col_1 5 non-null float64
dtypes: float64(2)
memory usage: 120.0+ bytes
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Importing external data
Pandas is the most efficient tool for importing external data; it supports many formats including csv, Excel, SQL, SAS...

Importing data with Pandas
Whatever the file type, Pandas has a function:
```python
frame=pd.read_...('chemin_du_fichier/nom_du_fichier',...)
```
To write a DataFrame to a file, use:
```python
frame.to_...('chemin_du_fichier/nom_du_fichier',...)
```

**Exercise:** Import a `.csv` file with `pd.read_csv()`. Use the file "./data/airbnb.csv" | # use the id column as the index of our DataFrame
airbnb = pd.read_csv("https://www.stat4decision.com/airbnb.csv",index_col="id")
airbnb.info()
# the price column is stored as object, i.e. as strings
# 2933 listings cost $80 per night
airbnb["price"].value_counts()
dpt = pd.read_csv("./data/base-dpt.csv", sep = ";")
dpt.head()
dpt.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1300 entries, 0 to 1299
Data columns (total 38 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CODGEO 1300 non-null int64
1 LIBGEO 1300 non-null object
2 REG 1300 non-null int64
3 DEP 1300 non-null int64
4 P14_POP 1297 non-null float64
5 P09_POP 1297 non-null float64
6 SUPERF 1297 non-null float64
7 NAIS0914 1297 non-null float64
8 DECE0914 1297 non-null float64
9 P14_MEN 1297 non-null float64
10 NAISD16 1296 non-null float64
11 DECESD16 1296 non-null float64
12 P14_LOG 1297 non-null float64
13 P14_RP 1297 non-null float64
14 P14_RSECOCC 1297 non-null float64
15 P14_LOGVAC 1297 non-null float64
16 P14_RP_PROP 1297 non-null float64
17 NBMENFISC14 1280 non-null float64
18 PIMP14 561 non-null float64
19 MED14 1280 non-null float64
20 TP6014 462 non-null float64
21 P14_EMPLT 1297 non-null float64
22 P14_EMPLT_SAL 1297 non-null float64
23 P09_EMPLT 1300 non-null float64
24 P14_POP1564 1297 non-null float64
25 P14_CHOM1564 1297 non-null float64
26 P14_ACT1564 1297 non-null float64
27 ETTOT15 1299 non-null float64
28 ETAZ15 1299 non-null float64
29 ETBE15 1299 non-null float64
30 ETFZ15 1299 non-null float64
31 ETGU15 1299 non-null float64
32 ETGZ15 1299 non-null float64
33 ETOQ15 1299 non-null float64
34 ETTEF115 1299 non-null float64
35 ETTEFP1015 1299 non-null float64
36 Geo Shape 1297 non-null object
37 geo_point_2d 1297 non-null object
dtypes: float64(32), int64(3), object(3)
memory usage: 386.1+ KB
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Other data types

JSON
JSON objects look like dictionaries. Use the `json` module, then the `json.loads()` function, to turn a JSON input into a json object (a short sketch follows the next cell).

HTML
Use `pd.read_html(url)`. This function relies on the `beautifulsoup` and `html5lib` packages. It returns a list of DataFrames representing all the tables of the page. The element of interest is then retrieved with `frame_list[0]`

**Exercise:** Import an HTML table from the page | bank = pd.read_html("http://www.fdic.gov/bank/individual/failed/banklist.html")
# read_html() stores the tables of a web page in a list
type(bank)
len(bank)
bank[0].head(10)
nba = pd.read_html("https://en.wikipedia.org/wiki/2018%E2%80%9319_NBA_season")
len(nba)
nba[3] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
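A minimal sketch of the JSON path mentioned above, with a made-up JSON string for illustration:

```python
import json

json_str = '[{"pays": "France", "pop": 67}, {"pays": "Suisse", "pop": 8}]'
obj = json.loads(json_str)      # a list of dictionaries
pd.DataFrame(obj)               # and from there a DataFrame
```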
Importing from Excel
There are two approaches for Excel:
- use `pd.read_excel()`
- use the `pd.ExcelFile()` class
In the latter case, use:
```python
xlsfile=pd.ExcelFile('fichier.xlsx')
xlsfile.parse('Sheet1')
```

**Exercise:** Import an Excel file with both approaches, using `credit2.xlsx` and `ville.xls` | pd.read_excel("./data/credit2.xlsx",usecols=["Age","Gender"])
pd.read_excel("./data/credit2.xlsx",usecols="A:C")
credit2 = pd.read_excel("./data/credit2.xlsx", index_col="Customer_ID")
credit2.head()
# create an ExcelFile object
ville = pd.ExcelFile("./data/ville.xls")
ville.sheet_names
# extract every sheet whose name contains the word "ville" into a list of DataFrames
list_feuilles_ville = []
for nom in ville.sheet_names:
if "ville" in nom:
list_feuilles_ville.append(ville.parse(nom)) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We create a function that imports the Excel sheets whose name contains the term nom_dans_feuille | def import_excel_feuille(chemin_fichier, nom_dans_feuille = ""):
""" fonction qui importe les feuilles excel ayant le terme nom_dans_feuille dans le nom de la feuille"""
excel = pd.ExcelFile(chemin_fichier)
list_feuilles = []
for nom_feuille in excel.sheet_names:
if nom_dans_feuille in nom_feuille:
            list_feuilles.append(excel.parse(nom_feuille))
return list_feuilles
list_ain = import_excel_feuille("./data/ville.xls",nom_dans_feuille="ain")
list_ain[0].head() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Importing SQL data
Pandas has a `read_sql()` function that imports databases or queries directly into DataFrames. A connector is still needed to access the databases. To set up this connector, we use the SQLAlchemy package. The code differs with the type of database, but its structure is always the same. | # import the connection tool
from sqlalchemy import create_engine | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We create a connection
```python
connexion=create_engine("sqlite:///(...).sqlite")
```
We use one of the Pandas functions to load the data
```python
requete="""select ... from ..."""
frame_sql=pd.read_sql_query(requete,connexion)
```

**Exercise:** Import the SQLite salaries database and load the Salaries table into a DataFrame | connexion=create_engine("sqlite:///./data/salaries.sqlite")
connexion.table_names()
salaries = pd.read_sql_query("select * from salaries", con=connexion)
salaries.head() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
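Writing works the same way through the `to_...` writers mentioned earlier; a small sketch (the csv path and the new table name are made up for the example):

```python
# write the DataFrame back to a csv file or to a new SQL table
salaries.to_csv("./data/salaries_export.csv", index=False)
salaries.to_sql("salaries_copy", con=connexion, if_exists="replace", index=False)
```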
Importing from SPSS
Pandas has a `pd.read_spss()` function. Warning! It requires the latest version of Pandas and additional packages!

**Exercise:** Import the SPSS file located in ./data/ | #base = pd.read_spss("./data/Base.sav") | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Sorting with Pandas
To sort, use:
- `.sort_index()` to sort by index
- `.sort_values()` to sort by values
- `.rank()` to display the rank of the observations
Several sort keys can be combined in the same operation. In that case, use lists of columns:
```python
frame.sort_values(["col_1","col_2"])
```

**Exercise:** Sort the salaries data by TotalPay and JobTitle | salaries.sort_values(["JobTitle","TotalPay"],ascending=[True, False]) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Simple statistics
DataFrames have many methods for computing simple statistics:
- `.sum(axis=0)` sums by column
- `.sum(axis=1)` sums by row
- `.min()` and `.max()` give the minimum and maximum by column
- `.idxmin()` and `.idxmax()` give the index of the minimum and of the maximum
- `.describe()` displays a table of descriptive statistics by column
- `.corr()` computes the correlation between columns

**Exercise:** Compute the descriptive statistics for the AirBnB data. We can focus on the `price` column (some preprocessing is needed) | # this column is stored as object, it has to be converted
airbnb["price"].dtype
airbnb["price_num"] = pd.to_numeric(airbnb["price"].str.replace("$","")
.str.replace(",",""))
airbnb["price_num"].dtype
airbnb["price_num"].mean()
airbnb["price_num"].describe()
# get the id of the listing with the maximum price
airbnb["price_num"].idxmax()
# display that listing
airbnb.loc[airbnb["price_num"].idxmax()] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Computing a weighted mean on survey data | base = pd.read_csv("./data/Base.csv")
#weighted mean
np.average(base["resp_age"],weights=base["Weight"])
# plain mean
base["resp_age"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Using statsmodels | from statsmodels.stats.weightstats import DescrStatsW
# select the numeric columns
base_num = base.select_dtypes(np.number)
# compute the weighted descriptive statistics
mes_stat = DescrStatsW(base_num, weights=base["Weight"])
base_num.columns
mes_stat.var
mes_stat_age = DescrStatsW(base["resp_age"], weights=base["Weight"])
mes_stat_age.mean | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We build a function that computes the weighted descriptive statistics of a column | def stat_desc_w_ipsos(data, columns, weights):
""" Cette fonction calcule et affiche les moyennes et écarts-types pondérés
Input : - data : données sous forme de DataFrame
- columns : nom des colonnes quanti à analyser
- weights : nom de la colonne des poids
"""
from statsmodels.stats.weightstats import DescrStatsW
mes_stats = DescrStatsW(data[columns],weights=data[weights])
print("Moyenne pondérée :", mes_stats.mean)
print("Ecart-type pondéré :", mes_stats.std)
stat_desc_w_ipsos(base,"resp_age","Weight") | Moyenne pondérée : 48.40297631233564
Ecart-type pondéré : 17.1309963999935
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Handling missing data
- Missing values are identified by `NaN`
- `.dropna()` removes missing values in a Series, and the whole row in a DataFrame
- To drop by column, use `.dropna(axis=1)`
- To replace every missing value, use `.fillna(valeur)`
(a short sketch follows the next cell)

Joins with Pandas
We want to join datasets using keys (shared variables)
- `pd.merge()` joins two DataFrames, with the option `on='key'`
- The `how=` option can take:
  - `left`: keep the left-hand dataset; missing values are added for the right-hand data
  - `outer`: keep all values from both datasets
  - ...
- Several keys can be used, joining on both keys with `on=['key1','key2']`
For more details:

**Exercise:** Join two dataframes (credit1 and credit2). | credit1 = pd.read_csv("./data/credit1.txt",sep="\t")
credit_global = pd.merge(credit1,credit2,how="inner",on="Customer_ID")
credit_global.head() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
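A small sketch of the missing-data methods listed above (`.dropna()` and `.fillna()`), on a toy DataFrame built just for the example:

```python
df_na = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 5.0, 6.0]})

df_na.dropna()          # drop every row containing at least one NaN
df_na.dropna(axis=1)    # drop every column containing at least one NaN
df_na.fillna(0)         # replace every NaN with 0
```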
We join the Airbnb listings data with the calendar data describing apartment occupancy | airbnb_reduit = airbnb[["price_num","latitude","longitude"]]
calendar = pd.read_csv("https://www.stat4decision.com/calendar.csv.gz")
calendar.head()
new_airbnb = pd.merge(calendar,airbnb[["price_num","latitude","longitude"]],
left_on = "listing_id",right_index=True)
new_airbnb.shape | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We want to extract basic statistics
For example, the mean price of listings on July 8, 2018: | new_airbnb[new_airbnb["date"]=='2018-07-08']["price_num"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We extract the share of available / occupied nights: | new_airbnb["available"].value_counts(normalize = True) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Looking at the share of listings occupied on January 8, 2019, we get: | new_airbnb[new_airbnb["date"]=='2019-01-08']["available"].value_counts(normalize = True) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
The mean price of the apartments available on July 8, 2018: | new_airbnb[(new_airbnb["date"]=='2018-07-08')&(new_airbnb["available"]=='t')]["price_num"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We convert the date column from strings to DateTime, which enables new operations: | new_airbnb["date"]= pd.to_datetime(new_airbnb["date"])
# build a column with the day of the week
new_airbnb["jour_semaine"]=new_airbnb["date"].dt.day_name() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
The mean price of the available Saturday nights is therefore: | new_airbnb[(new_airbnb["jour_semaine"]=='Saturday')&(new_airbnb["available"]=='t')]["price_num"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Handling duplicates
- Use `.duplicated()` or `.drop_duplicates()` when repeated rows should be removed
- You can focus on a single variable by passing its name directly. In that case the first occurrence is kept; to keep the last occurrence, use the option `keep="last"`. For example:
```python
frame1.drop_duplicates(["col_0","col_1"],keep="last")
```
(a short sketch follows the next cell)

Discretization
To discretize, use the `pd.cut()` function: define a list of cut points and pass that list as the second parameter. Once discretized, the resulting categories can be displayed with `.categories`. Occurrences can be counted with `pd.value_counts()`. It is also possible to pass the number of bins as the second parameter. We will also use `qcut()`

**Exercise:** Create a variable in the AirBnB dataframe giving price levels. | airbnb["price_disc1"]=pd.cut(airbnb["price_num"],bins=5)
airbnb["price_disc2"]=pd.qcut(airbnb["price_num"],5)
airbnb["price_disc1"].value_counts()
airbnb["price_disc2"].value_counts() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Pivot tables with Pandas
DataFrames have methods to build pivot tables, in particular:
```python
frame1.pivot_table()
```
This method handles many cases, with both standard and custom functions.

**Exercise:** Display a pivot table for the AirBnB data. | # define a custom aggregation function
def moy2(x):
return x.mean()/x.var() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We cross room_type with the price level and look at the mean review_scores_rating + the number of occurrences and a home-made function: | airbnb['room_type']
airbnb['price_disc2']
airbnb['review_scores_rating']
airbnb.pivot_table(values=["review_scores_rating",'review_scores_cleanliness'],
index="room_type",
columns='price_disc2',aggfunc=["count","mean",moy2]) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Using GroupBy on DataFrames
- `.groupby` gathers observations according to a grouping variable
- For example, `frame.groupby('X').mean()` gives the means per group of `X`
- `.size()` gives the size of the groups, and other functions can be used (`.sum()`)
- Many processing operations can be carried out with the groupby | airbnb_group_room = airbnb.groupby(['room_type','price_disc2'])
airbnb_group_room["price_num"].describe()
# several statistics can be displayed at once
airbnb_group_room["price_num"].agg(["mean","median","std","count"])
new_airbnb.groupby(['available','jour_semaine'])["price_num"].agg(["mean","count"]) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Try using a lambda function on the groupby (one possible sketch is given after the next cell)

**Exercise:**
- Salaries data
- Use `groupby()` to gather the job types
- Then compute statistics for each type
The `.agg()` method can be used, with for example `'mean'` as parameter. The `.apply()` method combined with a lambda function is also frequently used. | # convert every JobTitle to lowercase
salaries["JobTitle"]= salaries["JobTitle"].str.lower()
# number of distinct JobTitle values
salaries["JobTitle"].nunique()
salaries.groupby("JobTitle")["TotalPay"].mean().sort_values(ascending=False)
salaries.groupby("JobTitle")["TotalPay"].agg(["mean","count"]).sort_values("count",ascending=False) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Advanced graphical representations can also be produced: | import matplotlib.pyplot as plt
plt.figure(figsize=(10,5))
plt.scatter("longitude","latitude", data = airbnb[airbnb["price_num"]<150], s=1,c = "price_num", cmap=plt.get_cmap("jet"))
plt.colorbar()
plt.savefig("paris_airbnb.jpg")
airbnb[airbnb["price_num"]<150] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Linear Regression with a Real Dataset
This Colab uses a real dataset to predict the prices of houses in California.

Learning Objectives:
After doing this Colab, you'll know how to do the following:
* Read a .csv file into a [pandas](https://developers.google.com/machine-learning/glossary/pandas) DataFrame.
* Examine a [dataset](https://developers.google.com/machine-learning/glossary/data_set).
* Experiment with different [features](https://developers.google.com/machine-learning/glossary/feature) in building a model.
* Tune the model's [hyperparameters](https://developers.google.com/machine-learning/glossary/hyperparameter).

The Dataset
The [dataset for this exercise](https://developers.google.com/machine-learning/crash-course/california-housing-data-description) is based on 1990 census data from California. The dataset is old but still provides a great opportunity to learn about machine learning programming.

Use the right version of TensorFlow
The following hidden code cell ensures that the Colab will run on TensorFlow 2.X. | #@title Run on TensorFlow 2.x
%tensorflow_version 2.x | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Import relevant modules
The following hidden code cell imports the necessary code to run the code in the rest of this Colaboratory. | #@title Import relevant modules
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
The dataset
Datasets are often stored on disk or at a URL in [.csv format](https://wikipedia.org/wiki/Comma-separated_values). A well-formed .csv file contains column names in the first row, followed by many rows of data. A comma divides each value in each row. For example, here are the first five rows of the .csv file holding the California Housing Dataset:
```
"longitude","latitude","housing_median_age","total_rooms","total_bedrooms","population","households","median_income","median_house_value"
-114.310000,34.190000,15.000000,5612.000000,1283.000000,1015.000000,472.000000,1.493600,66900.000000
-114.470000,34.400000,19.000000,7650.000000,1901.000000,1129.000000,463.000000,1.820000,80100.000000
-114.560000,33.690000,17.000000,720.000000,174.000000,333.000000,117.000000,1.650900,85700.000000
-114.570000,33.640000,14.000000,1501.000000,337.000000,515.000000,226.000000,3.191700,73400.000000
```

Load the .csv file into a pandas DataFrame
This Colab, like many machine learning programs, gathers the .csv file and stores the data in memory as a pandas Dataframe. pandas is an open source Python library. The primary datatype in pandas is a DataFrame. You can imagine a pandas DataFrame as a spreadsheet in which each row is identified by a number and each column by a name. pandas is itself built on another open source Python library called NumPy. If you aren't familiar with these technologies, please view these two quick tutorials:
* [NumPy](https://colab.research.google.com/github/google/eng-edu/blob/master/ml/cc/exercises/numpy_ultraquick_tutorial.ipynb?utm_source=linearregressionreal-colab&utm_medium=colab&utm_campaign=colab-external&utm_content=numpy_tf2-colab&hl=en)
* [Pandas DataFrames](https://colab.research.google.com/github/google/eng-edu/blob/master/ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb?utm_source=linearregressionreal-colab&utm_medium=colab&utm_campaign=colab-external&utm_content=pandas_tf2-colab&hl=en)

The following code cell imports the .csv file into a pandas DataFrame and scales the values in the label (`median_house_value`): | # Import the dataset.
training_df = pd.read_csv(filepath_or_buffer="https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
# Scale the label.
training_df["median_house_value"] /= 1000.0
# Print the first rows of the pandas DataFrame.
training_df.head() | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Scaling `median_house_value` puts the value of each house in units of thousands. Scaling will keep loss values and learning rates in a friendlier range. Although scaling a label is usually *not* essential, scaling features in a multi-feature model usually *is* essential.

Examine the dataset
A large part of most machine learning projects is getting to know your data. The pandas API provides a `describe` function that outputs the following statistics about every column in the DataFrame:
* `count`, which is the number of rows in that column. Ideally, `count` contains the same value for every column.
* `mean` and `std`, which contain the mean and standard deviation of the values in each column.
* `min` and `max`, which contain the lowest and highest values in each column.
* `25%`, `50%`, `75%`, which contain various [quantiles](https://developers.google.com/machine-learning/glossary/quantile). | # Get statistics on the dataset.
training_df.describe()
| _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 1: Identify anomalies in the dataset
Do you see any anomalies (strange values) in the data? | #@title Double-click to view a possible answer.
# The maximum value (max) of several columns seems very
# high compared to the other quantiles. For example,
# example the total_rooms column. Given the quantile
# values (25%, 50%, and 75%), you might expect the
# max value of total_rooms to be approximately
# 5,000 or possibly 10,000. However, the max value
# is actually 37,937.
# When you see anomalies in a column, become more careful
# about using that column as a feature. That said,
# anomalies in potential features sometimes mirror
# anomalies in the label, which could make the column
# be (or seem to be) a powerful feature.
# Also, as you will see later in the course, you
# might be able to represent (pre-process) raw data
# in order to make columns into useful features. | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Define functions that build and train a model
The following code defines two functions:
* `build_model(my_learning_rate)`, which builds a randomly-initialized model.
* `train_model(model, feature, label, epochs)`, which trains the model from the examples (feature and label) you pass.

Since you don't need to understand model building code right now, we've hidden this code cell. You may optionally double-click the following headline to see the code that builds and trains a model. | #@title Define the functions that build and train a model
def build_model(my_learning_rate):
"""Create and compile a simple linear regression model."""
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Describe the topography of the model.
# The topography of a simple linear regression model
# is a single node in a single layer.
model.add(tf.keras.layers.Dense(units=1,
input_shape=(1,)))
# Compile the model topography into code that TensorFlow can efficiently
# execute. Configure training to minimize the model's mean squared error.
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=my_learning_rate),
loss="mean_squared_error",
metrics=[tf.keras.metrics.RootMeanSquaredError()])
return model
def train_model(model, df, feature, label, epochs, batch_size):
"""Train the model by feeding it data."""
# Feed the model the feature and the label.
# The model will train for the specified number of epochs.
history = model.fit(x=df[feature],
y=df[label],
batch_size=batch_size,
epochs=epochs)
# Gather the trained model's weight and bias.
trained_weight = model.get_weights()[0]
trained_bias = model.get_weights()[1]
# The list of epochs is stored separately from the rest of history.
epochs = history.epoch
# Isolate the error for each epoch.
hist = pd.DataFrame(history.history)
# To track the progression of training, we're going to take a snapshot
# of the model's root mean squared error at each epoch.
rmse = hist["root_mean_squared_error"]
return trained_weight, trained_bias, epochs, rmse
print("Defined the create_model and traing_model functions.") | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Define plotting functions
The following [matplotlib](https://developers.google.com/machine-learning/glossary/matplotlib) functions create the following plots:
* a scatter plot of the feature vs. the label, and a line showing the output of the trained model
* a loss curve

You may optionally double-click the headline to see the matplotlib code, but note that writing matplotlib code is not an important part of learning ML programming. | #@title Define the plotting functions
def plot_the_model(trained_weight, trained_bias, feature, label):
"""Plot the trained model against 200 random training examples."""
# Label the axes.
plt.xlabel(feature)
plt.ylabel(label)
# Create a scatter plot from 200 random points of the dataset.
random_examples = training_df.sample(n=200)
plt.scatter(random_examples[feature], random_examples[label])
# Create a red line representing the model. The red line starts
# at coordinates (x0, y0) and ends at coordinates (x1, y1).
x0 = 0
y0 = trained_bias
x1 = 10000
y1 = trained_bias + (trained_weight * x1)
plt.plot([x0, x1], [y0, y1], c='r')
# Render the scatter plot and the red line.
plt.show()
def plot_the_loss_curve(epochs, rmse):
"""Plot a curve of loss vs. epoch."""
plt.figure()
plt.xlabel("Epoch")
plt.ylabel("Root Mean Squared Error")
plt.plot(epochs, rmse, label="Loss")
plt.legend()
plt.ylim([rmse.min()*0.97, rmse.max()])
plt.show()
print("Defined the plot_the_model and plot_the_loss_curve functions.") | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Call the model functions
An important part of machine learning is determining which [features](https://developers.google.com/machine-learning/glossary/feature) correlate with the [label](https://developers.google.com/machine-learning/glossary/label). For example, real-life home-value prediction models typically rely on hundreds of features and synthetic features. However, this model relies on only one feature. For now, you'll arbitrarily use `total_rooms` as that feature. | # The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 30
batch_size = 30
# Specify the feature and the label.
my_feature = "total_rooms" # the total number of rooms on a specific city block.
my_label="median_house_value" # the median value of a house on a specific city block.
# That is, you're going to create a model that predicts house value based
# solely on total_rooms.
# Discard any pre-existing version of the model.
my_model = None
# Invoke the functions.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
print("\nThe learned weight for your model is %.4f" % weight)
print("The learned bias for your model is %.4f\n" % bias )
plot_the_model(weight, bias, my_feature, my_label)
plot_the_loss_curve(epochs, rmse) | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
A certain amount of randomness plays into training a model. Consequently, you'll get different results each time you train the model. That said, given the dataset and the hyperparameters, the trained model will generally do a poor job describing the feature's relation to the label.

Use the model to make predictions
You can use the trained model to make predictions. In practice, [you should make predictions on examples that are not used in training](https://developers.google.com/machine-learning/crash-course/training-and-test-sets/splitting-data). However, for this exercise, you'll just work with a subset of the same training dataset. A later Colab exercise will explore ways to make predictions on examples not used in training.

First, run the following code to define the house prediction function: | def predict_house_values(n, feature, label):
"""Predict house values based on a feature."""
batch = training_df[feature][10000:10000 + n]
predicted_values = my_model.predict_on_batch(x=batch)
print("feature label predicted")
print(" value value value")
print(" in thousand$ in thousand$")
print("--------------------------------------")
for i in range(n):
print ("%5.0f %6.0f %15.0f" % (training_df[feature][10000 + i],
training_df[label][10000 + i],
predicted_values[i][0] )) | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Now, invoke the house prediction function on 10 examples: | predict_house_values(10, my_feature, my_label) | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 2: Judge the predictive power of the model
Look at the preceding table. How close is the predicted value to the label value? In other words, does your model accurately predict house values? | #@title Double-click to view the answer.
# Most of the predicted values differ significantly
# from the label value, so the trained model probably
# doesn't have much predictive power. However, the
# first 10 examples might not be representative of
# the rest of the examples. | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 3: Try a different feature
The `total_rooms` feature had only a little predictive power. Would a different feature have greater predictive power? Try using `population` as the feature instead of `total_rooms`.

Note: When you change features, you might also need to change the hyperparameters. | my_feature = "?" # Replace the ? with population or possibly
# a different column name.
# Experiment with the hyperparameters.
learning_rate = 2
epochs = 3
batch_size = 120
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_model(weight, bias, my_feature, my_label)
plot_the_loss_curve(epochs, rmse)
predict_house_values(15, my_feature, my_label)
#@title Double-click to view a possible solution.
my_feature = "population" # Pick a feature other than "total_rooms"
# Possibly, experiment with the hyperparameters.
learning_rate = 0.05
epochs = 18
batch_size = 3
# Don't change anything below.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_model(weight, bias, my_feature, my_label)
plot_the_loss_curve(epochs, rmse)
predict_house_values(10, my_feature, my_label) | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Did `population` produce better predictions than `total_rooms`? | #@title Double-click to view the answer.
# Training is not entirely deterministic, but population
# typically converges at a slightly higher RMSE than
# total_rooms. So, population appears to be about
# the same or slightly worse at making predictions
# than total_rooms. | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 4: Define a synthetic feature
You have determined that `total_rooms` and `population` were not useful features. That is, neither the total number of rooms in a neighborhood nor the neighborhood's population successfully predicted the median house price of that neighborhood. Perhaps though, the *ratio* of `total_rooms` to `population` might have some predictive power. That is, perhaps block density relates to median house value.

To explore this hypothesis, do the following:
1. Create a [synthetic feature](https://developers.google.com/machine-learning/glossary/synthetic_feature) that's a ratio of `total_rooms` to `population`. (If you are new to pandas DataFrames, please study the [Pandas DataFrame Ultraquick Tutorial](https://colab.research.google.com/github/google/eng-edu/blob/master/ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb?utm_source=linearregressionreal-colab&utm_medium=colab&utm_campaign=colab-external&utm_content=pandas_tf2-colab&hl=en).)
2. Tune the three hyperparameters.
3. Determine whether this synthetic feature produces a lower loss value than any of the single features you tried earlier in this exercise. | # Define a synthetic feature named rooms_per_person
training_df["rooms_per_person"] = ? # write your code here.
# Don't change the next line.
my_feature = "rooms_per_person"
# Assign values to these three hyperparameters.
learning_rate = ?
epochs = ?
batch_size = ?
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_loss_curve(epochs, rmse)
predict_house_values(15, my_feature, my_label)
#@title Double-click to view a possible solution to Task 4.
# Define a synthetic feature
training_df["rooms_per_person"] = training_df["total_rooms"] / training_df["population"]
my_feature = "rooms_per_person"
# Tune the hyperparameters.
learning_rate = 0.06
epochs = 24
batch_size = 30
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, mae = train_model(my_model, training_df,
my_feature, my_label,
epochs, batch_size)
plot_the_loss_curve(epochs, mae)
predict_house_values(15, my_feature, my_label)
| _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Based on the loss values, this synthetic feature produces a better model than the individual features you tried in Task 2 and Task 3. However, the model still isn't creating great predictions.

Task 5. Find feature(s) whose raw values correlate with the label
So far, we've relied on trial-and-error to identify possible features for the model. Let's rely on statistics instead.

A **correlation matrix** indicates how each attribute's raw values relate to the other attributes' raw values. Correlation values have the following meanings:
* `1.0`: perfect positive correlation; that is, when one attribute rises, the other attribute rises.
* `-1.0`: perfect negative correlation; that is, when one attribute rises, the other attribute falls.
* `0.0`: no correlation; the two columns [are not linearly related](https://en.wikipedia.org/wiki/Correlation_and_dependence/media/File:Correlation_examples2.svg).

In general, the higher the absolute value of a correlation value, the greater its predictive power. For example, a correlation value of -0.8 implies far more predictive power than a correlation of -0.2.

The following code cell generates the correlation matrix for attributes of the California Housing Dataset: | # Generate a correlation matrix.
training_df.corr() | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
The correlation matrix shows nine potential features (including a syntheticfeature) and one label (`median_house_value`). A strong negative correlation or strong positive correlation with the label suggests a potentially good feature. **Your Task:** Determine which of the nine potential features appears to be the best candidate for a feature? | #@title Double-click here for the solution to Task 5
# The `median_income` correlates 0.7 with the label
# (median_house_value), so median_income` might be a
# good feature. The other seven potential features
# all have a correlation relatively close to 0.
# If time permits, try median_income as the feature
# and see whether the model improves. | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Analysis of O3 and SO2: Arduair vs. the Universidad Pontificia Bolivariana station
The results produced by the Arduair device were compared with those of the air quality station owned by the Universidad Pontificia Bolivariana, Bucaramanga campus.
Note that during these tests the university's SO2 instrument was itself under suspicion, so those results cannot be interpreted as reliable.

Library imports | import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
import xlrd
%matplotlib inline
pd.options.mode.chained_assignment = None | _____no_output_____ | MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
Correlation studies
Correlation plots were produced for ozone and sulfur dioxide against the reference station. The raw readings of the ozone sensor were also compared with the calibration equations proposed by the [datasheet](https://www.terraelectronica.ru/%2Fds%2Fpdf%2FM%2Fmq131-low.pdf); better results were obtained with the unprocessed data. | #Arduair prototype data
dfArd=pd.read_csv('DATA.TXT',names=['year','month','day','hour','minute','second','hum','temp','pr','l','co','so2','no2','o3','pm10','pm25','void'])
#Dates to datetime
dates=dfArd[['year','month','day','hour','minute','second']]
dates['year']=dates['year'].add(2000)
dates['minute']=dates['minute'].add(60)
dfArd['datetime']=pd.to_datetime(dates)
#aggregation
dfArdo3=dfArd[['datetime','o3']]
dfArdso2=dfArd[['datetime','so2']]
#O3 processing
MQ131_RL= 10 #Load resistance
MQ131_VIN = 5 #Vin
MQ131_RO = 5 #reference resistance
dfArdo3['rs']=((MQ131_VIN/dfArdo3['o3'])/dfArdo3['o3'])*MQ131_RL;
dfArdo3['rs_ro'] = dfArdo3['rs']/MQ131_RO;
dfArdo3['rs_ro_abs']=abs(dfArdo3['rs_ro'])
#station data
dfo3=pd.read_csv('o3_upb.csv')
dfso2=pd.read_csv('so2_upb.csv')
dfso2.tail()
dfso2['datetime']=pd.to_datetime(dfso2['date time'])
dfo3['datetime']=pd.to_datetime(dfo3['date time'])
dfso2=dfso2[['datetime','pump_status']]
dfo3=dfo3[['datetime','pump_status']]
# bad label correction
dfso2.columns = ['datetime', 'raw_so2']
dfo3.columns = ['datetime', 'ozone_UPB']
#grouping
dfArdo3 =dfArdo3 .groupby(pd.Grouper(key='datetime',freq='1h',axis=1)).mean()
dfArdso2=dfArdso2.groupby(pd.Grouper(key='datetime',freq='1h',axis=1)).mean()
dfo3 =dfo3 .groupby(pd.Grouper(key='datetime',freq='1h',axis=1)).mean()
dfso2 =dfso2 .groupby(pd.Grouper(key='datetime',freq='1h',axis=1)).mean()
df2=pd.concat([dfo3,dfArdo3], join='inner', axis=1).reset_index()
df3=pd.concat([dfso2,dfArdso2], join='inner', axis=1).reset_index()
#Calibrated ozone
sns.jointplot(data=df2,x='ozone_UPB',y='rs_ro', kind='reg')
#Raw ozone
sns.jointplot(data=df2,x='ozone_UPB',y='o3', kind='reg')
#SO2
sns.jointplot(data=df3,x='raw_so2',y='so2', kind='reg')
dfso2.head() | C:\Users\fega0\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
| MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
I define some helper functions | def polyfitEq(x,y):
C= np.polyfit(x,y,1)
m=C[0]
b=C[1]
return 'y = x*{} + {}'.format(m,b)
def calibrate(x,y):
C= np.polyfit(x,y,1)
m=C[0]
b=C[1]
return x*m+b
def rename_labels(obj,unit):
obj.columns=obj.columns.map(lambda x: x.replace('2',' stc_cdmb'))
obj.columns=obj.columns.map(lambda x: x+' '+unit)
return obj.columns
print('')
print('1h average ozone, raw data')
print(polyfitEq(df2['ozone_UPB'],df2['o3']))
#print('')
#print('2h average')
#print(polyfitEq(df2['pm10'],df2['pm10_dusttrack']))
print('')
print('3h average')
print(polyfitEq(df3['raw_so2'],df3['so2'])) |
1h average ozone, raw data
y = x*-7.386462397051218 + 735.7745124254552
3h average
y = x*3.9667587988316875 + 471.89151081632417
| MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
Datasheet-calibrated series | df2['o3']=calibrate(df2['o3'],df2['ozone_UPB'])
df2.plot(figsize=[15,5])
df3['so2']=calibrate(df3['so2'],df3['raw_so2'])
df3.plot(figsize=[15,5])
df2.head()
df2.columns = ['datetime', 'Ozono estación UPB [ppb]','Ozono prototipo [ppb]','rs','rs_ro','rs_ro_abs']
sns.jointplot(data=df2,x='Ozono prototipo [ppb]',y='Ozono estación UPB [ppb]', kind='reg',stat_func=None) | C:\Users\fega0\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
| MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
Accessing TerraClimate data with the Planetary Computer STAC API
[TerraClimate](http://www.climatologylab.org/terraclimate.html) is a dataset of monthly climate and climatic water balance for global terrestrial surfaces from 1958-2019. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time-varying data. All data have monthly temporal resolution and a ~4-km (1/24th degree) spatial resolution. The data cover the period from 1958-2019.

This example will show you how temperature has increased over the past 60 years across the globe.

Environment setup | import warnings
warnings.filterwarnings("ignore", "invalid value", RuntimeWarning) | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Data access
https://planetarycomputer.microsoft.com/api/stac/v1/collections/terraclimate is a STAC Collection with links to all the metadata about this dataset. We'll load it with [PySTAC](https://pystac.readthedocs.io/en/latest/). | import pystac
url = "https://planetarycomputer.microsoft.com/api/stac/v1/collections/terraclimate"
collection = pystac.read_file(url)
collection | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
The collection contains assets, which are links to the root of a Zarr store, which can be opened with xarray. | asset = collection.assets["zarr-https"]
asset
import fsspec
import xarray as xr
store = fsspec.get_mapper(asset.href)
ds = xr.open_zarr(store, **asset.extra_fields["xarray:open_kwargs"])
ds | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
We'll process the data in parallel using [Dask](https://dask.org). | from dask_gateway import GatewayCluster
cluster = GatewayCluster()
cluster.scale(16)
client = cluster.get_client()
print(cluster.dashboard_link) | https://pcc-staging.westeurope.cloudapp.azure.com/compute/services/dask-gateway/clusters/staging.5cae9b2b4c7d4f7fa37c5a4ac1e8112d/status
| MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
The link printed out above can be opened in a new tab or the [Dask labextension](https://github.com/dask/dask-labextension). See [Scale with Dask](https://planetarycomputer.microsoft.com/docs/quickstarts/scale-with-dask/) for more on using Dask, and how to access the Dashboard.

Analyze and plot global temperature
We can quickly plot a map of one of the variables. In this case, we are downsampling (coarsening) the dataset for easier plotting. | import cartopy.crs as ccrs
import matplotlib.pyplot as plt
average_max_temp = ds.isel(time=-1)["tmax"].coarsen(lat=8, lon=8).mean().load()
fig, ax = plt.subplots(figsize=(20, 10), subplot_kw=dict(projection=ccrs.Robinson()))
average_max_temp.plot(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines(); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Let's see how temperature has changed over the observational record, when averaged across the entire domain. Since we'll do some other calculations below we'll also add `.load()` to execute the command instead of specifying it lazily. Note that there are some data quality issues before 1965 so we'll start our analysis there. | temperature = (
ds["tmax"].sel(time=slice("1965", None)).mean(dim=["lat", "lon"]).persist()
)
temperature.plot(figsize=(12, 6)); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
With all the seasonal fluctuations (from summer and winter) though, it can be hard to see any obvious trends. So let's try grouping by year and plotting that timeseries. | temperature.groupby("time.year").mean().plot(figsize=(12, 6)); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Now the increase in temperature is obvious, even when averaged across the entire domain.Now, let's see how those changes are different in different parts of the world. And let's focus just on summer months in the northern hemisphere, when it's hottest. Let's take a climatological slice at the beginning of the period and the same at the end of the period, calculate the difference, and map it to see how different parts of the world have changed differently.First we'll just grab the summer months. | %%time
import dask
summer_months = [6, 7, 8]
summer = ds.tmax.where(ds.time.dt.month.isin(summer_months), drop=True)
early_period = slice("1958-01-01", "1988-12-31")
late_period = slice("1988-01-01", "2018-12-31")
early, late = dask.compute(
summer.sel(time=early_period).mean(dim="time"),
summer.sel(time=late_period).mean(dim="time"),
)
increase = (late - early).coarsen(lat=8, lon=8).mean()
fig, ax = plt.subplots(figsize=(20, 10), subplot_kw=dict(projection=ccrs.Robinson()))
increase.plot(ax=ax, transform=ccrs.PlateCarree(), robust=True)
ax.coastlines(); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**ディープラーンニングを利用したテキスト分類**_ Contents1. [事前準備](1.-事前準備)1. [自動機械学習 Automated Machine Learning](2.-自動機械学習-Automated-Machine-Learning)1. [結果の確認](3.-結果の確認) 1. 事前準備本デモンストレーションでは、AutoML の深層学習の機能を用いてテキストデータの分類モデルを構築します。 AutoML には Deep Neural Network が含まれており、テキストデータから **Embedding** を作成することができます。GPU サーバを利用することで **BERT** が利用されます。深層学習の機能を利用するためには Azure Machine Learning の Enterprise Edition が必要になります。詳細は[こちら](https://docs.microsoft.com/en-us/azure/machine-learning/concept-editionsautomated-training-capabilities-automl)をご確認ください。 1.1 Python SDK のインポート Azure Machine Learning の Python SDK などをインポートします。 | import logging
import os
import shutil
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.run import Run
from azureml.widgets import RunDetails
from azureml.core.model import Model
from azureml.train.automl import AutoMLConfig
from sklearn.datasets import fetch_20newsgroups
from azureml.automl.core.featurization import FeaturizationConfig | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
Confirm that the Azure ML Python SDK version is 1.8.0 or later. | print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
1.2 Connecting to the Azure ML Workspace | ws = Workspace.from_config()
# specify the experiment name
experiment_name = 'livedoor-news-classification-BERT'
experiment = Experiment(ws, experiment_name)
output = {}
#output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
1.3 Preparing the compute environment
Prepare a GPU `Compute Cluster` for using BERT. | from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# name of the Compute Cluster
amlcompute_cluster_name = "gpucluster"
# check whether the cluster already exists
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
except ComputeTargetException:
    print('No cluster with the specified name was found, so a new one will be created.')
compute_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_NC6_V3",
max_nodes = 4)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True) | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
1.4 Preparing the training data
This time we use the [livedoor News corpus](https://www.rondhuit.com/download/ldcc-20140209.tar.gz) as the training data and build a news category classification model. | target_column_name = 'label' # category column
feature_column_name = 'text' # news article column
train_dataset = Dataset.get_by_name(ws, "livedoor").keep_columns(["text","label"])
train_dataset.take(5).to_pandas_dataframe() | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
2. Automated Machine Learning
2.1 Settings and constraints
Configure and train with Automated Machine Learning. | from azureml.automl.core.featurization import FeaturizationConfig
featurization_config = FeaturizationConfig()
# specify the language of the text data; for Japanese, use "jpn"
featurization_config = FeaturizationConfig(dataset_language="jpn") # for English, comment this line out
# explicitly mark the `text` column as text data
featurization_config.add_column_purpose('text', 'Text')
#featurization_config.blocked_transformers = ['TfIdf','CountVectorizer'] # uncomment to use BERT only
# Automated ML settings
automl_settings = {
"experiment_timeout_hours" : 2, # 学習時間 (hour)
"primary_metric": 'accuracy', # 評価指標
"max_concurrent_iterations": 4, # 計算環境の最大並列数
"max_cores_per_iteration": -1,
"enable_dnn": True, # 深層学習を有効
"enable_early_stopping": False,
"validation_size": 0.2,
"verbosity": logging.INFO,
"force_text_dnn": True,
#"n_cross_validations": 5,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target=compute_target,
training_data=train_dataset,
label_column_name=target_column_name,
featurization=featurization_config,
**automl_settings
) | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
2.2 Model Training Start model training with Automated Machine Learning. | automl_run = experiment.submit(automl_config, show_output=False)
# print the run_id
automl_run.id
# print the Azure Machine Learning studio URL
automl_run
# # How to reconnect if the session is interrupted mid-run
# from azureml.train.automl.run import AutoMLRun
# ws = Workspace.from_config()
# experiment = ws.experiments['livedoor-news-classification-BERT']
# run_id = "AutoML_e69a63ae-ef52-4783-9a9f-527d69d7cc9d"
# automl_run = AutoMLRun(experiment, run_id = run_id)
# automl_run
| _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
2.3 Registering the Model | # retrieve the model with the best accuracy
best_run, fitted_model = automl_run.get_output()
# download the model file (.pkl)
model_dir = '../model'
best_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')
# register the model with Azure ML
model_name = 'livedoor-model'
model = Model.register(model_path = model_dir + '/model.pkl',
model_name = model_name,
tags=None,
workspace=ws) | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
3. Generating Predictions on the Test Data | from sklearn.externals import joblib
trained_model = joblib.load(model_dir + '/model.pkl')
trained_model
test_dataset = Dataset.get_by_name(ws, "livedoor").keep_columns(["text"])
predicted = trained_model.predict_proba(test_dataset.to_pandas_dataframe()) | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
4. Model Interpretation Select the champion model with the best accuracy and interpret it. The libraries the model depends on must be installed in your Python environment beforehand; use [automl_env.yml](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/automl_env.yml) to install the required packages into a conda virtual environment. | # check the engineered feature names after feature engineering
fitted_model.named_steps['datatransformer'].get_json_strs_for_engineered_feature_names()
#fitted_model.named_steps['datatransformer']. get_engineered_feature_names ()
# visualize the feature engineering process
text_transformations_used = []
for column_group in fitted_model.named_steps['datatransformer'].get_featurization_summary():
text_transformations_used.extend(column_group['Transformations'])
text_transformations_used | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
process_autogluon_results - cleans up the dataframes a bit for the report setup | #@markdown add auto-Colab formatting with `IPython.display`
from IPython.display import HTML, display
# colab formatting
def set_css():
display(
HTML(
"""
<style>
pre {
white-space: pre-wrap;
}
</style>
"""
)
)
get_ipython().events.register("pre_run_cell", set_css)
!nvidia-smi
!pip install -U plotly orca kaleido -q
import plotly.express as px
import numpy as np
import pandas as pd
from pathlib import Path
import os
#@title mount drive
from google.colab import drive
drive_base_str = '/content/drive'
drive.mount(drive_base_str)
#@markdown determine root
import os
from pathlib import Path
peter_base = Path('/content/drive/MyDrive/ETHZ-2022-S/ML-healthcare-projects/project1/gluon-autoML/')
if peter_base.exists() and peter_base.is_dir():
path = str(peter_base.resolve())
else:
# original
path = '/content/drive/MyDrive/ETH/'
print(f"base drive dir is:\n{path}") | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
define folder for outputs | _out_dir_name = "Formatted-results-report" #@param {type:"string"}
output_path = os.path.join(path, _out_dir_name)
os.makedirs(output_path, exist_ok=True)
print(f"notebook outputs will be stored in:\n{output_path}")
_out = Path(output_path)
_src = Path(path) | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
load data - MIT | data_dir = _src / "final-results"
csv_files = {f.stem:f for f in data_dir.iterdir() if f.is_file() and f.suffix=='.csv'}
print(csv_files)
mit_ag = pd.read_csv(csv_files['mitbih_autogluon_results'])
mit_ag.info()
mit_ag.sort_values(by='score_val', ascending=False, inplace=True)
mit_ag.head()
orig_cols = list(mit_ag.columns)
new_cols = []
for i, col in enumerate(orig_cols):
col = col.lower()
if 'unnamed' in col:
new_cols.append(f"delete_me_{i}")
continue
col = col.replace('score', 'accuracy')
new_cols.append(col)
mit_ag.columns = new_cols
mit_ag.columns
try:
del mit_ag['delete_me_0']
except Exception as e:
print(f'skipping delete - {e}')
mit_ag.reset_index(drop=True, inplace=True)
mit_ag.head()
| _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
save mit-gluon-reformat | mit_ag.to_csv(_out / "MITBIH_autogluon_baseline_results_Accuracy.csv", index=False) | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
PTB reformat | ptb_ag = pd.read_csv(csv_files['ptbdb_autogluon_results']).convert_dtypes()
ptb_ag.info()
ptb_ag.sort_values(by='score_val', ascending=False, inplace=True)
ptb_ag.head()
orig_cols = list(ptb_ag.columns)
new_cols = []
for i, col in enumerate(orig_cols):
col = col.lower()
if 'unnamed' in col:
new_cols.append(f"delete_me_{i}")
continue
col = col.replace('score', 'roc_auc')
new_cols.append(col)
ptb_ag.columns = new_cols
print(f'the columns for the ptb results are now:\n{ptb_ag.columns}')
try:
del ptb_ag['delete_me_0']
except Exception as e:
print(f'skipping delete - {e}')
ptb_ag.reset_index(drop=True, inplace=True)
ptb_ag.head()
ptb_ag.to_csv(_out / "PTBDB_autogluon_baseline_results_ROCAUC.csv", index=False)
print(f'results are in {_out.resolve()}')
| _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
Repeatable splitting In this notebook, we will explore the impact of different ways of creating machine learning datasets. Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, experimentation becomes difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not. | import google.datalab.bigquery as bq
Create a simple machine learning model The dataset that we will use is a BigQuery public dataset of airline arrival data. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is 70 million, and then switch to the Preview tab to look at a few rows. We want to predict the arrival delay of an airline based on the departure delay. The model that we will use is a zero-bias linear model: $$ delay_{arrival} = \alpha * delay_{departure} $$ Training the model means estimating a good value for $\alpha$. One approach to estimate $\alpha$ is to use this formula: $$ \alpha = \frac{\sum delay_{departure} delay_{arrival} }{ \sum delay_{departure}^2 } $$ Because we'd like to capture the idea that this relationship is different for flights from New York to Los Angeles vs. flights from Austin to Indianapolis (shorter flight, less busy airports), we'd compute a different $\alpha$ for each airport pair. For simplicity, we'll build this model only for flights between Denver and Los Angeles. Naive random split (not repeatable) | compute_alpha = """
#standardSQL
SELECT
SAFE_DIVIDE(SUM(arrival_delay * departure_delay), SUM(departure_delay * departure_delay)) AS alpha
FROM
(
SELECT RAND() AS splitfield,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN' AND arrival_airport = 'LAX'
)
WHERE
splitfield < 0.8
"""
results = bq.Query(compute_alpha).execute().result().to_dataframe()
alpha = results['alpha'][0]
print alpha | 0.975701430281
| Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
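The SQL above implements the closed-form estimate of $\alpha$ directly in BigQuery. As a minimal illustration of the same formula on made-up toy delays (a sketch, not part of the original notebook), the estimate in NumPy is simply:

# Toy illustration of alpha = sum(dep * arr) / sum(dep^2); the delay values are invented
import numpy as np
dep = np.array([10., 5., 0., 20.])  # hypothetical departure delays
arr = np.array([12., 4., 1., 19.])  # hypothetical arrival delays
alpha_toy = np.sum(dep * arr) / np.sum(dep ** 2)
print(alpha_toy)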
What is wrong with calculating RMSE on the training and test data as follows? | compute_rmse = """
#standardSQL
SELECT
dataset,
SQRT(AVG((arrival_delay - ALPHA * departure_delay)*(arrival_delay - ALPHA * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM (
SELECT
IF (RAND() < 0.8, 'train', 'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' )
GROUP BY
dataset
"""
bq.Query(compute_rmse.replace('ALPHA', str(alpha))).execute().result() | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
Hint: * Are you really getting the same training data in the compute_rmse query as in the compute_alpha query? * Do you get the same answers each time you rerun the compute_alpha and compute_rmse blocks? How do we correctly train and evaluate? Here's the right way to compute the RMSE using the actual training and held-out (evaluation) data. Note how much harder this feels. Although the calculations are now correct, the experiment is still not repeatable. Try running it several times; do you get the same answer? | train_and_eval_rand = """
#standardSQL
WITH
alldata AS (
SELECT
IF (RAND() < 0.8,
'train',
'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' ),
training AS (
SELECT
SAFE_DIVIDE( SUM(arrival_delay * departure_delay) , SUM(departure_delay * departure_delay)) AS alpha
FROM
alldata
WHERE
dataset = 'train' )
SELECT
MAX(alpha) AS alpha,
dataset,
SQRT(AVG((arrival_delay - alpha * departure_delay)*(arrival_delay - alpha * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
alldata,
training
GROUP BY
dataset
"""
bq.Query(train_and_eval_rand).execute().result() | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
Using HASH of date to split the data Let's split by date and train. | compute_alpha = """
#standardSQL
SELECT
SAFE_DIVIDE(SUM(arrival_delay * departure_delay), SUM(departure_delay * departure_delay)) AS alpha
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN' AND arrival_airport = 'LAX'
AND MOD(ABS(FARM_FINGERPRINT(date)), 10) < 8
"""
results = bq.Query(compute_alpha).execute().result().to_dataframe()
alpha = results['alpha'][0]
print alpha | 0.975803914362
| Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
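The key point is that `FARM_FINGERPRINT(date)` is a deterministic hash, so a given date always falls into the same bucket and the split is repeatable. A rough Python analogue (a sketch using a hashlib digest rather than FarmHash, so the bucket assignments will differ from BigQuery's) looks like this:

# Deterministic date-based split: the same date string always maps to the same bucket
import hashlib

def in_training_set(date_str, train_buckets=8, num_buckets=10):
    # Hash the date string to a stable integer, then take a modulo to pick a bucket
    h = int(hashlib.md5(date_str.encode('utf-8')).hexdigest(), 16)
    return h % num_buckets < train_buckets

print(in_training_set('2008-05-13'))  # same answer on every run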
We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha. | compute_rmse = """
#standardSQL
SELECT
IF(MOD(ABS(FARM_FINGERPRINT(date)), 10) < 8, 'train', 'eval') AS dataset,
SQRT(AVG((arrival_delay - ALPHA * departure_delay)*(arrival_delay - ALPHA * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX'
GROUP BY
dataset
"""
print bq.Query(compute_rmse.replace('ALPHA', str(alpha))).execute().result().to_dataframe().head() | dataset rmse num_flights
0 eval 12.764685 15671
1 train 13.160712 64018
| Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
Reading Survey Data (Sanna Tyrvainen, 2021) Code to read the soft CIFAR-10 survey results. survey_answers = a pickle file with a list of arrays of survey results and the original CIFAR-10 labels; data_batch_1 = a pickle file of the CIFAR-10 1/5 training dataset with a dictionary of * b'batch_label' = 'training batch 1 of 5' * b'labels' = CIFAR-10 labels * b'data' = CIFAR-10 images * b'filenames' = CIFAR-10 image names |
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pickle
import torch
def unpickle(file):
with open(file, 'rb') as fo:
dict = pickle.load(fo, encoding='bytes')
return dict
def imagshow(img):
plt.imshow(np.transpose(img, (1, 2, 0)))
plt.show()
labels = unpickle('survey_answers');
imgdict = unpickle('data_batch_1');
imgdata = imgdict[b'data'];
labeldata = imgdict[b'labels'];
class_names = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
images = imgdata.reshape(len(imgdata),3, 32,32)
print('Example:')
ii = 7
print('survey answer: ', labels[ii])
imagshow(images[ii])
print(labeldata[ii], class_names[labels[ii][1]])
| Example:
survey answer: ([0, 0, 0, 0, 2, 0, 0, 4, 0, 0], 7)
| MIT | data/read_data.ipynb | sannatti/softcifar |
Import the necessary packages | from __future__ import print_function, division, absolute_import
import tensorflow as tf
from tensorflow.contrib import keras
import numpy as np
import os
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
import itertools
import cPickle #python 2.x
#import _pickle as cPickle #python 3.x
import h5py
from matplotlib import pyplot as plt
%matplotlib inline | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Now read the data | with h5py.File("NS_LP_DS.h5", "r") as hf:
LFP_features_train = hf["LFP_features_train"][...]
targets_train = hf["targets_train"][...]
speeds_train = hf["speeds_train"][...]
LFP_features_eval = hf["LFP_features_eval"][...]
targets_eval = hf["targets_eval"][...]
speeds_eval = hf["speeds_eval"][...] | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
And make sure it looks ok | rand_sample = np.random.randint(LFP_features_eval.shape[0])
for i in range(LFP_features_train.shape[-1]):
plt.figure(figsize=(20,7))
plt_data = LFP_features_eval[rand_sample,:,i]
plt.plot(np.arange(-0.5, 0., 0.5/plt_data.shape[0]), plt_data)
    plt.xlabel("time")
plt.title(str(i)) | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Now we write some helper functions to easily select regions. | block = np.array([[2,4,6,8],[1,3,5,7]])
channels = np.concatenate([(block + i*8) for i in range(180)][::-1])
brain_regions = {'Parietal Cortex': 8000, 'Hypocampus CA1': 6230, 'Hypocampus DG': 5760, 'Thalamus LPMR': 4450,
'Thalamus Posterior': 3500, 'Thalamus VPM': 1930, 'SubThalamic': 1050}
brain_regions = {k:v//22.5 for k,v in brain_regions.iteritems()}
used_channels = np.arange(9,1440,20, dtype=np.int16)[:-6]
for i in (729,749,1209,1229):
used_channels = np.delete(used_channels, np.where(used_channels==i)[0])
# for k,v in brain_regions.iteritems():
# print("{0}: {1}".format(k,v))
channels_dict = {'Parietal Cortex': np.arange(1096,1440, dtype=np.int16),
'Hypocampus CA1': np.arange(1016,1096, dtype=np.int16),
'Hypocampus DG': np.arange(784,1016, dtype=np.int16),
'Thalamus LPMR': np.arange(616,784, dtype=np.int16),
'Thalamus Posterior': np.arange(340,616, dtype=np.int16),
'Thalamus VPM': np.arange(184,340, dtype=np.int16),
'SubThalamic': np.arange(184, dtype=np.int16)}
used_channels_dict = {k:list() for k in channels_dict.iterkeys()}
# print("hello")
for ch in used_channels:
for key in channels_dict.iterkeys():
if ch in channels_dict[key]:
used_channels_dict[key].append(ch)
LFP_features_train_current = LFP_features_train
LFP_features_eval_current = LFP_features_eval
# current_channels = np.sort(used_channels_dict['Hypocampus CA1']+used_channels_dict['Hypocampus DG']+\
# used_channels_dict['Thalamus Posterior'])
# current_idxs = np.array([np.where(ch==used_channels)[0] for ch in current_channels]).squeeze()
# LFP_features_train_current = LFP_features_train[...,current_idxs]
# LFP_features_eval_current = LFP_features_eval[...,current_idxs] | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Create a callback to save the model with the best validation accuracy | model_chk_path = 'my_model.hdf5'
mcp = keras.callbacks.ModelCheckpoint(model_chk_path, monitor="val_acc",
save_best_only=True) | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Below I have defined a couple of different network architectures to play with. | # try:
# model = None
# except NameError:
# pass
# decay = 1e-3
# conv1d = keras.layers.Convolution1D
# maxPool = keras.layers.MaxPool1D
# model = keras.models.Sequential()
# model.add(conv1d(64, 5, padding='same', strides=2, activation='relu',
# kernel_regularizer=keras.regularizers.l2(decay),
# input_shape=LFP_features_train.shape[1:]))
# model.add(maxPool())
# model.add(conv1d(128, 3, padding='same', strides=2, activation='relu',
# kernel_regularizer=keras.regularizers.l2(decay)))
# model.add(maxPool())
# model.add(conv1d(128, 3, padding='same', strides=2, activation='relu',
# kernel_regularizer=keras.regularizers.l2(decay)))
# model.add(maxPool())
# model.add(conv1d(128, 3, padding='same', strides=2, activation='relu',
# kernel_regularizer=keras.regularizers.l2(decay)))
# model.add(maxPool())
# model.add(keras.layers.Flatten())
# model.add(keras.layers.Dropout(rate=0.5))
# model.add(keras.layers.Dense(2, activation='softmax', kernel_regularizer=keras.regularizers.l2(decay)))
# try:
# model = None
# except NameError:
# pass
# decay = 1e-3
# conv1d = keras.layers.Convolution1D
# maxPool = keras.layers.MaxPool1D
# BN = keras.layers.BatchNormalization
# Act = keras.layers.Activation('relu')
# model = keras.models.Sequential()
# model.add(conv1d(64, 5, padding='same', strides=2,
# kernel_regularizer=keras.regularizers.l1_l2(decay),
# input_shape=LFP_features_train_current.shape[1:]))
# model.add(BN())
# model.add(Act)
# model.add(maxPool())
# model.add(conv1d(128, 3, padding='same', strides=2,
# kernel_regularizer=keras.regularizers.l1_l2(decay)))
# model.add(BN())
# model.add(Act)
# model.add(maxPool())
# model.add(conv1d(128, 3, padding='same', strides=2,
# kernel_regularizer=keras.regularizers.l1_l2(decay)))
# model.add(BN())
# model.add(Act)
# model.add(maxPool())
# model.add(conv1d(128, 3, padding='same', strides=2,
# kernel_regularizer=keras.regularizers.l1_l2(decay)))
# model.add(BN())
# model.add(Act)
# model.add(maxPool())
# model.add(keras.layers.Flatten())
# model.add(keras.layers.Dropout(rate=0.5))
# model.add(keras.layers.Dense(2, activation='softmax', kernel_regularizer=keras.regularizers.l2(decay)))
# try:
# model = None
# except NameError:
# pass
# decay = 1e-3
# conv1d = keras.layers.Convolution1D
# maxPool = keras.layers.MaxPool1D
# model = keras.models.Sequential()
# model.add(conv1d(33, 5, padding='same', activation='relu', kernel_regularizer=keras.regularizers.l2(decay),
# input_shape=LFP_features_train.shape[1:]))
# model.add(maxPool())
# model.add(conv1d(33, 3, padding='same', activation='relu', kernel_regularizer=keras.regularizers.l2(decay)))
# model.add(maxPool())
# model.add(conv1d(16, 3, padding='same', activation='relu', kernel_regularizer=keras.regularizers.l2(decay)))
# model.add(maxPool())
# model.add(conv1d(4, 3, padding='same', activation='relu', kernel_regularizer=keras.regularizers.l2(decay)))
# model.add(maxPool())
# model.add(keras.layers.Flatten())
# model.add(keras.layers.Dropout(rate=0.5))
# model.add(keras.layers.Dense(2, activation='softmax', kernel_regularizer=keras.regularizers.l2(decay)))
try:
model = None
except NameError:
pass
decay = 1e-3
regul = keras.regularizers.l1(decay)
conv1d = keras.layers.Convolution1D
maxPool = keras.layers.MaxPool1D
BN = keras.layers.BatchNormalization
Act = keras.layers.Activation('relu')
model = keras.models.Sequential()
model.add(keras.layers.Convolution1D(64, 5, padding='same', strides=2,
kernel_regularizer=keras.regularizers.l1_l2(decay),
input_shape=LFP_features_train_current.shape[1:]))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPool1D())
model.add(keras.layers.Convolution1D(128, 3, padding='same', strides=2,
kernel_regularizer=keras.regularizers.l1_l2(decay)))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation('relu'))
# model.add(keras.layers.MaxPool1D())
# model.add(keras.layers.Convolution1D(128, 3, padding='same', strides=2,
# kernel_regularizer=keras.regularizers.l1_l2(decay)))
# model.add(keras.layers.BatchNormalization())
# model.add(keras.layers.Activation('relu'))
# # model.add(keras.layers.GlobalMaxPooling1D())
# model.add(keras.layers.MaxPool1D())
# model.add(keras.layers.Convolution1D(128, 3, padding='same', strides=2,
# kernel_regularizer=keras.regularizers.l1_l2(decay)))
# model.add(keras.layers.BatchNormalization())
# model.add(keras.layers.Activation('relu'))
# model.add(maxPool())
# model.add(keras.layers.Flatten())
model.add(keras.layers.GlobalMaxPooling1D())
model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(2, activation='softmax', kernel_regularizer=keras.regularizers.l1_l2(decay)))
model.compile(optimizer='Adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
history = model.fit(LFP_features_train_current,
targets_train,
epochs=20,
batch_size=1024,
validation_data=(LFP_features_eval_current, targets_eval),
callbacks=[mcp]) | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Helper function for the confusion matrix | def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / np.maximum(cm.sum(axis=1)[:, np.newaxis],1.0)
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
cm = (cm*1000).astype(np.int16)
cm = np.multiply(cm, 0.1)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, "{0}%".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
return plt.gcf()
class_names = ['go', 'stop']
model.load_weights('my_model.hdf5')
y_pred_initial = model.predict(LFP_features_eval)
targets_eval_1d = np.argmax(targets_eval, axis=1)
y_pred = np.argmax(y_pred_initial, axis=1)
cnf_matrix = confusion_matrix(targets_eval_1d, y_pred)
np.set_printoptions(precision=2)
plt.figure()
fig = plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
wrong_idxs = np.where(y_pred != targets_eval_1d)[0]
wrong_vals = speeds_eval[wrong_idxs]
# wrong_vals.squeeze().shape
# crazy_wrong_idxs.shape
plt.cla()
plt.close()
plt.figure(figsize=(20,7))
n, bins, patches = plt.hist(wrong_vals.squeeze(),
bins=np.arange(0,1,0.01),)
plt.plot(bins)
plt.xlim([0,1])
fig_dist = plt.gcf() | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Training and validation accuracies | acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.figure(figsize=(20,7))
plt.plot(epochs, acc, 'bo', label='Training')
plt.plot(epochs, val_acc, 'b', label='Validation')
plt.title('Training and validation accuracy')
plt.legend(loc='lower right', fontsize=24)
plt.xticks(np.arange(20)) | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
IDS Instruction: Regression (Lisa Mannel) Simple linear regression First we import the packages necessary for this instruction: | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Consider the data set "df1" with feature variables "x" and "y" given below. | df1 = pd.DataFrame({'x': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'y': [1, 3, 2, 5, 7, 8, 8, 9, 10, 12]})
print(df1) | x y
0 0 1
1 1 3
2 2 2
3 3 5
4 4 7
5 5 8
6 6 8
7 7 9
8 8 10
9 9 12
| MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
To get a first impression of the given data, let's have a look at its scatter plot: | plt.scatter(df1.x, df1.y, color = "y", marker = "o", s = 40)
plt.xlabel('x')
plt.ylabel('y')
plt.title('first overview of the data')
plt.show() | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
We can already see a linear correlation between x and y. Assume the feature x to be descriptive, while y is our target feature. We want a linear function, y=ax+b, that predicts y as accurately as possible based on x. To achieve this goal we use linear regression from the sklearn package. | #define the set of descriptive features (in this case only 'x' is in that set) and the target feature (in this case 'y')
descriptiveFeatures1=df1[['x']]
print(descriptiveFeatures1)
targetFeature1=df1['y']
#define the classifier
classifier = LinearRegression()
#train the classifier
model1 = classifier.fit(descriptiveFeatures1, targetFeature1) | x
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
| MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Now we can use the classifier to predict y. We print the predictions as well as the coefficient and bias (*intercept*) of the linear function. | #use the classifier to make prediction
targetFeature1_predict = classifier.predict(descriptiveFeatures1)
print(targetFeature1_predict)
#print coefficient and intercept
print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_) | [ 1.23636364 2.40606061 3.57575758 4.74545455 5.91515152 7.08484848
8.25454545 9.42424242 10.59393939 11.76363636]
Coefficients:
[1.16969697]
Intercept:
1.2363636363636399
| MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
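The notebook imports `mean_squared_error` and `mean_absolute_error` at the top but has not used them yet; as a small sketch (an addition for illustration), they can quantify how well the fitted line matches the training data:

#evaluate the fit with the error metrics imported above
mse = mean_squared_error(targetFeature1, targetFeature1_predict)
mae = mean_absolute_error(targetFeature1, targetFeature1_predict)
print('Mean squared error: ', mse)
print('Mean absolute error: ', mae)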
Let's visualize our regression function together with the scatter plot of the original data set. For this, we use the predicted values. | #visualize data points
plt.scatter(df1.x, df1.y, color = "y", marker = "o", s = 40)
#visualize regression function
plt.plot(descriptiveFeatures1, targetFeature1_predict, color = "g")
plt.xlabel('x')
plt.ylabel('y')
plt.title('the data and the regression function')
plt.show() | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Now it is your turn. Build a simple linear regression for the data below. Use col1 as descriptive feature and col2 as target feature. Also plot your results. | df2 = pd.DataFrame({'col1': [770, 677, 428, 410, 371, 504, 1136, 695, 551, 550], 'col2': [54, 47, 28, 38, 29, 38, 80, 52, 45, 40]})
#Your turn
# features that we use for the prediction are called the "descriptive" features
descriptiveFeatures2=df2[['col1']]
# the feature we would like to predict is called target fueature
targetFeature2=df2['col2']
# traing regression model:
classifier2 = LinearRegression()
model2 = classifier2.fit(descriptiveFeatures2, targetFeature2)
#use the classifier to make prediction
targetFeature2_predict = classifier2.predict(descriptiveFeatures2)
#visualize data points
plt.scatter(df2.col1, df2.col2, color = "y", marker = "o")
#visualize regression function
plt.plot(descriptiveFeatures2, targetFeature2_predict, color = "g")
plt.xlabel('col1')
plt.ylabel('col2')
plt.title('the data and the regression function')
plt.show() | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |