The values in the second and third columns, "Have you seen any of the 6 films in the Star Wars franchise?" and "Do you consider yourself to be a fan of the Star Wars film franchise?" respectively, are Yes, No, or NaN. We want to map them to True or False.
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'] = star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].map({'Yes': True, 'No': False})
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'] = star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].map({'Yes': True, 'No': False})
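One caveat worth noting: Series.map sends any value missing from the dict (including NaN) to NaN, so respondents who skipped the question stay missing rather than being coerced to False. A small sketch with toy values:

```python
import pandas as pd

# Toy Series standing in for one survey column (hypothetical values)
s = pd.Series(['Yes', 'No', None])

# Values absent from the mapping dict, NaN included, become NaN,
# so skipped answers are not silently turned into False
mapped = s.map({'Yes': True, 'No': False})
```

If you want missing answers to count as False, chain `.fillna(False)` after the map.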
Star Wars survey/Star Wars survey.ipynb
frankbearzou/Data-analysis
mit
Cleaning the columns from index 3 to 8. The fourth through ninth columns are checkbox questions: if a value is a movie name, the respondent has seen that movie; if it is NaN, they have not. We convert these columns to bool.
for col in star_wars.columns[3:9]:
    star_wars[col] = star_wars[col].apply(lambda x: False if pd.isnull(x) else True)
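As a side note, the same conversion can be written without apply: Series.notnull() is the vectorized equivalent of the lambda above. A small sketch with toy values:

```python
import pandas as pd

# Toy column: a movie name means "seen", NaN means "not seen"
df = pd.DataFrame({'seen': ['Star Wars: Episode I', None, 'Star Wars: Episode II']})

# notnull() is the vectorized equivalent of
# apply(lambda x: False if pd.isnull(x) else True)
via_apply = df['seen'].apply(lambda x: False if pd.isnull(x) else True)
via_notnull = df['seen'].notnull()
assert via_apply.equals(via_notnull)
```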
Rename the columns from index 3 to 8 for better readability. seen_1 means Star Wars Episode I, and so on.
star_wars.rename(columns={'Which of the following Star Wars films have you seen? Please select all that apply.': 'seen_1',
                          'Unnamed: 4': 'seen_2',
                          'Unnamed: 5': 'seen_3',
                          'Unnamed: 6': 'seen_4',
                          'Unnamed: 7': 'seen_5',
                          'Unnamed: 8': 'seen_6'}, inplace=True)
Cleaning the columns from index 9 to 15. Changing data type to float.
star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)
Renaming the ranking columns.
star_wars.rename(columns={'Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.': 'ranking_1',
                          'Unnamed: 10': 'ranking_2',
                          'Unnamed: 11': 'ranking_3',
                          'Unnamed: 12': 'ranking_4',
                          'Unnamed: 13': 'ranking_5',
                          'Unnamed: 14': 'ranking_6'}, inplace=True)
Cleaning the columns from index 15 to 28: renaming them after the characters they ask about.
star_wars.rename(columns={'Please state whether you view the following characters favorably, unfavorably, or are unfamiliar with him/her.': 'Luck Skywalker',
                          'Unnamed: 16': 'Han Solo',
                          'Unnamed: 17': 'Princess Leia Oragana',
                          'Unnamed: 1...
Data Analysis

Finding The Most Seen Movie
seen_sum = star_wars[['seen_1', 'seen_2', 'seen_3', 'seen_4', 'seen_5', 'seen_6']].sum()
seen_sum
seen_sum.idxmax()
From the data above, we find that the most seen movie is Episode V.
ax = seen_sum.plot(kind='bar')
for p in ax.patches:
    ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.01))
plt.show()
Finding The Highest Ranked Movie.
ranking_mean = star_wars[['ranking_1', 'ranking_2', 'ranking_3', 'ranking_4', 'ranking_5', 'ranking_6']].mean()
ranking_mean
ranking_mean.idxmin()
The highest ranked movie is ranking_5, i.e. Episode V: it has the lowest mean ranking, and 1 is the favorite.
ranking_mean.plot(kind='bar')
plt.show()
Let's break down data by Gender.
males = star_wars[star_wars['Gender'] == 'Male']
females = star_wars[star_wars['Gender'] == 'Female']
The number of movies seen.
males[males.columns[3:9]].sum().plot(kind='bar', title='male seen')
plt.show()
females[females.columns[3:9]].sum().plot(kind='bar', title='female seen')
plt.show()
The ranking of movies.
males[males.columns[9:15]].mean().plot(kind='bar', title='Male Ranking')
plt.show()
females[females.columns[9:15]].mean().plot(kind='bar', title='Female Ranking')
plt.show()
From the charts above, we do not find a significant difference between genders.

Star Wars Character Favorability Ratings
star_wars['Luck Skywalker'].value_counts()
star_wars[star_wars.columns[15:29]].head()
fav = star_wars[star_wars.columns[15:29]].dropna()
fav.head()
Convert fav into a pivot-table-like summary of counts per response category.
fav_df_list = []
for col in fav.columns.tolist():
    row = fav[col].value_counts()
    d1 = pd.DataFrame(data={'favorably': row[0] + row[1],
                            'neutral': row[2],
                            'unfavorably': row[4] + row[5],
                            'Unfamiliar': row[3]},
                      index=[col],
                      ...
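One fragility worth flagging in the cell above: indexing value_counts() positionally (row[0], row[1], ...) depends on how the counts happen to be ordered. Selecting by label is safer. A sketch with hypothetical response labels and made-up counts:

```python
import pandas as pd

# Hypothetical counts for one favorability column (labels and
# numbers are made up for illustration)
counts = pd.Series({'Very favorably': 50,
                    'Somewhat favorably': 30,
                    'Neither favorably nor unfavorably (neutral)': 20,
                    'Somewhat unfavorably': 5,
                    'Very unfavorably': 3,
                    'Unfamiliar (N/A)': 10})

# Select by label rather than position, so the result does not
# depend on how value_counts happens to order the rows
favorably = counts[['Very favorably', 'Somewhat favorably']].sum()
unfavorably = counts[['Somewhat unfavorably', 'Very unfavorably']].sum()
```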
Who Shot First?
shot_first = star_wars['Which character shot first?'].value_counts()
shot_first
shot_sum = shot_first.sum()
shot_first = shot_first.apply(lambda x: x / shot_sum * 100)
shot_first
ax = shot_first.plot(kind='barh')
for p in ax.patches:
    ax.annotate(str("{0:.2f}%".format(round(p.get_width(), 2))), (p.get_width() * ...
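As a side note, value_counts(normalize=True) returns fractions directly, replacing the manual sum-and-divide step above. A sketch with made-up answers:

```python
import pandas as pd

# Hypothetical answers to "Which character shot first?"
answers = pd.Series(['Han', 'Greedo', 'Han', 'Han'])

# normalize=True yields fractions; multiply by 100 for percentages
pct = answers.value_counts(normalize=True) * 100
```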
Define the necessary environment variables and install the KubeFlow Pipeline SDK. We assume this notebook kernel has access to Python's site-packages and runs Python 3. Please fill in the environment variables below with your own settings. KFP_PACKAGE: the latest release of the Kubeflow Pipelines platform library. KUBEFLOW_P...
KFP_SERVICE = "ml-pipeline.kubeflow.svc.cluster.local:8888"
KFP_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp/0.1.14/kfp.tar.gz'
KFP_ARENA_PACKAGE = 'http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp-arena/kfp-arena-0.3.tar.gz'
KUBEFLOW_PIPELINE_LINK = ''
MOUNT = "['user-susan:/training']"
GPUs = 1
samples/contrib/arena-samples/standalonejob/standalone_pipeline.ipynb
kubeflow/kfp-tekton-backend
apache-2.0
Install the necessary Python packages. Note: please change pip3 to the package manager used by this notebook kernel.
!pip3 install $KFP_PACKAGE --upgrade
Note: Install arena's python package
!pip3 install $KFP_ARENA_PACKAGE --upgrade
2. Define pipeline tasks using the kfp library.
import arena
import kfp.dsl as dsl

@dsl.pipeline(
    name='pipeline to run jobs',
    description='shows how to run pipeline jobs.'
)
def sample_pipeline(learning_rate='0.01', dropout='0.9', model_version='1'):
    """A pipeline for end to end machine learning workflow."""
    # 1. prepare data
    prepare_data = aren...
Cleaning the Raw Data Printing the 3rd element in the test dataset shows the data contains text with newlines, punctuation, misspellings, and other items common in text documents. To build a model, we will clean up the text by removing some of these issues.
news_train_data.data[2], news_train_data.target_names[news_train_data.target[2]]

def clean_and_tokenize_text(news_data):
    """Cleans some issues with the text data

    Args:
        news_data: list of text strings
    Returns:
        For each text string, an array of tokenized words are returned in a list
    """
    ...
samples/contrib/mlworkbench/text_classification_20newsgroup/Text Classification --- 20NewsGroup (small data).ipynb
googledatalab/notebooks
apache-2.0
Get Vocabulary We will need to filter the vocabulary to remove high frequency words and low frequency words.
def get_unique_tokens_per_row(text_token_list):
    """Collect unique tokens per row.

    Args:
        text_token_list: list, where each element is a list containing tokenized text
    Returns:
        One list containing the unique tokens in every row. For example, if row one contained ['pizza', 'pizza'] whil...
Save the Cleaned Data For Training
!mkdir -p ./data

with open('./data/train.csv', 'w') as f:
    writer = csv.writer(f, lineterminator='\n')
    for target, text in zip(news_train_data.target, clean_train_data):
        writer.writerow([news_train_data.target_names[target], text])

with open('./data/eval.csv', 'w') as f:
    writer = csv.writer...
Create Model with ML Workbench The MLWorkbench Magics are a set of Datalab commands that allow an easy code-free experience to training, deploying, and predicting ML models. This notebook will take the cleaned data from the previous notebook and build a text classification model. The MLWorkbench Magics are a collection...
import google.datalab.contrib.mlworkbench.commands # This loads the '%%ml' magics
First, define the dataset we are going to use for training.
%%ml dataset create
name: newsgroup_data
format: csv
train: ./data/train.csv
eval: ./data/eval.csv
schema:
- name: news_label
  type: STRING
- name: text
  type: STRING

%%ml dataset explore
name: newsgroup_data
Step 1: Analyze. The first step in the MLWorkbench workflow is to analyze the data for the requested transformations. We are going to build a bag-of-words representation of the text and use it in a linear model. Therefore, the analyze step will compute the vocabularies and related statistics of the training data.
%%ml analyze
output: ./analysis
data: newsgroup_data
features:
  news_label:
    transform: target
  text:
    transform: bag_of_words

!ls ./analysis
Step 2: Transform This step is optional as training can start from csv data (the same data used in the analysis step). The transform step performs some transformations on the input data and saves the results to a special TensorFlow file called a TFRecord file containing TF.Example protocol buffers. This allows training...
!rm -rf ./transform

%%ml transform --shuffle
output: ./transform
analysis: ./analysis
data: newsgroup_data

# note: the errors_* files are all 0 size, which means no error.
!ls ./transform/ -l -h
Create a "transformed dataset" to use in the next step.
%%ml dataset create
name: newsgroup_transformed
train: ./transform/train-*
eval: ./transform/eval-*
format: transformed
Step 3: Training MLWorkbench automatically builds standard TensorFlow models without you having to write any TensorFlow code.
# Training should use an empty output folder. So if you run training multiple times,
# use different folders or remove the output from the previous run.
!rm -fr ./train
The following training step takes about 10~15 minutes.
%%ml train
output: ./train
analysis: ./analysis/
data: newsgroup_transformed
model_args:
  model: linear_classification
  top-n: 5
Go to TensorBoard (link shown above) to monitor the training progress. Note that training stops when it detects that accuracy is no longer increasing on the eval data.
# You can also plot the summary events which will be saved with the notebook.
from google.datalab.ml import Summary

summary = Summary('./train')
summary.list_events()
summary.plot(['loss', 'accuracy'])
The output of training is two models, one in training_output/model and another in training_output/evaluation_model. These TensorFlow models are identical except that the latter assumes the target column is part of the input and copies the target value to the output. Therefore, the latter is ideal for evaluation.
!ls ./train/
Step 4: Evaluation using batch prediction. Below, we use the evaluation model and run batch prediction locally. Batch prediction is needed for large datasets where the data cannot fit in memory. For demo purposes, we will use the training evaluation data again.
%%ml batch_predict
model: ./train/evaluation_model/
output: ./batch_predict
format: csv
data:
  csv: ./data/eval.csv

# It creates a results csv file, and a results schema json file.
!ls ./batch_predict
Note that the output of prediction is a csv file containing the score for each label class. 'predicted_n' is the label for the nth largest score. We care about 'predicted', the final model prediction.
!head -n 5 ./batch_predict/predict_results_eval.csv

%%ml evaluate confusion_matrix --plot
csv: ./batch_predict/predict_results_eval.csv

%%ml evaluate accuracy
csv: ./batch_predict/predict_results_eval.csv
Step 5: Use BigQuery to analyze evaluation results. Sometimes you want to query your prediction/evaluation results using SQL, which is easy.
# Create bucket
!gsutil mb gs://bq-mlworkbench-20news-lab
!gsutil cp -r ./batch_predict/predict_results_eval.csv gs://bq-mlworkbench-20news-lab

# Use Datalab's BigQuery API to load CSV files into a table.
import google.datalab.bigquery as bq
import json

with open('./batch_predict/predict_results_schema.json', 'r') as ...
Now, run any SQL query on the table newspredict.result1. Below we query all wrong predictions.
%%bq query
SELECT * FROM newspredict.result1 WHERE predicted != target
Prediction

Local Instant Prediction

The MLWorkbench also supports running prediction and displaying the results within the notebook. Note that we use the non-evaluation model below (./train/model), which takes input with no target column.
%%ml predict
model: ./train/model/
headers: text
data:
- nasa
- windows xp
Why Does My Model Predict This? Prediction Explanation. "%%ml explain" gives you insight into which features in the prediction data contribute positively or negatively to certain labels. We use LIME under "%%ml explain". (LIME is an open-sourced library performing feature sensitivity analysis. It is ba...
# Pick some data from eval csv file. They are cleaned text.
# The truth labels for the following 3 instances are
# - rec.autos
# - comp.windows.x
# - talk.politics.mideast
instance0 = ('little confused models [number] [number] heard le se someone tell differences far features ' +
             'performance curious book ...
The first and second instances are predicted correctly; the third is wrong. Below we run "%%ml explain" to understand more.
%%ml explain --detailview_only
model: ./train/model
labels: rec.autos
type: text
data: $instance0

%%ml explain --detailview_only
model: ./train/model
labels: comp.windows.x
type: text
data: $instance1
On instance 2, the top prediction result does not match truth. Predicted is "talk.politics.guns" while truth is "talk.politics.mideast". So let's analyze these two labels.
%%ml explain --detailview_only
model: ./train/model
labels: talk.politics.guns,talk.politics.mideast
type: text
data: $instance2
Deploying Model to ML Engine. Now that we have a trained model, have analyzed the results, and have tested the model output locally, we are ready to deploy it to the cloud for real predictions. Deploying a model requires that the files be on GCS. The next few cells make a bucket on GCS, copy the locally trained model, a...
!gsutil -q mb gs://bq-mlworkbench-20news-lab

# Copy the regular model to GCS
!gsutil -m cp -r ./train/model gs://bq-mlworkbench-20news-lab
See https://cloud.google.com/ml-engine/docs/how-tos/managing-models-jobs for the definition of ML Engine models and versions. An ML Engine version runs predictions and is contained in an ML Engine model. We will create a new ML Engine model and deploy the TensorFlow graph as an ML Engine version. This can be ...
%%ml model deploy
path: gs://bq-mlworkbench-20news-lab
name: news.alpha
How to Build Your Own Prediction Client A common task is to call a deployed model from different applications. Below is an example of writing a python client to run prediction. Covering model permissions topics is outside the scope of this notebook, but for more information see https://cloud.google.com/ml-engine/docs/...
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors

# Store your project ID, model name, and version name in the format the API needs.
api_path = 'projects/{your_project_ID}/models/{model_name}/versions/{version_name}'.format(
    your_project_ID=g...
To explore the prediction client further, check API Explorer (https://developers.google.com/apis-explorer). It allows you to send raw HTTP requests to many Google APIs. This is useful for understanding the requests and responses, and it helps you build your own client in your favorite language. Please visit https://develo...
# The output of this cell is placed in the name box
# Store your project ID, model name, and version name in the format the API needs.
api_path = 'projects/{your_project_ID}/models/{model_name}/versions/{version_name}'.format(
    your_project_ID=google.datalab.Context.default().project_id,
    model_name='news',
    v...
The fields text box can be empty. Note that because we deployed the non-evaluation model, our deployed model takes a csv input which has only one column. In general, "instances" is a list of csv strings for models trained by MLWorkbench. Click in the request body box, and note a small drop-down menu appears in the ...
print('Place the following in the request body box')
request = {'instances': ['nasa', 'windows xp']}
print(json.dumps(request))
Then click the "Authorize and execute" button. The prediction results are returned in the browser.

Cleaning up the deployed model
%%ml model delete
name: news.alpha

%%ml model delete
name: news

# Delete the GCS bucket
!gsutil -m rm -r gs://bq-mlworkbench-20news-lab

# Delete BQ table
bq.Dataset('newspredict').delete(delete_contents=True)
Files. Files serve two main purposes: recovering data from one run of a program to the next (when the program stops, all variables are lost), and exchanging information with other programs (Excel, for example). The most commonly used format is the flat file, a text...
mat = [[1.0, 0.0], [0.0, 1.0]]       # matrix as a list of lists
with open("mat.txt", "w") as f:      # create a file in write mode
    for i in range(0, len(mat)):
        for j in range(0, len(mat[i])):
            s = str(mat[i][j])
            ...
_doc/notebooks/td1a/td1a_cenonce_session4.ipynb
sdpython/ensae_teaching_cs
mit
The same program, written more compactly:
mat = [[1.0, 0.0], [0.0, 1.0]]   # matrix as a list of lists
with open("mat.txt", "w") as f:  # create a file
    s = '\n'.join('\t'.join(str(x) for x in row) for row in mat)
    f.write(s)

# check that the file exists:
print([_ for _ in os.listdir(".") if "mat" in ...
We look at the first lines of the file mat.txt:
import pyensae
%load_ext pyensae
%head mat.txt
Reading
with open("mat.txt", "r") as f:  # open a file
    mat = [row.strip(' \n').split('\t') for row in f.readlines()]
print(mat)
We recover the same information, except that we must not forget to convert the original numbers to float.
with open("mat.txt", "r") as f:  # open a file
    mat = [[float(x) for x in row.strip(' \n').split('\t')] for row in f.readlines()]
print(mat)
That's better. The os.path module offers various functions for manipulating file names. The os module offers various functions for manipulating files:
import os
for f in os.listdir('.'):
    print(f)
with. Pragmatically, the with statement lets you write code that is shorter by one instruction: close. The following two snippets are equivalent:
with open("exemple_fichier.txt", "w") as f:
    f.write("something")

f = open("exemple_fichier.txt", "w")
f.write("something")
f.close()
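The guarantee with gives is actually a bit stronger than saving one close() call: the file is closed even if an exception is raised inside the block. A minimal sketch of what with expands to, using try/finally (reusing the exemple_fichier.txt name from above):

```python
# What `with` guarantees: close() runs even if an exception
# occurs inside the block, i.e. it behaves like try/finally
f = open("exemple_fichier.txt", "w")
try:
    f.write("something")
finally:
    f.close()
assert f.closed
```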
The close instruction closes the file. While a file is open, it is locked by the Python program; no other application can write to it. After close, another application can delete or modify it. With the with keyword, the close method is called implicitly. What is it for ...
import math
print(math.cos(1))

from math import cos
print(cos(1))

from math import *  # this syntax is discouraged, because an imported function
print(cos(1))       # might have the same name as one of yours
Exercise 2: find a module (1). Go to the official modules page (or use a search engine) to find a module for generating random numbers. Create a list of random numbers drawn from a uniform distribution, then apply a random permutation to that sequence. Exercise 3: find a modu...
# file monmodule.py
import math

def fonction_cos_sequence(seq):
    return [math.cos(x) for x in seq]

if __name__ == "__main__":
    print("ce message n'apparaît que si ce programme est le point d'entrée")
The following cell saves the contents of the previous cell to a file called monmodule.py.
code = """
# -*- coding: utf-8 -*-
import math

def fonction_cos_sequence(seq):
    return [math.cos(x) for x in seq]

if __name__ == "__main__":
    print("ce message n'apparaît que si ce programme est le point d'entrée")
"""
with open("monmodule.py", "w", encoding="utf8") as f:
    f.write(code)
The second file:
import monmodule

print(monmodule.fonction_cos_sequence([1, 2, 3]))
Note: if the file monmodule.py is modified, Python does not automatically reload the module if it has already been loaded. The list of modules in memory can be seen in the variable sys.modules:
import sys
list(sorted(sys.modules))[:10]
To remove the module from memory, delete it from sys.modules with the instruction del sys.modules['monmodule']. Python will then treat monmodule.py as new and import it again. Exercise 4: your own module. What happens if you replace if __name__ == "__main__": with if True ...
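As an alternative to deleting the entry from sys.modules, the standard library's importlib.reload re-executes an already-imported module in place. A minimal sketch, using a throwaway module written to a temporary directory (demo_mod is a hypothetical name, not part of the course material):

```python
import importlib
import os
import sys
import tempfile

# Write a throwaway module (hypothetical name: demo_mod) to a temp dir
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_mod.py"), "w") as f:
    f.write("VALUE = 1\n")

sys.path.insert(0, d)
importlib.invalidate_caches()  # make the new directory visible to the importer
import demo_mod
assert demo_mod.VALUE == 1

# Change the source, then re-execute the module in place:
# no need to delete it from sys.modules by hand
with open(os.path.join(d, "demo_mod.py"), "w") as f:
    f.write("VALUE = 2  # changed\n")
importlib.reload(demo_mod)
assert demo_mod.VALUE == 2
```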
import pyensae.datasource
discours = pyensae.datasource.download_data('voeux.zip', website='xd')
The documentation for regular expressions is here: regular expressions. They let you search for patterns in text: 4 digits / 2 digits / 2 digits matches the pattern of dates; as a regular expression it is written [0-9]{4}/[0-9]{2}/[0-9]{2}; the letter a repeated between 2 and 10 times is ...
import re  # regular expressions are available via the re module

expression = re.compile("[0-9]{2}/[0-9]{2}/[0-9]{4}")
texte = """Je suis né le 28/12/1903 et je suis mort le 08/02/1957.
Ma seconde femme est morte le 10/11/63."""
cherche = expression.findall(texte)
print(cherche)
2. Build a CLM with default parameters Building a CLM using Menpo can be done using a single line of code.
from menpofit.clm import CLM

clm = CLM(
    training_images,
    verbose=True,
    group='PTS',
    diagonal=200
)
print(clm)
clm.view_clm_widget()
notebooks/DeformableModels/ConstrainedLocalModel/CLMs Basics.ipynb
menpo/menpofit-notebooks
bsd-3-clause
3. Fit the previous CLM In Menpo, CLMs can be fitted to images by creating Fitter objects around them. One of the most popular algorithms for fitting CLMs is the Regularized Landmark Mean-Shift algorithm. In order to fit our CLM using this algorithm using Menpo, the user needs to define a GradientDescentCLMFitter obje...
from menpofit.clm import GradientDescentCLMFitter

fitter = GradientDescentCLMFitter(clm, n_shape=[6, 12])
Fitting a GradientDescentCLMFitter to an image is as simple as calling its fit method. Let's try it by fitting some images from the LFPW test set!
import menpo.io as mio

# load test images
test_images = []
for i in mio.import_images(path_to_lfpw / 'testset', max_images=5, verbose=True):
    # crop image
    i = i.crop_to_landmarks_proportion(0.5)
    # convert it to grayscale if needed
    if i.n_channels == 3:
        i = i.as_greyscale(mode='luminosity')
    #...
Note that for the purpose of this simple fitting demonstration we will just fit the first 5 images of the LFPW test set.
from menpofit.fitter import noisy_shape_from_bounding_box

fitting_results = []
for i in test_images:
    gt_s = i.landmarks['PTS'].lms
    # generate perturbed landmarks
    s = noisy_shape_from_bounding_box(gt_s, gt_s.bounding_box())
    # fit image
    fr = fitter.fit_from_shape(i, s, gt_shape=gt_s)
    fitting_re...
Shift data to the correct column, using loc for assignment: df.loc[destination condition, column] = df.loc[source]
# moving cell values to the correct column
df.loc[df.type == 'map', ['mapPhoto']] = df['url']
df.loc[df.type.str.contains('lineminus'), ['miscPhoto']] = df['url']
df.loc[df.type.str.contains('lineplus'), ['miscPhoto']] = df['url']
df.loc[df.type.str.contains('misc'), ['miscPhoto']] = df['url']

# now to deal with type='photo'
photos =...
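The pattern above can be illustrated on a toy frame (the column values here are made up): df.loc[condition, column] = source assigns row-aligned values only where the condition holds, leaving the other rows untouched.

```python
import pandas as pd

# Toy frame mimicking the structure (hypothetical values)
df = pd.DataFrame({'type': ['map', 'misc', 'photo'],
                   'url': ['m.jpg', 'x.jpg', 'p.jpg'],
                   'mapPhoto': [None, None, None]})

# Boolean-mask assignment: only rows where the condition is True
# receive the row-aligned value from df['url']
df.loc[df.type == 'map', 'mapPhoto'] = df['url']
```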
Combining rows w groupby, transform, or multiIndex.ipynb
Soil-Carbon-Coalition/atlasdata
mit
Use groupby and transform to fill the row.
# since we're using string methods, NaNs won't work
mycols = ['general_observations', 'mapPhoto', 'linephoto1', 'linephoto2', 'miscPhoto', 'site_description']
for item in mycols:
    df[item] = df[item].fillna('')

df.mapPhoto = df.groupby('id')['mapPhoto'].transform(lambda x: "%s" % ''.join(x))
df.linephoto1 = df.groupby(['...
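A minimal sketch of the groupby/transform idiom on a toy frame (hypothetical values): transform broadcasts the per-group result back onto every row of the group, so each row ends up carrying the combined value.

```python
import pandas as pd

# Toy frame: two rows per id, each holding part of the data,
# with missing parts already filled with ''
df = pd.DataFrame({'id': [1, 1, 2, 2],
                   'mapPhoto': ['a.jpg', '', '', 'b.jpg']})

# Join the strings within each id group, then broadcast the joined
# value back onto every row of that group
df['mapPhoto'] = df.groupby('id')['mapPhoto'].transform(''.join)
```

After this step, duplicate rows per id can be collapsed with, e.g., drop_duplicates.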
Shift data to the correct row using a MultiIndex.
ids = list(df['id'])  # make a list of ids to iterate over, before the hierarchical index
#df.type = df.type.map({'\*plot summary':'transect','\*remonitoring notes':'transect'})
df.loc[df.type == 'map', ['mapPhoto']] = df['url']  # moving cell values to correct column
df.set_index(['id', 'type'], inplace=True)  # hierarchical i...
Introduction for the two-level system. The quantum two-level system (TLS) is the simplest possible model for quantum light-matter interaction. In the version we simulate here, the system is driven by a continuous-mode coherent state, whose dipolar interaction with the system is represented by the following Hamiltonian $...
# define system operators
gamma = 1                           # decay rate
sm_TLS = destroy(2)                 # dipole operator
c_op_TLS = [np.sqrt(gamma)*sm_TLS]  # represents spontaneous emission

# choose range of driving strengths to simulate
Om_list_TLS = gamma*np.logspace(-2, 1, 300)

# calculate steady-state de...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
The emission can be decomposed into a so-called coherent and incoherent portion. The coherent portion is simply due to the classical mean of the dipole moment, i.e. $$I_\mathrm{c}=\lim_{t\rightarrow\infty}\Gamma\langle\sigma^\dagger(t)\rangle\langle\sigma(t)\rangle,$$ while the incoherent portion is due to the standard...
# decompose the emitted light into the coherent and incoherent portions
I_c_TLS = expect(sm_TLS.dag(), rho_ss_TLS)*expect(sm_TLS, rho_ss_TLS)
I_inc_TLS = expect(sm_TLS.dag()*sm_TLS, rho_ss_TLS) - I_c_TLS
Visualize the incoherent and coherent emissions
plt.semilogx(Om_list_TLS, abs(I_c_TLS), label='TLS $I_\mathrm{c}$')
plt.semilogx(Om_list_TLS, abs(I_inc_TLS), 'r', label='TLS $I_\mathrm{inc}$')
plt.xlabel('Driving strength [$\Gamma$]')
plt.ylabel('Normalized flux [$\Gamma$]')
plt.legend(loc=2);
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Introduction for the Jaynes-Cummings system The quantum Jaynes-Cummings (JC) system represents one of the most fundamental models for quantum light-matter interaction, which models the interaction between a quantum two-level system (e.g. an atomic transition) and a single photonic mode. Here, the strong interaction bet...
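The vacuum Rabi splitting behind this anharmonicity can be checked with a bare numpy construction of the resonant JC interaction (a small sanity check with the notebook's coupling `g`; not the notebook's qutip code):

```python
import numpy as np

N = 5                        # small Fock truncation, enough for the check
g = 0.6                      # atom-cavity coupling, resonant case (delta = 0)

def destroy(n):
    """Annihilation operator as an n x n matrix."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

sm = np.kron(destroy(2), np.eye(N))    # atomic lowering operator
a = np.kron(np.eye(2), destroy(N))     # cavity annihilation operator

# resonant JC interaction: H = g (sm^dag a + sm a^dag)
H = g * (sm.conj().T @ a + sm @ a.conj().T)
evals = np.sort(np.linalg.eigvalsh(H))
# the n-excitation manifold splits into polaritons at +/- g*sqrt(n),
# so the single-excitation polaritons sit at +/- g (splitting 2g)
```

The square-root scaling of the splitting with excitation number is precisely the anharmonicity the notebook exploits to isolate an effective two-level transition.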
# truncate size of cavity's Fock space N = 15 # setup system operators sm = tensor(destroy(2), qeye(N)) a = tensor(qeye(2), destroy(N)) # define system parameters, barely into strong coupling regime kappa = 1 g = 0.6 * kappa detuning = 3 * g # cavity-atom detuning delta_s = detuning/2 + np.sqrt(detuning ** 2 / 4 + g...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Effective polaritonic two-level system In the ideal scenario, the most anharmonic polariton and the ground state form an ideal two-level system with effective emission rate of $$\Gamma_\mathrm{eff}= \frac{\kappa}{2}+2\,\textrm{Im} \left\{\sqrt{ g^2-\left( \frac{\kappa}{4}+\frac{\mathrm{i}\Delta}{2} \right)^2 }\right\}.$$
effective_gamma = kappa / 2 + 2 * np.imag( np.sqrt(g ** 2 - (kappa / 4 + 1j * detuning / 2) ** 2)) # set driving strength based on the effective polariton's # emission rate (driving strength goes as sqrt{gamma}) Om = 0.4 * np.sqrt(effective_gamma)
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Define reference system for homodyne interference For the purposes of optimally homodyning the JC output, we wish to transmit light through a bare cavity (no atom involved) and calculate its coherent amplitude. (This could of course be calculated analytically, but QuTiP handles such a calculation trivially.)
# reference cavity operator a_r = destroy(N) c_op_r = [np.sqrt(kappa)*a_r] # reference cavity Hamiltonian, no atom coupling H_c = Om * (a_r + a_r.dag()) + delta_s * a_r.dag() * a_r # solve for coherent state amplitude at driving strength Om rho_ss_c = steadystate(H_c, c_op_r) alpha = -expect(rho_ss_c, a_r) alpha_c = ...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Calculate JC emission The steady-state emitted flux from the JC system is given by $T=\kappa\langle a^\dagger a \rangle$; with an additional homodyne interference, however, it is $T=\langle b^\dagger b \rangle$, where $b=\sqrt{\kappa}/2\, a + \beta$ is a new operator representing the interference between the...
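The role of the offset $\beta$ is easiest to see with a purely classical amplitude: for an output of coherent amplitude $\alpha$, the interfered flux $|\sqrt{\kappa}/2\,\alpha + \beta|^2$ is nulled when $\beta$ cancels $\sqrt{\kappa}/2\,\alpha$ (a toy numerical check under this definition of $b$, with made-up values of $\alpha$):

```python
import numpy as np

kappa = 1.0
alpha = 0.3 - 0.2j                  # hypothetical coherent amplitude of the output
betas = np.linspace(-1, 1, 2001)    # scan a real homodyne offset

# coherent part of <b^dag b> for b = sqrt(kappa)/2 * a + beta,
# treating <a> = alpha classically
flux = np.abs(np.sqrt(kappa) / 2 * alpha + betas)**2

beta_opt = betas[np.argmin(flux)]
# the minimum sits at beta = -sqrt(kappa)/2 * Re(alpha); the imaginary
# part of alpha sets the residual that a real offset cannot cancel
```

With a complex $\beta$ (as in the notebook, where $\beta = -\sqrt{\kappa}/2\,\langle a\rangle$) the coherent amplitude is cancelled completely, leaving only the incoherent fluctuations.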
def calculate_rho_ss(delta_scan): H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \ delta_scan * ( sm.dag() * sm + a.dag() * a) - detuning * sm.dag() * sm return steadystate(H, c_op) delta_list = np.linspace(-6 * g, 9 * g, 200) rho_ss = parfor(calculate_rho_ss, delta_list) # ...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Visualize the emitted flux with and without interference The dashed black line shows the intensity without interference and the violet line shows the intensity with interference. The vertical gray line indicates the spectral position of the anharmonic polariton. Note its narrower linewidth due to the slower effective d...
plt.figure(figsize=(8,5)) plt.plot(delta_list/g, I/effective_gamma, 'k', linestyle='dashed', label='JC') plt.plot(delta_list/g, I_int/effective_gamma, 'blueviolet', label='JC w/ interference') plt.vlines(delta_s/g, 0, 0.7, 'gray') plt.xlim(-6, 9) plt.ylim(0, 0.7) plt.xlabel('Detuning [g]') plt.ylabel...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Calculate coherent/incoherent portions of emission from JC system and its $g^{(2)}(0)$ We note that $$g^{(2)}(0)=\frac{\langle a^\dagger a^\dagger a a \rangle}{\langle a^\dagger a \rangle^2}.$$
Om_list = kappa*np.logspace(-2, 1, 300)*np.sqrt(effective_gamma) def calculate_rho_ss(Om): H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \ delta_s*(sm.dag()*sm + a.dag()*a) - detuning*sm.dag()*sm return steadystate(H, c_op) rho_ss = parfor(calculate_rho_ss, Om_list) # decompose emissio...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Visualize the results The dashed black line in the top figure represents the coherent portion of the emission and can clearly be seen to dominate the emission for large driving strengths. Here, the emission significantly deviates from that of a two-level system, which saturates by these driving strengths. The lack of s...
plt.figure(figsize=(8,8)) plt.subplot(211) plt.semilogx(Om_list/np.sqrt(effective_gamma), abs(I_c)/kappa, 'k', linestyle='dashed', label='JC $I_\mathrm{c}$') plt.semilogx(Om_list/np.sqrt(effective_gamma), abs(I_inc)/kappa, 'r', linestyle='dashed', label='JC $I_\mathrm{inc}$') plt.xlabel(r'Dri...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Calculate homodyned JC emission Now we recalculate the coherent and incoherent portions as well as the $g^{(2)}(0)$ for the homodyned JC emission, but use the operator $b$ instead of $\sqrt{\kappa}/2\,a$. Thus $$g^{(2)}(0)=\frac{\langle b^\dagger b^\dagger b b \rangle}{\langle b^\dagger b \rangle^2}.$$
def calculate_rho_ss_c(Om): H_c = Om * (a_r + a_r.dag()) + delta_s * a_r.dag() * a_r return steadystate(H_c, c_op_r) rho_ss_c = parfor(calculate_rho_ss_c, Om_list) # calculate list of interference values for all driving strengths alpha_list = -expect(rho_ss_c, a_r) alpha_c_list = alpha_list.conjugate() # dec...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Visualize the results The dashed red and blue lines, which represent the TLS decomposition, are now matched well by the JC decomposition with optimal homodyne interference (red and blue). The dashed black line is shown again as a reminder of the JC system's coherent emission without interference, which does not saturate...
plt.figure(figsize=(8,8)) plt.subplot(211) plt.semilogx(Om_list_TLS, abs(I_c_TLS), linestyle='dashed', label='TLS $I_\mathrm{c}$') plt.semilogx(Om_list_TLS, abs(I_inc_TLS), 'r', linestyle='dashed', label='TLS $I_\mathrm{inc}$') plt.semilogx(Om_list/np.sqrt(effective_gamma), abs(...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Second-order coherence with delay We additionally consider the second-order coherence as a function of time delay, i.e. $$g^{(2)}(\tau)=\lim_{t\rightarrow\infty}\frac{\langle b^\dagger(t)b^\dagger(t+\tau)b(t+\tau)b(t)\rangle}{\langle b^\dagger(t)b(t)\rangle^2},$$ and show how it is calculated in the context of homodyne...
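The quantum-regression recipe described here — evolve $\sigma\rho_\mathrm{ss}\sigma^\dagger$ under the same Liouvillian and take expectation values along the way — can be sketched for the bare TLS with a 4×4 superoperator in numpy/scipy (an illustrative reimplementation with hypothetical drive values, not the notebook's qutip call):

```python
import numpy as np
from scipy.linalg import expm

gamma, Om = 1.0, 0.3                             # decay rate and drive (hypothetical)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma
H = Om * (sm + sm.conj().T)
c = np.sqrt(gamma) * sm
I2 = np.eye(2, dtype=complex)

# column-stacking convention: vec(A X B) = (B.T kron A) vec(X)
def sandwich(A, B):
    """Superoperator for rho -> A rho B."""
    return np.kron(B.T, A)

L = (-1j * (sandwich(H, I2) - sandwich(I2, H))
     + sandwich(c, c.conj().T)
     - 0.5 * (sandwich(c.conj().T @ c, I2) + sandwich(I2, c.conj().T @ c)))

# steady state: null vector of L, normalized to unit trace
w, v = np.linalg.eig(L)
rho_ss = v[:, np.argmin(np.abs(w))].reshape(2, 2, order='F')
rho_ss /= np.trace(rho_ss)
n_ss = np.trace(sm.conj().T @ sm @ rho_ss).real

def g2(tau):
    # quantum regression: evolve sm rho_ss sm^dag, then measure sigma^dag sigma
    chi0 = (sm @ rho_ss @ sm.conj().T).flatten(order='F')
    chi = (expm(L * tau) @ chi0).reshape(2, 2, order='F')
    return np.trace(sm.conj().T @ sm @ chi).real / n_ss**2
```

A TLS is perfectly antibunched, so this sketch should give $g^{(2)}(0)=0$ and $g^{(2)}(\tau)\rightarrow 1$ at long delays; the notebook does the same for the homodyned operator $b$.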
# first calculate the steady state H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \ delta_s * (sm.dag() * sm + a.dag() * a) - \ detuning * sm.dag() * sm rho0 = steadystate(H, c_op) taulist = np.linspace(0, 5/effective_gamma, 1000) # next evolve the states according the quantum regression theorem...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Visualize the comparison to TLS correlations At a moderate driving strength, the JC correlation (dashed black line) is seen to significantly deviate from that of the TLS (dotted purple line). On the other hand, after the optimal homodyne interference, the emission correlations (solid purple line) match the ideal correla...
plt.figure(figsize=(8,5)) l1, = plt.plot(taulist*effective_gamma, corr_vec_TLS/n_TLS**2, 'blueviolet', linestyle='dotted', label='TLS') plt.plot(taulist*effective_gamma, corr_vec/n**2, 'k', linestyle='dashed', label='JC') plt.plot(taulist*effective_gamma, corr_vec_int/n_int**2, 'bluevi...
examples/homodyned-Jaynes-Cummings-emission.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
<div class="span5 alert alert-info"> <h3>Exercise Set I</h3> <br/> <b>Exercise/Answers:</b> <br/> <li> Look at the histogram above. Tell a story about the average ratings per critic. <b> The average fresh rating per critic is around 0.6, with a minimum of 0.35 and a maximum of 0.81 </b> <li> What shape does the dis...
from sklearn.feature_extraction.text import CountVectorizer text = ['Hop on pop', 'Hop off pop', 'Hop Hop hop'] print("Original text is\n{}".format('\n'.join(text))) vectorizer = CountVectorizer(min_df=1)  # min_df=0 is rejected by recent scikit-learn; 1 keeps the same vocabulary here # call `fit` to build the vocabulary vectorizer.fit(text) # call `transform` to convert text to a bag of words ...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Naive Bayes From Bayes' Theorem, we have that $$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$ where $c$ represents a class or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the feature...
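The naive-Bayes computation described here can be written out from scratch on a toy corpus (an illustrative sketch with made-up documents and labels, not the rotten-tomatoes data):

```python
import numpy as np

# toy bag-of-words counts: rows = documents, columns = vocabulary terms
vocab = ['great', 'superb', 'boring', 'awful']
X = np.array([[2, 1, 0, 0],    # fresh review
              [1, 0, 0, 0],    # fresh review
              [0, 0, 2, 1]])   # rotten review
y = np.array([1, 1, 0])        # 1 = fresh, 0 = rotten
alpha = 1.0                    # Laplace smoothing

def fit(X, y, alpha):
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    counts = np.array([X[y == c].sum(axis=0) for c in classes])
    log_lik = np.log((counts + alpha) /
                     (counts.sum(axis=1, keepdims=True) + alpha * X.shape[1]))
    return classes, log_prior, log_lik

def predict(x, classes, log_prior, log_lik):
    # log P(c | x) up to a constant: log P(c) + sum_w x_w log P(w | c)
    scores = log_prior + log_lik @ x
    return classes[np.argmax(scores)]

model = fit(X, y, alpha)
```

`MultinomialNB` implements exactly this factorized log-posterior, just vectorized over sparse matrices.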
# your turn # split the data set into a training and test set from sklearn.model_selection import train_test_split from sklearn.naive_bayes import MultinomialNB X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5) clf = MultinomialNB() clf.fit(X_train, y_train) print('accuracy score on training s...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Picking Hyperparameters for Naive Bayes and Text Maintenance We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition...
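The effect of the smoothing parameter $\alpha$ can be shown in isolation: it keeps words that never appeared in a class from getting zero probability, and large values flatten the distribution toward uniform (a minimal numerical sketch with made-up counts):

```python
import numpy as np

# word counts for one class over a vocabulary of V terms;
# the third word has zero occurrences in this class
counts = np.array([10, 5, 0, 1])
V = len(counts)

def smoothed_probs(counts, alpha):
    """P(w | c) = (count_w + alpha) / (total + alpha * V)."""
    return (counts + alpha) / (counts.sum() + alpha * V)

p_unsmoothed = smoothed_probs(counts, 0.0)   # unseen word gets probability 0
p_laplace = smoothed_probs(counts, 1.0)      # every word gets some mass
p_huge = smoothed_probs(counts, 1e6)         # ~uniform 1/V for very large alpha
```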
# Your turn. # contruct the frequency of words vectorizer = CountVectorizer(stop_words='english') X = vectorizer.fit_transform(critics.quote) word_freq_df = pd.DataFrame({'term': vectorizer.get_feature_names(), 'occurrences':np.asarray(X.sum(axis=0)).ravel().tolist()}) word_freq_df['frequency'] = word_freq_df['occurren...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
<div class="span5 alert alert-info"> <h3>Exercise Set IV</h3> <p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p> <b> ANSWER: The function log_likelihood scores a model by the log of the probability of the observed labels under it, so we are optimizing for hyperparameters that make the observed data most probable </b> <p><b>Exercise:</b> Without writing any cod...
from sklearn.naive_bayes import MultinomialNB #the grid of parameters to search over alphas = [.1, 1, 5, 10, 50] best_min_df = 1 # YOUR TURN: put your value of min_df here. #Find the best value for alpha and min_df, and the best classifier best_alpha = None maxscore=-np.inf for alpha in alphas: vectorizer...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
<div class="span5 alert alert-info"> <h3>Exercise Set V: Working with the Best Parameters</h3> <p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p> <b/> ANSWER: Yes, it is a better classifier since it improv...
vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain) #your turn. Print the accuracy on the test and training dataset training_accuracy = clf.score(xtrain, ytrain) test_...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Interpretation What are the strongly predictive features? We use a neat trick to identify strongly predictive features (i.e. words): first, create a data set such that each row has exactly one feature (this is represented by the identity matrix); next, use the trained classifier to make predictions on this matrix; finally, sort the r...
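The identity-matrix trick can be reproduced on a toy model: each row of `np.eye(V)` is a pseudo-document containing exactly one word, so the classifier's posterior on that row is $P(\text{class} \mid \text{word})$. A from-scratch sketch with hypothetical class-conditional probabilities, mirroring what `predict_log_proba` does:

```python
import numpy as np

vocab = np.array(['superb', 'touching', 'boring', 'awful'])
# per-class word probabilities and priors (made-up numbers for illustration)
log_lik = np.log(np.array([[0.05, 0.05, 0.45, 0.45],    # class 0: rotten
                           [0.45, 0.45, 0.05, 0.05]]))  # class 1: fresh
log_prior = np.log(np.array([0.5, 0.5]))

X_eye = np.eye(len(vocab))               # one word per 'document'
scores = X_eye @ log_lik.T + log_prior   # log P(c) + log P(w | c), per word
# normalize rows to get log P(c | w)
log_post = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))

p_fresh_given_word = np.exp(log_post[:, 1])
order = np.argsort(p_fresh_given_word)
bad_words, good_words = vocab[order[:2]], vocab[order[-2:]]
```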
words = np.array(vectorizer.get_feature_names()) x = np.matrix(np.identity(xtest.shape[1]), copy=False) probs = clf.predict_log_proba(x)[:, 0] ind = np.argsort(probs) good_words = words[ind[:10]] bad_words = words[ind[-10:]] good_prob = probs[ind[:10]] bad_prob = probs[ind[-10:]] print("Good words\t P(fresh | w...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
<br/> <b>good words P(fresh | word) </b> <br/> touching 0.96 <br/> delight 0.95 <br/> delightful 0.95 <br/> brilliantly 0.94 <br/> energetic 0.94 <br/> superb 0.94 <br/> ensemble 0.93 <br/> childhood 0.93 <b...
x, y = make_xy(critics, vectorizer) prob = clf.predict_proba(x)[:, 0] predict = clf.predict(x) bad_rotten = np.argsort(prob[y == 0])[:5] bad_fresh = np.argsort(prob[y == 1])[-5:] print("Mis-predicted Rotten quotes") print('---------------------------') for row in bad_rotten: print(critics[y == 0].quote.iloc[row]...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
<div class="span5 alert alert-info"> <h3>Exercise Set VII: Predicting the Freshness for a New Review</h3> <br/> <div> <b>Exercise:</b> <ul> <li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'* <li> Is the result what y...
#your turn # Predicting the Freshness for a New Review docs_new = ['This movie is not remarkable, touching, or superb in any way'] X_new = vectorizer.transform(docs_new) X_new = X_new.tocsc() label = "Fresh" if clf.predict(X_new) == 1 else "Rotten"  # avoid shadowing the builtin str print('"', docs_new[0], '" ==> ', label)
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
<div class="span5 alert alert-info"> <h3>Exercise Set VIII: Enrichment</h3> <p> There are several additional things we could try. Try some of these as exercises: <ol> <li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram cont...
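A bigram feature extractor is a one-liner over the token stream (a plain-Python sketch; `CountVectorizer(ngram_range=(1, 2))` produces the same unigram-plus-bigram features internally):

```python
def ngrams(tokens, n=2):
    """All contiguous n-token phrases from a token list."""
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = 'hop on pop'.split()
bigrams = ngrams(tokens, 2)      # contiguous two-word phrases
features = tokens + bigrams      # unigrams + bigrams, as sklearn would emit
```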
def print_top_words(model, feature_names, n_top_words): for topic_idx, topic in enumerate(model.components_): print("Topic #%d:" % topic_idx) print(" ".join([feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) print() # Your turn def make_xy_bigram(critics...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Using bigrams from the nltk package
import itertools import pandas as pd from nltk.collocations import BigramCollocationFinder from nltk.metrics import BigramAssocMeasures def bigram_word_feats(words, score_fn=BigramAssocMeasures.chi_sq, n=200): bigram_finder = BigramCollocationFinder.from_words(words) bigrams = bigram_finder.nbest(score_fn, n...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Using a RANDOM FOREST classifier instead of Naive Bayes
from sklearn.model_selection import cross_val_score from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0) scores = cross_val_score(clf, X, y) scores.mean()
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Try adding supplemental features -- information about genre, director, cast, etc.
# Create a random forest classifier. By convention, clf means 'classifier' #clf = RandomForestClassifier(n_jobs=2) # Train the classifier to take the training features and learn how they relate # to the training y (the species) #clf.fit(train[features], y) critics.head()
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Use word2vec or Latent Dirichlet Allocation to group words into topics and use those topics for prediction.
from sklearn.decomposition import NMF, LatentDirichletAllocation vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] lda = LatentDirichletAllocation(n_components=10, max_iter=5, learning_method='on...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Use TF-IDF weighting instead of word counts.
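The textbook weighting can be computed by hand (plain `idf = log(N / df)` here; note that sklearn's `TfidfVectorizer` uses a smoothed variant, `log((1 + N) / (1 + df)) + 1`, followed by L2 normalization):

```python
import numpy as np

# term counts: rows = documents, columns = terms
tf = np.array([[3, 0, 1],
               [0, 2, 1],
               [1, 1, 1]], dtype=float)
N = tf.shape[0]
df = (tf > 0).sum(axis=0)      # number of documents containing each term
idf = np.log(N / df)           # rare terms get boosted
tfidf = tf * idf
# a term that appears in every document has idf = 0 and carries no weight
```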
# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction # http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref from sklearn.feature_extraction.text import TfidfVectorizer tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english') Xtfidf=tfidfvectorizer.fit_tr...
Mini_Project_Naive_Bayes.ipynb
anonyXmous/CapstoneProject
unlicense
Set parameters
data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Add a bad channel raw.info['bads'] ...
0.16/_downloads/plot_sensor_connectivity.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Imports
import tensorflow_hub as hub import joblib import gzip import kipoiseq from kipoiseq import Interval import pyfaidx import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl import seaborn as sns %matplotlib inline %config InlineBackend.figure_format = 'retina' transform_path = '...
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Download files Download and index the reference genome fasta file Credit to Genome Reference Consortium: https://www.ncbi.nlm.nih.gov/grc Schneider et al 2017 http://dx.doi.org/10.1101/gr.213611.116: Evaluation of GRCh38 and de novo haploid genome assemblies demonstrates the enduring quality of the reference assembly
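The downloaded fasta is indexed with pyfaidx below; the file format itself is simple enough to parse by hand (a minimal sketch on an inline example with a hypothetical contig name, not a replacement for pyfaidx's indexed random access):

```python
def parse_fasta(text):
    """Map each '>name' header to its concatenated sequence."""
    seqs, name = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('>'):
            name = line[1:].split()[0]   # record id is the first header token
            seqs[name] = []
        else:
            seqs[name].append(line.upper())
    return {k: ''.join(v) for k, v in seqs.items()}

example = """>chrTest hypothetical contig
ACGTACGT
acgtNNNN
>chr2
TTTT"""
genome = parse_fasta(example)
```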
!mkdir -p /root/data !wget -O - http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz | gunzip -c > {fasta_file} pyfaidx.Faidx(fasta_file) !ls /root/data
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Download the clinvar file. Reference: Landrum MJ, Lee JM, Benson M, Brown GR, Chao C, Chitipiralla S, Gu B, Hart J, Hoffman D, Jang W, Karapetyan K, Katz K, Liu C, Maddipatla Z, Malheiro A, McDaniel K, Ovetsky M, Riley G, Zhou G, Holmes JB, Kattman BL, Maglott DR. ClinVar: improving access to variant interpretations an...
!wget https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/clinvar.vcf.gz -O /root/data/clinvar.vcf.gz
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Code (double click on the title to show the code)
# @title `Enformer`, `EnformerScoreVariantsNormalized`, `EnformerScoreVariantsPCANormalized`, SEQUENCE_LENGTH = 393216 class Enformer: def __init__(self, tfhub_url): self._model = hub.load(tfhub_url).model def predict_on_batch(self, inputs): predictions = self._model.predict_on_batch(inputs) return {...
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0