markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
**WARNING**: it's not OK to extrapolate the validity of the model outside of the range of values where we have observed data. For example, there is no reason to believe the model's predictions about ELV for 200 or 2000 hours of stats training: | result.predict({'hours':[200]}) | _____no_output_____ | MIT | stats_overview/04_LINEAR_MODELS.ipynb | minireference/noBSstatsnotebooks |
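A defensive pattern for this is to refuse predictions outside the observed training range. A minimal stdlib-only sketch — the `safe_predict` helper, the range bounds, and the toy linear model standing in for `result.predict` are all hypothetical, not part of the notebook:

```python
def safe_predict(predict_fn, hours, observed_range=(0, 100)):
    """Refuse to extrapolate outside the range of observed data."""
    lo, hi = observed_range
    out_of_range = [h for h in hours if not (lo <= h <= hi)]
    if out_of_range:
        raise ValueError(f"Refusing to extrapolate for hours={out_of_range}; "
                         f"model was fit on data in [{lo}, {hi}].")
    return [predict_fn(h) for h in hours]

# Toy linear model standing in for result.predict
linear_model = lambda h: 100 + 5 * h

print(safe_predict(linear_model, [20, 50]))  # within the observed range: OK
try:
    safe_predict(linear_model, [200])        # extrapolation: rejected
except ValueError as e:
    print(e)
```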
Set up your connection. The next cell contains code to check whether you already have a sascfg_personal.py file in your current conda environment. If you do not, one is created for you. Next, [choose your access method](https://sassoftware.github.io/saspy/install.html#choosing-an-access-method) and then read through the configura... | # Setup for the configuration file - for running inside of a conda environment
saspyPfad = f"C:\\Users\\{getpass.getuser()}\\.conda\\envs\\{os.environ['CONDA_DEFAULT_ENV']}\\Lib\\site-packages\\saspy\\"
saspycfg_personal = Path(f'{saspyPfad}sascfg_personal.py')
if saspycfg_personal.is_file():
print('All setup and r... | All setup and ready to go
| Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
Configuration: prod = { 'iomhost': 'rfnk01-0068.exnet.sas.com', <-- SAS Host Name 'iomport': 8591, <-- SAS Workspace Server Port 'class_id': '440196d4-90f0-11d0-9f41-00a024bb830c', <-- static, if the value is wrong use proc iomoperate 'provider': 'sas.iomprovider', <-- static 'encoding': 'wind... | # If no configuration name is specified, you get a list of the configured ones
# sas = saspy.SASsession(cfgname='prod')
sas = saspy.SASsession() | Please enter the name of the SAS Config you wish to run. Available Configs are: ['prod', 'dev'] prod
Username: sasdemo
Password: ········
SAS Connection established. Workspace UniqueIdentifier is 5A182D7A-E928-4CA9-8EC4-9BE60ECB2A79
| Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
Explore some interactions with SAS. Getting a feeling for what SASPy can do. | # Let's take a quick look at all the different methods and variables provided by SASSession object
dir(sas)
# Get a list of all tables inside of the library sashelp
table_df = sas.list_tables(libref='sashelp', results='pandas')
# Search for a table containing a capital C in its name
table_df[table_df['MEMNAME'].str.con... | _____no_output_____ | Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
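The same `str.contains` filtering works on any pandas DataFrame of table metadata. A self-contained sketch on made-up table names (the real `MEMNAME` values depend on your SAS library):

```python
import pandas as pd

# Hypothetical library listing, mimicking the MEMNAME column from list_tables
table_df = pd.DataFrame({"MEMNAME": ["CARS", "CLASS", "BASEBALL", "IRIS"]})

# Keep only tables whose name contains a capital C
with_c = table_df[table_df["MEMNAME"].str.contains("C")]
print(with_c["MEMNAME"].tolist())
```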
Reading in data from local disk with Pandas and uploading it to SAS: 1. First we are going to read in a local CSV file; 2. Create a copy of the base data file in SAS; 3. Append the local data to the data stored in SAS and sort it. The Opel data set: Make,Model,Type,Origin,DriveTrain,MSRP,Invoice,EngineSize,Cylinders,Horsepo... | # Read a local csv file with pandas and take a look
opel = pd.read_csv('cars_opel.csv')
opel.describe()
# Looks like the horsepower isn't right, let's fix that
opel.loc[:, 'Horsepower'] *= 10
opel.describe()
# Create a working copy of the cars data set
sas.submitLOG('''data work.cars; set sashelp.cars; run;''')
# Appen... | _____no_output_____ | Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
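Multiplying the whole Horsepower column assumes every row is off by the same factor. A more defensive variant (a sketch on made-up numbers, not the actual Opel file) only rescales rows that look implausibly small:

```python
import pandas as pd

# Toy stand-in for the Opel data: two rows mis-scaled by a factor of 10
opel = pd.DataFrame({"Horsepower": [14.7, 22.0, 147.0]})

# Only rescale values that are implausibly low for a car
mask = opel["Horsepower"] < 50
opel.loc[mask, "Horsepower"] *= 10
print(opel["Horsepower"].tolist())
```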
Reading in data from SAS and manipulating it with Pandas | # Short form is sd2df()
df = sas.sasdata2dataframe('cars', 'sashelp', dsopts={'where': 'make="BMW"'})
type(df) | _____no_output_____ | Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
Now that the data set is available as a Pandas DataFrame you can use it in e.g. a sklearn pipeline | df | _____no_output_____ | Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
Creating a model. The data can be found [here](https://www.kaggle.com/gsr9099/best-model-for-credit-card-approval) | # Read two local csv files
df_applications = pd.read_csv('application_record.csv')
df_credit = pd.read_csv('credit_record.csv')
# Get a feel for the data
print(df_applications.columns)
print(df_applications.head(5))
df_applications.describe()
# Join the two data sets together
df_application_credit = df_applications.joi... | _____no_output_____ | Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
The HPSPLIT procedure is a high-performance procedure that builds tree-based statistical models for classification and regression. The procedure produces classification trees, which model a categorical response, and regression trees, which model a continuous response. Both types of trees are referred to as decision tre... | hpsplit_model = stat.hpsplit(data=application_credit_part,
cls=var_class,
model="status(event='N')= FLAG_OWN_CAR FLAG_OWN_REALTY OCCUPATION_TYPE MONTHS_BALANCE AMT_INCOME_TOTAL",
code='trescore.sas',
... | _____no_output_____ | Apache-2.0 | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples |
VacationPy ---- Note: Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. | # Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import json
# Import API key
from config import g_key
| _____no_output_____ | ADSL | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge |
Store Part I results into a DataFrame: load the CSV exported in Part I into a DataFrame. | #read in weather data
weather_data = pd.read_csv('../cities.csv')
weather_data.head() | _____no_output_____ | ADSL | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge |
Humidity Heatmap: configure gmaps; use the Lat and Lng as locations and Humidity as the weight; add the heatmap layer to the map. | #Filter columns to be used in weather dataframe
cols = ["City", "Cloudiness", "Country", "Date", "Humidity", "Lat", "Lng", "Temp", "Wind Speed"]
weather_data = weather_data[cols]
#configure gmaps
gmaps.configure(api_key=g_key)
#create coordinates
locations = weather_data[["Lat", "Lng"]].astype(float)
humidity = w... | _____no_output_____ | ADSL | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge |
Create new DataFrame fitting weather criteria: narrow down the cities to fit the weather conditions and drop any rows with null values. | weather_data = weather_data[weather_data["Temp"].between(70, 80, inclusive=True)]
weather_data = weather_data[weather_data["Wind Speed"] < 10]
weather_data = weather_data[weather_data["Cloudiness"] == 0]
weather_data.head() | _____no_output_____ | ADSL | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge |
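The filters above can also be applied in one step by combining boolean masks with `&`, which avoids repeatedly reassigning the DataFrame. A sketch on toy data (the city rows are made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "City": ["a", "b", "c", "d"],
    "Temp": [75, 85, 72, 78],
    "Wind Speed": [5, 3, 12, 8],
    "Cloudiness": [0, 0, 0, 20],
})

# All three conditions at once: each mask is a boolean Series
ideal = toy[
    toy["Temp"].between(70, 80)
    & (toy["Wind Speed"] < 10)
    & (toy["Cloudiness"] == 0)
]
print(ideal["City"].tolist())
```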
Hotel Map: store the data into a variable named `hotel_df`; add a "Hotel Name" column to the DataFrame; set parameters to search for hotels within 5000 meters; hit the Google Places API for each city's coordinates; store the first hotel result into the DataFrame; plot markers on top of the heatmap. | hotel_df = weather_data.copy()
hotel_df["Hotel Name"]= ''
hotel_df
params = {
"types": "lodging",
"radius":5000,
"key": g_key
}
# Use the lat/lng we recovered to identify nearby hotels
for index, row in hotel_df.iterrows():
# get lat, lng from df
lat = row["Lat"]
lng = row["Lng"]
# change location eac... | _____no_output_____ | ADSL | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge |
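The loop body (truncated above) updates the `location` parameter for each row before calling the Places API. A stdlib-only sketch of that per-row parameter construction, without making any requests — the coordinates and the key are placeholders:

```python
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

# Hypothetical rows standing in for hotel_df.iterrows()
rows = [{"Lat": 48.85, "Lng": 2.35}, {"Lat": 35.68, "Lng": 139.69}]

requests_to_send = []
for row in rows:
    params = {
        "types": "lodging",
        "radius": 5000,
        "key": "YOUR_KEY",                        # placeholder, not a real key
        "location": f"{row['Lat']},{row['Lng']}"  # Places expects "lat,lng"
    }
    requests_to_send.append((base_url, params))

print(requests_to_send[0][1]["location"])
```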
Support Vector Machine (SVM) Tutorial. Follow from: [link](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47) - SVM can be used for both regression and classification problems. - The goal of SVM models is to find a hyperplane in an N-dimensional space that dis... | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
%matplotlib inline
plt.style.use('seaborn')
df = pd.read_csv('data/Iris.csv')
df.head()
df = df.drop(['Id'], axis=1)
df... | _____no_output_____ | MIT | SVM.ipynb | bbrighttaer/data_science_nbs |
SVM implementation with Numpy | train_f1 = x_train[:, 0].reshape(90, 1)
train_f2 = x_train[:, 1].reshape(90, 1)
w1, w2 = np.zeros((90, 1)), np.zeros((90, 1))
epochs = 1
alpha = 1e-4
while epochs < 10000:
y = w1 * train_f1 + w2 * train_f2
prod = y * y_train
count = 0
for val in prod:
if val >= 1:
cost = 0
... | _____no_output_____ | MIT | SVM.ipynb | bbrighttaer/data_science_nbs |
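The loop above (truncated) is stochastic gradient descent on the hinge loss. A compact, self-contained variant with a scalar weight vector and ±1 labels — a sketch of the same idea on made-up data, not the tutorial's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data with +/-1 labels
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0.2, 1.0, -1.0)

# Stochastic sub-gradient descent on the regularized hinge loss
w = np.zeros(2)
b = 0.0
alpha, lam = 0.1, 0.01  # learning rate, regularization strength
for epoch in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) >= 1:      # outside the margin: only shrink w
            w -= alpha * lam * w
        else:                           # inside the margin or misclassified
            w -= alpha * (lam * w - yi * xi)
            b += alpha * yi

preds = np.where(X @ w + b > 0, 1.0, -1.0)
print("training accuracy:", (preds == y).mean())
```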
Evaluation | index = list(range(10, 90))
w1 = np.delete(w1, index).reshape(10, 1)
w2 = np.delete(w2, index).reshape(10, 1)
## Extract the test data features
test_f1 = x_test[:,0].reshape(10, 1)
test_f2 = x_test[:,1].reshape(10, 1)
## Predict
y_pred = w1 * test_f1 + w2 * test_f2
predictions = []
for val in y_pred:
if val > 1:
... | 0.9
| MIT | SVM.ipynb | bbrighttaer/data_science_nbs |
SVM via sklearn | from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
print(accuracy_score(y_test, y_pred)) | 1.0
| MIT | SVM.ipynb | bbrighttaer/data_science_nbs |
This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/). Supervised Learning In-Depth: Random Forests. Previously we saw a powerful discriminative classifier, **Support Vector Machines**. Here we'll take a look a... | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn') | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
Motivating Random Forests: Decision Trees. Random forests are an example of an *ensemble learner* built on decision trees. For this reason we'll start by discussing decision trees themselves. Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in... | import fig_code
fig_code.plot_example_decision_tree() | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
The binary splitting makes this extremely efficient. As always, though, the trick is to *ask the right questions*. This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information. Creating a ... | from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow'); | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
We have some convenience functions in the repository that help | from fig_code import visualize_tree, plot_tree_interactive | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
Now using IPython's ``interact`` (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits: | plot_tree_interactive(X, y); | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
Notice that at each increase in depth, every node is split in two **except** those nodes which contain only a single class. The result is a very fast **non-parametric** classification, and can be extremely useful in practice. **Question: Do you see any problems with this?** Decision Trees and over-fitting. One issue with ... | from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False) | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
The details of the classifications are completely different! That is an indication of **over-fitting**: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal. Ensembles of Estimators: Random Forests. One possible way to address over-fitting is to use a... | def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]... | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
See how the details of the model change as a function of the sample, while the larger characteristics remain the same!The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer: | from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False); | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
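The intuition behind combining randomly perturbed models can be seen without sklearn at all. A toy sketch (entirely made up, not the Random Forest algorithm itself) with three hand-built classifiers, each wrong on a different third of the inputs, where the majority vote is right everywhere:

```python
from collections import Counter

inputs = list(range(9))
truth = [x % 2 for x in inputs]  # "true" label: parity of the input

# Three weak classifiers, each incorrect on a different third of the inputs
def clf_a(x): return (1 - x % 2) if x < 3 else x % 2
def clf_b(x): return (1 - x % 2) if 3 <= x < 6 else x % 2
def clf_c(x): return (1 - x % 2) if x >= 6 else x % 2

def majority_vote(x):
    votes = [clf_a(x), clf_b(x), clf_c(x)]
    return Counter(votes).most_common(1)[0][0]

single_acc = sum(clf_a(x) == t for x, t in zip(inputs, truth)) / len(inputs)
vote_acc = sum(majority_vote(x) == t for x, t in zip(inputs, truth)) / len(inputs)
print(single_acc, vote_acc)
```

Because each classifier errs on a *different* subset, every input still gets two correct votes out of three, so the ensemble outperforms any single member.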
By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data! *(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the [scikit-learn documentation](http://sc... | from sklearn.ensemble import RandomForestRegressor
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3,... | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model! Example: Random Forest for Classifying Digits. We previously saw the **hand-written digits** data. Let's use that here to test the efficacy of the SVM and Random Forest... | from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape) | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
To remind us what we're looking at, we'll visualize the first few data points: | # set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=... | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
We can quickly classify the digits using a decision tree as follows: | from sklearn.model_selection import train_test_split
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest) | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
We can check the accuracy of this classifier: | metrics.accuracy_score(ypred, ytest) | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
and for good measure, plot the confusion matrix: | plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label"); | _____no_output_____ | BSD-3-Clause | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial |
Talks markdown generator for academicpages. Takes a TSV of talks with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core ... | import pandas as pd
import os | _____no_output_____ | MIT | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io |
Data format. The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV. - Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults... | !type talks.tsv | title	type	url_slug	venue	date	location	talk_url	description
Closing the Loop on Collections Review Conference presentation talk-1 North Carolina Serials Conference 2020-03-01 Chapel Hill, NC
Breaking expectations for technical services assessment: outcomes over output Conference presentation talk-2 Southeastern Libr... | MIT | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io |
Import TSV. Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can mo... | talks = pd.read_csv("talks.tsv", sep="\t", header=0)
talks | _____no_output_____ | MIT | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io |
Escape special characters. YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look less readable in raw format, but they are parsed and rendered nicely. | html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
if type(text) is str:
return "".join(html_escape_table.get(c,c) for c in text)
else:
return "False" | _____no_output_____ | MIT | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io |
Creating the markdown files. This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page. | loc_dict = {}
for row, item in talks.iterrows():
md_filename = str(item.date) + "-" + item.url_slug + ".md"
html_filename = str(item.date) + "-" + item.url_slug
year = item.date[:4]
md = "---\ntitle: \"" + item.title + '"\n'
md += "collection: talks" + "\n"
if len(str(item.typ... | _____no_output_____ | MIT | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io |
These files are in the talks directory, one directory below where we're working from. | !ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md | ---
title: "Tutorial 1 on Relevant Topic in Your Field"
collection: talks
type: "Tutorial"
permalink: /talks/2013-03-01-tutorial-1
venue: "UC-Berkeley Institute for Testing Science"
date: 2013-03-01
location: "Berkeley CA, USA"
---
[More information here](http://exampleurl.com)
This is a description of yo... | MIT | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io |
How to build an RNA-seq logistic regression classifier with BigQuery ML. Check out other notebooks at our [Community Notebooks Repository](https://github.com/isb-cgc/Community-Notebooks)! - **Title:** How to build an RNA-seq logistic regression classifier with BigQuery ML - **Author:** John Phan - **Created:** 2021-07-19-... | # GCP libraries
from google.cloud import bigquery
from google.colab import auth | _____no_output_____ | Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Authenticate. Before using BigQuery, we need to get authorization for access to BigQuery and the Google Cloud. For more information see ['Quick Start Guide to ISB-CGC'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html). Alternative authentication methods can be found [her... | # if you're using Google Colab, authenticate to gcloud with the following
auth.authenticate_user()
# alternatively, use the gcloud SDK
#!gcloud auth application-default login | _____no_output_____ | Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Parameters. Customize the following parameters based on your notebook, execution environment, or project. BigQuery ML must create and store classification models, so be sure that you have write access to the locations stored in the "bq_dataset" and "bq_project" variables. | # set the google project that will be billed for this notebook's computations
google_project = 'google-project' ## CHANGE ME
# bq project for storing ML model
bq_project = 'bq-project' ## CHANGE ME
# bq dataset for storing ML model
bq_dataset = 'scratch' ## CHANGE ME
# name of temporary table for data
bq_tmp_table =... | _____no_output_____ | Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
BigQuery Client. Create the BigQuery client. | # Create a client to access the data within BigQuery
client = bigquery.Client(google_project) | _____no_output_____ | Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Create a Table with a Subset of the Gene Expression Data. Pull RNA-seq gene expression data from the TCGA RNA-seq BigQuery table, join it with clinical labels, and pivot the table so that it can be used with BigQuery ML. In this example, we will label the samples based on therapy outcome. "Complete Remission/Response" w... | tmp_table_query = client.query(("""
BEGIN
CREATE OR REPLACE TABLE `{bq_project}.{bq_dataset}.{bq_tmp_table}` AS
SELECT * FROM (
SELECT
labels.case_barcode as sample,
labels.data_partition as data_partition,
labels.response_label AS label,
ge.gene_name AS gene_name,
-- Multiple sa... | <google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3894001250>
| Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Let's take a look at this subset table. The data has been pivoted such that each of the 33 genes is available as a column that can be "SELECTED" in a query. In addition, the "label" and "data_partition" columns simplify data handling for classifier training and evaluation. | tmp_table_data = client.query(("""
SELECT
* --usually not recommended to use *, but in this case, we want to see all of the 33 genes
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_tmp_table=bq_tmp_table
)).result().to_dataframe()
print(... | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 264 entries, 0 to 263
Data columns (total 36 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 sample 264 non-null object
1 data_partition 264 non-null object
2 label 264 non-null ... | Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Train the Machine Learning Model. Now we can train a classifier using BigQuery ML with the data stored in the subset table. This model will be stored in the location specified by the "bq_ml_model" variable, and can be reused to predict samples in the future. We pass three options to the BQ ML model: model_type, auto_clas... | # create ML model using BigQuery
ml_model_query = client.query(("""
CREATE OR REPLACE MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`
OPTIONS
(
model_type='LOGISTIC_REG',
auto_class_weights=TRUE,
input_label_cols=['label']
) AS
SELECT * EXCEPT(sample, data_partition) -- when training, w... | <google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3893663810>
Model(reference=ModelReference(project='isb-project-zero', dataset_id='jhp_scratch', project_id='tcga_ov_therapy_ml_lr_model'))
| Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Evaluate the Machine Learning Model. Once the model has been trained and stored, we can evaluate the model's performance using the "testing" dataset from our subset table. Evaluating a BQ ML model is generally less expensive than training. Use the following query to evaluate the BQ ML model. Note that we're using the "d... | ml_eval = client.query(("""
SELECT * FROM ML.EVALUATE (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'testing'
)
)
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_m... | _____no_output_____ | Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
Predict Outcome for One or More Samples. ML.EVALUATE evaluates a model's performance, but does not produce actual predictions for each sample. In order to do that, we need to use the ML.PREDICT function. The syntax is similar to that of the ML.EVALUATE function and returns "label", "predicted_label", "predicted_label_pr... | ml_predict = client.query(("""
SELECT
label,
predicted_label,
predicted_label_probs
FROM ML.PREDICT (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'testing' -- Use the testing dataset... | Accuracy: 0.6230769230769231
| Apache-2.0 | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks |
FireCARES ops management notebook. Using this notebook: in order to use this notebook, a single production/test web node will need to be bootstrapped w/ ipython and django-shell-plus python libraries. After bootstrapping is complete and while forwarding a local port to the port that the ipython notebook server will be r... | import psycopg2
from firecares.tasks import update
from firecares.utils import dictfetchall
from django.db import connections
from django.conf import settings
from django.core.management import call_command
from IPython.display import display
import pandas as pd
fd = {'fdid': '18M04', 'state': 'WA'}
nfirs = connections... | _____no_output_____ | MIT | Ops MGMT.ipynb | FireCARES/firecares |
Solving vertex cover with a quantum annealer. The problem of vertex cover is, given an undirected graph $G = (V, E)$, colour the smallest number of vertices such that each edge $e \in E$ is connected to a coloured vertex. This notebook works through the process of creating a random graph, translating to an optimization... | import dimod
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
n_vertices = 5
n_edges = 6
small_graph = nx.gnm_random_graph(n_vertices, n_edges)
nx.draw(small_graph, with_labels=True) | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
Constructing the Hamiltonian I showed in class that the objective function for vertex cover looks like this:\begin{equation} \sum_{(u,v) \in E} (1 - x_u) (1 - x_v) + \gamma \sum_{v \in V} x_v\end{equation}We want to find an assignment of the $x_u$ of 1 (coloured) or 0 (uncoloured) that _minimizes_ this function. The f... | γ = 0.8
Q = {x : 1 for x in small_graph.edges()}
r = {x : (γ - small_graph.degree[x]) for x in small_graph.nodes} | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
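That these coefficients really encode the objective can be checked by brute force: expanding $(1-x_u)(1-x_v)$ gives the quadratic edge term $x_u x_v$ with coefficient $+1$ and the linear term $\gamma - \deg(v)$, plus the constant $|E|$. A stdlib-only sketch on a small fixed graph (the graph here is made up, not the random one above):

```python
from itertools import product

gamma = 0.8
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # a small fixed graph
nodes = [0, 1, 2, 3]
deg = {v: sum(v in e for e in edges) for v in nodes}

def objective(x):
    # Original form: uncovered-edge penalty plus gamma * coloured vertices
    return sum((1 - x[u]) * (1 - x[v]) for u, v in edges) + gamma * sum(x)

def qubo_energy(x):
    # QUBO form: quadratic edge terms plus (gamma - degree) linear terms
    quad = sum(x[u] * x[v] for u, v in edges)
    lin = sum((gamma - deg[v]) * x[v] for v in nodes)
    return quad + lin

# The two differ only by the constant |E|, so their minima coincide
for x in product([0, 1], repeat=len(nodes)):
    assert abs(objective(x) - (qubo_energy(x) + len(edges))) < 1e-9

best = min(product([0, 1], repeat=len(nodes)), key=objective)
print(best)
```

The minimizer colours two vertices, which is indeed a minimum vertex cover of this graph.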
Let's convert it to the appropriate data structure, and solve using the exact solver. | bqm = dimod.BinaryQuadraticModel(r, Q, 0, dimod.BINARY)
response = dimod.ExactSolver().sample(bqm)
print(f"Sample energy = {next(response.data(['energy']))[0]}") | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
Let's print the graph with proper colours included | colour_assignments = next(response.data(['sample']))[0]
colours = ['grey' if colour_assignments[x] == 0 else 'red' for x in range(len(colour_assignments))]
nx.draw(small_graph, with_labels=True, node_color=colours) | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
Scaling up... That one was easy enough to solve by hand. Let's try a much larger instance... | n_vertices = 20
n_edges = 60
large_graph = nx.gnm_random_graph(n_vertices, n_edges)
nx.draw(large_graph, with_labels=True)
# Create h, J and put it into the exact solver
γ = 0.8
Q = {x : 1 for x in large_graph.edges()}
r = {x : (γ - large_graph.degree[x]) for x in large_graph.nodes}
bqm = dimod.BinaryQuadraticModel(r... | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
Running on the D-Wave You'll only be able to run the next few cells if you have D-Wave access. We will send the same graph as before to the D-Wave QPU and see what kind of results we get back! | from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite
sampler = EmbeddingComposite(DWaveSampler())
ising_conversion = bqm.to_ising()
h, J = ising_conversion[0], ising_conversion[1]
response = sampler.sample_ising(h, J, num_reads = 1000)
best_solution =np.sort(response.re... | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
Here is a scatter plot of all the different energies we got out, against the number of times each solution occurred. | plt.scatter(response.record['energy'], response.record['num_occurrences'])
response.record['num_occurrences'] | _____no_output_____ | MIT | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF |
Notebook Template. This notebook is stubbed out with some project paths, loading of environment variables, and common package imports to speed up the process of starting a new project. It is highly recommended you copy and rename this notebook following the naming convention outlined in the readme of naming notebooks with... | import importlib
import os
from pathlib import Path
import sys
from arcgis.features import GeoAccessor, GeoSeriesAccessor
from arcgis.gis import GIS
from dotenv import load_dotenv, find_dotenv
import pandas as pd
# import arcpy if available
if importlib.util.find_spec("arcpy") is not None:
import arcpy
# load env... | _____no_output_____ | Apache-2.0 | notebooks/notebook_template.ipynb | knu2xs/la-covid-challenge |
Read DC data | fname = "../data/ChungCheonDC/20150101000000.apr"
survey = readReservoirDC(fname)
dobsAppres = survey.dobs
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='volt', sameratio=False)
cb = dat[2]
cb.set_label("Apprent resistivity (ohm-m)")
geom = np.hsta... | _____no_output_____ | MIT | notebook/DCinversion.ipynb | sgkang/DamGeophysics |
FloPy: Using FloPy to simplify the use of the MT3DMS ```SSM``` package. A multi-component transport demonstration. | import os
import sys
import numpy as np
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('flopy version: {}'.format... | 3.8.10 (default, May 19 2021, 11:01:55)
[Clang 10.0.0 ]
numpy version: 1.19.2
flopy version: 3.3.4
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
First, we will create a simple model structure | nlay, nrow, ncol = 10, 10, 10
perlen = np.zeros((10), dtype=float) + 10
nper = len(perlen)
ibound = np.ones((nlay,nrow,ncol), dtype=int)
botm = np.arange(-1,-11,-1)
top = 0. | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Create the ```MODFLOW``` packages | model_ws = 'data'
modelname = 'ssmex'
mf = flopy.modflow.Modflow(modelname, model_ws=model_ws)
dis = flopy.modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol,
perlen=perlen, nper=nper, botm=botm, top=top,
steady=False)
bas = flopy.modflow.ModflowBas(mf... | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
We'll track the cell locations for the ```SSM``` data using the ```MODFLOW``` boundary conditions. Get a dictionary (```dict```) that has the ```SSM``` ```itype``` for each of the boundary types. | itype = flopy.mt3d.Mt3dSsm.itype_dict()
print(itype)
print(flopy.mt3d.Mt3dSsm.get_default_dtype())
ssm_data = {} | {'CHD': 1, 'BAS6': 1, 'PBC': 1, 'WEL': 2, 'DRN': 3, 'RIV': 4, 'GHB': 5, 'MAS': 15, 'CC': -1}
[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
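The dtype printed above fixes the layout of each SSM record. As an illustration only — plain NumPy, no flopy required — one such record for the GHB cell used later might be built like this:

```python
import numpy as np

# Field names mirror the flopy dtype printed above; values are illustrative.
ssm_dtype = [("k", "<i8"), ("i", "<i8"), ("j", "<i8"),
             ("css", "<f4"), ("itype", "<i8")]
itype = {"CHD": 1, "BAS6": 1, "PBC": 1, "WEL": 2, "DRN": 3,
         "RIV": 4, "GHB": 5, "MAS": 15, "CC": -1}

# One GHB source cell at (layer 4, row 4, col 4) with concentration 1.0
rec = np.array([(4, 4, 4, 1.0, itype["GHB"])], dtype=ssm_dtype)
```

flopy accepts plain tuples in `ssm_data` and converts them to this layout internally; with extra species, extra `cssm(nn)` float fields are appended.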
Add a general head boundary (```ghb```). The general head boundary head (```bhead```) is 0.1 for the first 5 stress periods with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. Then ```bhead``` is increased to 0.25 and comp_1 concentration is reduced to 0.5 and comp_2 co... | ghb_data = {}
print(flopy.modflow.ModflowGhb.get_default_dtype())
ghb_data[0] = [(4, 4, 4, 0.1, 1.5)]
ssm_data[0] = [(4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)]
ghb_data[5] = [(4, 4, 4, 0.25, 1.5)]
ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[0... | [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('bhead', '<f4'), ('cond', '<f4')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Add an injection ```well```. The injection rate (```flux```) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING: since we changed the ```SSM``` data in stress period 6, we need to add the well to the ssm_data for stress period 6. | wel_data = {}
print(flopy.modflow.ModflowWel.get_default_dtype())
wel_data[0] = [(0, 4, 8, 10.0)]
ssm_data[0].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
ssm_data[5].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0)) | [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('flux', '<f4')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Add the ```GHB``` and ```WEL``` packages to the ```mf``` ```MODFLOW``` object instance. | ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=ghb_data)
wel = flopy.modflow.ModflowWel(mf, stress_period_data=wel_data) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Create the ```MT3DMS``` packages | mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=model_ws)
btn = flopy.mt3d.Mt3dBtn(mt, sconc=0, ncomp=2, sconc2=50.0)
adv = flopy.mt3d.Mt3dAdv(mt)
ssm = flopy.mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)
gcg = flopy.mt3d.Mt3dGcg(mt) | found 'rch' in modflow model, resetting crch to 0.0
SSM: setting crch for component 2 to zero. kwarg name crch2
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Let's verify that ```stress_period_data``` has the right ```dtype``` | print(ssm.stress_period_data.dtype) | [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8'), ('cssm(01)', '<f4'), ('cssm(02)', '<f4')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Create the ```SEAWAT``` packages | swt = flopy.seawat.Seawat(modflowmodel=mf, mt3dmodel=mt,
modelname=modelname, namefile_ext='nam_swt', model_ws=model_ws)
vdf = flopy.seawat.SeawatVdf(swt, mtdnconc=0, iwtable=0, indense=-1)
mf.write_input()
mt.write_input()
swt.write_input() | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
And finally, modify the ```vdf``` package to fix ```indense```. | fname = modelname + '.vdf'
f = open(os.path.join(model_ws, fname),'r')
lines = f.readlines()
f.close()
f = open(os.path.join(model_ws, fname),'w')
for line in lines:
f.write(line)
for kper in range(nper):
f.write("-1\n")
f.close()
| _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy |
Clean the Project Directory | import glob
import os
from pathlib import Path
import shutil
exec(Path('startup.py').read_text())
DEBUG=False
VERBOSE=True
def clean(d='../', pats=['.ipynb*','__pycache__']):
""" Clean the working directory or a directory given by d.
"""
if DEBUG: print("debugging clean")
if VERBOSE: print("running `cle... | running `clean` in `VERBOSE` mode
files matching '.ipynb*':
[WindowsPath('../etc/.ipynb_checkpoints'), WindowsPath('../gcv/.ipynb_checkpoints'), WindowsPath('../notes/.ipynb_checkpoints')]
removing ..\etc\.ipynb_checkpoints
removing ..\gcv\.ipynb_checkpoints
removing ..\notes\.ipynb_checkpoints
files matching '__pycach... | MIT | gcv/notes/clean.ipynb | fuzzyklein/gcv-lab |
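The `clean` helper above is truncated in this dump, but its job is to glob for checkpoint and cache directories and remove them. A self-contained sketch of the same idea, run against a throwaway temp directory so nothing real is deleted (names and defaults here are assumptions, not the notebook's exact code):

```python
import shutil
import tempfile
from pathlib import Path

def clean(d, pats=(".ipynb_checkpoints", "__pycache__"), verbose=False):
    """Remove files/dirs under d whose names match any pattern in pats."""
    removed = []
    for pat in pats:
        # materialize the generator before deleting, to avoid mutating
        # the tree while rglob is still walking it
        for hit in list(Path(d).rglob(pat)):
            if verbose:
                print("removing", hit)
            if hit.is_dir():
                shutil.rmtree(hit)
            else:
                hit.unlink()
            removed.append(hit)
    return removed

# Demo inside a throwaway directory so nothing real is touched
tmp = Path(tempfile.mkdtemp())
(tmp / "notes" / ".ipynb_checkpoints").mkdir(parents=True)
(tmp / "src" / "__pycache__").mkdir(parents=True)
gone = clean(tmp)
```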
Introduction: In a prior notebook, documents were partitioned by assigning them to the domain with the highest Dice similarity of their term and structure occurrences. The occurrences of terms and structures in each domain are what we refer to as the domain "archetype." Here, we'll assess whether the observed similarity ... | import os
import pandas as pd
import numpy as np
import sys
sys.path.append("..")
import utilities
from ontology import ontology
from style import style
version = 190325 # Document-term matrix version
clf = "lr" # Classifier used to generate the framework
suffix = "_" + clf # Suffix for term lists
n_iter = 1000 # Iter... | _____no_output_____ | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
Brain activation coordinates | act_bin = utilities.load_coordinates()
print("Document N={}, Structure N={}".format(
act_bin.shape[0], act_bin.shape[1])) | Document N=18155, Structure N=118
| MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
Document-term matrix | dtm_bin = utilities.load_doc_term_matrix(version=version, binarize=True)
print("Document N={}, Term N={}".format(
dtm_bin.shape[0], dtm_bin.shape[1])) | Document N=18155, Term N=4107
| MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
Document splits | splits = {}
# splits["train"] = [int(pmid.strip()) for pmid in open("../data/splits/train.txt")]
splits["validation"] = [int(pmid.strip()) for pmid in open("../data/splits/validation.txt")]
splits["test"] = [int(pmid.strip()) for pmid in open("../data/splits/test.txt")]
for split, split_pmids in splits.items():
pri... | _____no_output_____ | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
Document assignments and distances. Indexing by min:max will be faster in subsequent computations | from collections import OrderedDict
from scipy.spatial.distance import cdist
def load_doc2dom(k, clf="lr"):
doc2dom_df = pd.read_csv("../partition/data/doc2dom_k{:02d}_{}.csv".format(k, clf),
header=None, index_col=0)
doc2dom = {int(pmid): str(dom.values[0]) for pmid, dom in doc2do... | Processing k=02
Processing k=03
Processing k=04
Processing k=05
Processing k=06
Processing k=07
Processing k=08
Processing k=09
Processing k=10
Processing k=11
Processing k=12
Processing k=13
Processing k=14
Processing k=15
Processing k=16
Processing k=17
Processing k=18
Processing k=19
Processing k=20
Processing k=21
... | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
Index by PMID and sort by structure | structures = sorted(list(set(act_bin.columns)))
act_structs = act_bin.loc[pmids, structures] | _____no_output_____ | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
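The min:max remark can be made concrete: on a sorted index, a single contiguous label slice returns the same rows as a list of labels but avoids per-label lookups. A small illustrative check with a synthetic stand-in for the PMID index:

```python
import numpy as np
import pandas as pd

idx = np.arange(0, 2000, 2)                    # sorted stand-in for PMIDs
df = pd.DataFrame({"x": np.random.rand(idx.size)}, index=idx)

wanted = idx[100:200]                          # a contiguous block of labels
by_list = df.loc[wanted]                       # label-by-label lookup
by_slice = df.loc[wanted.min():wanted.max()]   # one contiguous slice
```

Note that `.loc` label slices are inclusive of both endpoints, which is why min:max reproduces the full block.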
Compute domain modularity: observed values. Distances internal and external to articles in each domain | dists_int, dists_ext = {}, {}
for k in circuit_counts:
dists_int[k], dists_ext[k] = {}, {}
lists, circuits = ontology.load_ontology(k, path="../ontology/", suffix=suffix)
domains = list(OrderedDict.fromkeys(lists["DOMAIN"]))
for split, split_pmids in splits.items():
dists_int[k][split], di... | _____no_output_____ | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
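The internal/external distance computation above can be reduced to a toy example: for one domain, compare member-to-member Dice distances against member-to-outsider distances. The arrays here are invented binary document vectors, not real data:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Toy version of the internal/external distance idea for one domain:
# rows are documents as binary structure vectors.
dom = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0]], dtype=bool)    # documents in the domain
rest = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]], dtype=bool)   # documents outside it

internal = cdist(dom, dom, metric="dice")
external = cdist(dom, rest, metric="dice")

# Mean over distinct pairs only (the diagonal self-distances are 0)
mean_int = internal[np.triu_indices(len(dom), k=1)].mean()
mean_ext = external.mean()
ratio = mean_ext / mean_int   # > 1 suggests the domain is modular
```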
Domain-averaged ratio of external to internal distances | means = {split: np.empty((len(circuit_counts),)) for split in splits.keys()}
for k_i, k in enumerate(circuit_counts):
file_obs = "data/kvals/mod_obs_k{:02d}_{}_{}.csv".format(k, clf, split)
if not os.path.isfile(file_obs):
print("Processing k={:02d}".format(k))
lists, circuit... | Processing k=02
Processing k=03
Processing k=04
Processing k=05
Processing k=06
Processing k=07
Processing k=08
Processing k=09
Processing k=10
Processing k=11
Processing k=12
Processing k=13
Processing k=14
| MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
Null distributions | nulls = {split: np.empty((len(circuit_counts),n_iter)) for split in splits.keys()}
for split, split_pmids in splits.items():
for k_i, k in enumerate(circuit_counts):
file_null = "data/kvals/mod_null_k{:02d}_{}_{}iter.csv".format(k, split, n_iter)
if not os.path.isfile(file_null):
prin... | _____no_output_____ | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
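A hedged sketch of how such a permutation null can be built: shuffle the domain labels and recompute the external-to-internal ratio each iteration. The distance matrix here is random stand-in data, not the real archetypes:

```python
import numpy as np

rng = np.random.default_rng(0)
dists = rng.random((20, 20))              # stand-in pairwise distance matrix
labels = np.array([0] * 10 + [1] * 10)    # stand-in domain assignments

def ext_int_ratio(d, lab):
    same = lab[:, None] == lab[None, :]
    off_diag = ~np.eye(len(lab), dtype=bool)
    return d[~same].mean() / d[same & off_diag].mean()

observed = ext_int_ratio(dists, labels)
null = np.array([ext_int_ratio(dists, rng.permutation(labels))
                 for _ in range(200)])
p_val = (null >= observed).mean()         # one-sided permutation p-value
```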
Bootstrap distributions | boots = {split: np.empty((len(circuit_counts),n_iter)) for split in splits.keys()}
for split, split_pmids in splits.items():
for k_i, k in enumerate(circuit_counts):
file_boot = "data/kvals/mod_boot_k{:02d}_{}_{}iter.csv".format(k, split, n_iter)
if not os.path.isfile(file_boot):
prin... | _____no_output_____ | MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
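The bootstrap side works the same way but resamples documents with replacement instead of shuffling labels. An illustrative sketch on synthetic per-document statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
ratios = rng.normal(loc=1.5, scale=0.1, size=500)   # stand-in per-doc stats

# Resample with replacement and recompute the mean each iteration
boot = np.array([rng.choice(ratios, size=ratios.size, replace=True).mean()
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [0.05, 99.95])          # ~99.9% interval
```

Percentiles of the bootstrap distribution then give the confidence band drawn around the observed curve.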
Plot results over k | from matplotlib import rcParams
%matplotlib inline
rcParams["axes.linewidth"] = 1.5
for split in splits.keys():
print(split.upper())
utilities.plot_stats_by_k(means, nulls, boots, circuit_counts, metric="mod",
split=split, op_k=6, clf=clf, interval=0.999,
... | VALIDATION
| MIT | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine |
create a support vector classifier and manually set the gamma | from sklearn import svm, metrics
clf = svm.SVC(gamma=0.001, C=100.)
| _____no_output_____ | MIT | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc |
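Why `gamma` matters for this classifier: the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2) decays faster as gamma grows, so each support vector's influence becomes more local. A quick numeric illustration — the two "images" here are invented:

```python
import numpy as np

def rbf(x, y, gamma):
    # k(x, y) = exp(-gamma * ||x - y||^2), the kernel behind kernel='rbf'
    return np.exp(-gamma * np.sum((x - y) ** 2))

a = np.zeros(64)                 # two flattened 8x8 "images" (invented)
b = np.full(64, 0.5)
wide = rbf(a, b, gamma=0.001)    # the notebook's gamma: smooth, far-reaching
narrow = rbf(a, b, gamma=1.0)    # sharp: influence is nearly local
```

With gamma=0.001 the two points still look similar to the kernel; with gamma=1.0 they are effectively unrelated, which is why large gamma tends to overfit.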
fit the classifier to the data and use all the images in our dataset except the last one |
clf.fit(digits.data[:-1], digits.target[:-1])
clf.predict(digits.data[-1:]) | _____no_output_____ | MIT | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc |
reshape the image data into an 8x8 array prior to rendering it | import matplotlib.pyplot as plt
plt.imshow(digits.data[-1:].reshape(8,8), cmap=plt.cm.gray_r)
plt.show() | _____no_output_____ | MIT | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc |
persist the model using pickle and load it again to ensure it works | import pickle
s = pickle.dumps(clf)
with open(b"digits.model.obj", "wb") as f:
pickle.dump(clf, f)
clf2 = pickle.loads(s)
clf2.predict(digits.data[0:1])
plt.imshow(digits.data[0:1].reshape(8,8), cmap=plt.cm.gray_r)
plt.show() | _____no_output_____ | MIT | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc |
alternately use joblib.dump | import joblib  # note: `from sklearn.externals import joblib` was removed in scikit-learn 0.23
joblib.dump(clf, 'digits.model.pkl') | _____no_output_____ | MIT | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc |
example from http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.htmlsphx-glr-auto-examples-classification-plot-digits-classification-py | images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
# To apply a classifier on this da... | Classification report for classifier SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False):
precision recall f1-score support
... | MIT | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc |
Residual Networks: Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](http... | import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
... | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
1 - The problem of very deep neural networks. Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.* The main benefit of a very deep network is that it can re... | # GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main... | out = [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
| MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
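Stripped of convolutions and batch norm, the identity block is just "add the input back before the final ReLU", which lets the block default to near-identity behaviour when the learned path contributes nothing. A toy NumPy version makes that visible; weights and shapes are illustrative only:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def identity_block_1d(x, w1, w2):
    main = relu(x @ w1) @ w2    # simplified "main path" (no conv/BN here)
    return relu(main + x)       # the shortcut: add x back, then ReLU

x = np.array([1.0, -2.0, 3.0])
zeros = np.zeros((3, 3))
# With zero weights the main path vanishes and the block reduces to ReLU(x)
out = identity_block_1d(x, zeros, zeros)
```

This fallback-to-identity property is exactly what makes very deep stacks of these blocks trainable: an extra block can do no harm.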
**Expected Output**: **out** [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003] 2.2 - The convolutional block. The ResNet "convolutional block" is the second block type. You can use this type of block when the input and output dimensions... | # GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV... | out = [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
| MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
**Expected Output**: **out** [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603] 3 - Building your first ResNet model (50 layers). You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the... | # GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Implementation of the popular ResNet50 the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGP... | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below. | model = ResNet50(input_shape = (64, 64, 3), classes = 6) | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
As seen in the Keras Tutorial Notebook, prior training a model, you need to configure the learning process by compiling the model. | model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
The model is now ready to be trained. The only thing you need is a dataset. Let's load the SIGNS Dataset. **Figure 6** : **SIGNS dataset** | X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("n... | number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
| MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
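`convert_to_one_hot` comes from the course's helper module; its effect is easy to reproduce. A minimal stand-in — note the course helper returns shape (classes, m) and is transposed with `.T` above, while this sketch returns (m, classes) directly:

```python
import numpy as np

def one_hot(labels, n_classes):
    # class index -> indicator row, shape (m, n_classes)
    return np.eye(n_classes, dtype=float)[np.asarray(labels).reshape(-1)]

Y = np.array([0, 2, 5, 1])
Y_oh = one_hot(Y, 6)
```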
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch. | model.fit(X_train, Y_train, epochs = 2, batch_size = 32) | Epoch 1/2
| MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
**Expected Output**: ** Epoch 1/2** loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours. ** Epoch 2/2** loss: between 1 and 5, acc: between 0.2 and 0.5, you should ... | preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1])) | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
**Expected Output**: **Test Accuracy** between 0.16 and 0.25 For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check corre... | model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1])) | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy. Congratulations on finishing this assignment! You've now implemented a state-of-the-a... | img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x/255.0
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)  # note: scipy.misc.imread is removed in modern SciPy; imageio.imread is the usual replacement
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = "... | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
You can also print a summary of your model by running the following code. | model.summary() | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png". | plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg')) | _____no_output_____ | MIT | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization |
Project 3 Sandbox-Blue-O, NLP using web scraping to create the dataset Objective: Determine if posts are in the SpaceX Subreddit or the Blue Origin Subreddit. We'll utilize the RESTful API from pushshift.io to scrape subreddit posts from r/blueorigin and r/spacex and see if we can use the bag-of-words approach to pre... | import requests
from bs4 import BeautifulSoup
import pandas as pd
import lebowski as dude
from sklearn.feature_extraction.text import CountVectorizer
import re, regex
# Establish a connection to the API and search for a specific keyword. Maybe we'll add this function to the
# lebowski library? Or maybe make a new an... | _____no_output_____ | CC0-1.0 | code/sandbox-Blue-O.ipynb | MattPat1981/new_space_race_nlp |
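The bag-of-words idea mentioned above, stripped to its core: each post becomes a multiset of token counts, and classification features come from those counts. A dependency-free sketch with invented posts (scikit-learn's `CountVectorizer` does the same thing, plus vocabulary management, at scale):

```python
import re
from collections import Counter

posts = [
    "SpaceX launches Starship again",
    "Blue Origin launches New Shepard",
]

def bag_of_words(text):
    # lowercase, split into alphabetic tokens, count occurrences
    return Counter(re.findall(r"[a-z]+", text.lower()))

bags = [bag_of_words(p) for p in posts]
shared = set(bags[0]) & set(bags[1])   # vocabulary the two posts share
```

Tokens that appear in only one subreddit (like "spacex" or "shepard" here) are exactly the features a bag-of-words classifier leans on.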