15,800 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
304 - Medical Entity Extraction with a BiLSTM
In this tutorial we use a Bidirectional LSTM entity extractor from the MMLSpark model downloader to extract entities from PubMed medical abstracts.
Our goal is to identify useful entities in a block of free-form text. This is a nontrivial task because entities might be referenced in the text using a variety of synonyms, abbreviations, or formats. Our target output for this model is a set of tags that specify what kind of entity is referenced. The model we use was trained on a large dataset of publicly tagged PubMed abstracts. An example annotated sequence is given below; "O" represents no tag
Step1: Get the model and extract the data.
Step2: Download the embeddings and the tokenizer
We use the nltk punkt sentence and word tokenizers and a set of embeddings trained on PubMed Articles
Step3: Load the embeddings and create functions for encoding sentences
Step4: Run the CNTKModel
Step5: Show the annotated text | <ASSISTANT_TASK:>
Python Code:
from mmlspark import CNTKModel, ModelDownloader
from pyspark.sql.functions import udf, col
from pyspark.sql.types import IntegerType, ArrayType, FloatType, StringType
from pyspark.sql import Row
from os.path import abspath, join
import numpy as np
import pickle
from nltk.tokenize import sent_tokenize, word_tokenize
import os, tarfile, pickle
import urllib.request
import nltk
Explanation: 304 - Medical Entity Extraction with a BiLSTM
In this tutorial we use a Bidirectional LSTM entity extractor from the MMLSpark model downloader to extract entities from PubMed medical abstracts.
Our goal is to identify useful entities in a block of free-form text. This is a nontrivial task because entities might be referenced in the text using a variety of synonyms, abbreviations, or formats. Our target output for this model is a set of tags that specify what kind of entity is referenced. The model we use was trained on a large dataset of publicly tagged PubMed abstracts. An example annotated sequence is given below; "O" represents no tag:
|I-Chemical | O |I-Chemical | O | O |I-Chemical | O |I-Chemical | O | O | O | O |I-Disease |I-Disease| O | O |
|:---: |:---:|:---: |:---:|:---:|:---: |:---:|:---: |:---:|:---: |:---:|:---:|:---: |:---: |:---:|:---: |
|Baricitinib| , |Methotrexate| , | or |Baricitinib|Plus |Methotrexate| in |Patients|with |Early|Rheumatoid|Arthritis| Who |Had...|
End of explanation
modelName = "BiLSTM"
modelDir = abspath("models")
d = ModelDownloader(spark, "wasb://" + modelDir)
modelSchema = d.downloadByName(modelName)
modelName = "BiLSTM"
modelDir = abspath("models")
d = ModelDownloader(spark, "file://" + modelDir)
modelSchema = d.downloadByName(modelName)
Explanation: Get the model and extract the data.
End of explanation
nltk.download("punkt", download_dir=modelDir)
nltk.data.path.append(modelDir)
wordEmbFileName = "WordEmbeddings_PubMed.pkl"
pickleFile = join(abspath("models"), wordEmbFileName)
if not os.path.isfile(pickleFile):
urllib.request.urlretrieve("https://mmlspark.blob.core.windows.net/datasets/" + wordEmbFileName, pickleFile)
Explanation: Download the embeddings and the tokenizer
We use the nltk punkt sentence and word tokenizers and a set of embeddings trained on PubMed Articles
End of explanation
pickleContent = pickle.load(open(pickleFile, "rb"), encoding="latin-1")
wordToIndex = pickleContent["word_to_index"]
wordvectors = pickleContent["wordvectors"]
classToEntity = pickleContent["class_to_entity"]
nClasses = len(classToEntity)
nFeatures = wordvectors.shape[1]
maxSentenceLen = 613
content = "Baricitinib, Methotrexate, or Baricitinib Plus Methotrexate in Patients with Early Rheumatoid\
Arthritis Who Had Received Limited or No Treatment with Disease-Modifying-Anti-Rheumatic-Drugs (DMARDs):\
Phase 3 Trial Results. Keywords: Janus kinase (JAK), methotrexate (MTX) and rheumatoid arthritis (RA) and\
Clinical research. In 2 completed phase 3 studies, baricitinib (bari) improved disease activity with a\
satisfactory safety profile in patients (pts) with moderately-to-severely active RA who were inadequate\
responders to either conventional synthetic1 or biologic2DMARDs. This abstract reports results from a\
phase 3 study of bari administered as monotherapy or in combination with methotrexate (MTX) to pts with\
early active RA who had limited or no prior treatment with DMARDs. MTX monotherapy was the active comparator."
sentences = sent_tokenize(content)
df = spark.createDataFrame(enumerate(sentences), ["index","sentence"])
# Add the tokenizers to all worker nodes
def prepNLTK(partition):
localPath = abspath("nltk")
nltk.download("punkt", localPath)
nltk.data.path.append(localPath)
return partition
df = df.rdd.mapPartitions(prepNLTK).toDF()
tokenizeUDF = udf(word_tokenize, ArrayType(StringType()))
df = df.withColumn("tokens",tokenizeUDF("sentence"))
countUDF = udf(len, IntegerType())
df = df.withColumn("count",countUDF("tokens"))
def wordToEmb(word):
return wordvectors[wordToIndex.get(word.lower(), wordToIndex["UNK"])]
def featurize(tokens):
X = np.zeros((maxSentenceLen, nFeatures))
X[-len(tokens):,:] = np.array([wordToEmb(word) for word in tokens])
return [float(x) for x in X.reshape(maxSentenceLen, nFeatures).flatten()]
featurizeUDF = udf(featurize, ArrayType(FloatType()))
df = df.withColumn("features", featurizeUDF("tokens"))
df.show()
Explanation: Load the embeddings and create functions for encoding sentences
End of explanation
model = CNTKModel() \
.setModelLocation(spark, modelSchema.uri) \
.setInputCol("features") \
.setOutputCol("probs") \
.setOutputNodeIndex(0) \
.setMiniBatchSize(1)
df = model.transform(df).cache()
df.show()
def probsToEntities(probs, wordCount):
reshaped_probs = np.array(probs).reshape(maxSentenceLen, nClasses)
reshaped_probs = reshaped_probs[-wordCount:,:]
return [classToEntity[np.argmax(probs)] for probs in reshaped_probs]
toEntityUDF = udf(probsToEntities,ArrayType(StringType()))
df = df.withColumn("entities", toEntityUDF("probs", "count"))
df.show()
Explanation: Run the CNTKModel
End of explanation
# Color Code the Text based on the entity type
colors = {
"B-Disease": "blue",
"I-Disease":"blue",
"B-Drug":"lime",
"I-Drug":"lime",
"B-Chemical":"lime",
"I-Chemical":"lime",
"O":"black",
"NONE":"black"
}
def prettyPrint(words, annotations):
formattedWords = []
for word,annotation in zip(words,annotations):
formattedWord = "<font size = '2' color = '{}'>{}</font>".format(colors[annotation], word)
if annotation in {"O","NONE"}:
formattedWords.append(formattedWord)
else:
formattedWords.append("<b>{}</b>".format(formattedWord))
return " ".join(formattedWords)
prettyPrintUDF = udf(prettyPrint, StringType())
df = df.withColumn("formattedSentence", prettyPrintUDF("tokens", "entities")) \
.select("formattedSentence")
sentences = [row["formattedSentence"] for row in df.collect()]
df.registerTempTable("df")
from IPython.core.display import display, HTML
for sentence in sentences:
display(HTML(sentence))
%%sql -q -o df
select * from df
%%local
sentences =df["formattedSentence"]
from IPython.core.display import display, HTML
for sentence in sentences:
display(HTML(sentence))
Explanation: Show the annotated text
End of explanation
<END_TASK> |
15,801 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The Hacker Within Spring 2017 survey
by R. Stuart Geiger, freely licensed CC-BY 4.0, MIT license
Importing and processing data
Importing libraries
Step1: Importing data and previewing
Step2: Creating two dataframes
Step3: Topic interest
Each topic (e.g. Python, R, GitHub) has one cell, with a list based on the items checked.
If someone clicked "I want this at THW", there will be a 1.
If someone clicked "I really want this at THW," there will be a 2.
If someone clicked "I know something about this..." there will be a 3.
These are mutually independent -- if someone clicked all of them, the value would be "1, 2, 3" and so on.
Assumptions for calculating interest
Step4: Results
Step5: Topic expertise
Step6: Meta questions about THW
Step7: Personal experience with scientific computing
Step8: What skill level should we aim for?
Step9: What should our sessions look like? | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: The Hacker Within Spring 2017 survey
by R. Stuart Geiger, freely licensed CC-BY 4.0, MIT license
Importing and processing data
Importing libraries
End of explanation
df = pd.read_csv("survey.tsv",sep="\t")
df[0:4]
Explanation: Importing data and previewing
End of explanation
df_topics = df
df_topics = df_topics.drop(['opt_out', 'Skill level', 'Personal experience', 'Presentation style'], axis=1)
df_meta = df
df_meta = df[['Skill level', 'Personal experience', 'Presentation style']]
Explanation: Creating two dataframes: df_topics for interest/experience about topics and df_meta for questions about THW
End of explanation
topic_interest = {}
topic_teaching = {}
for topic in df_topics:
topic_interest[topic] = 0
topic_teaching[topic] = 0
for row in df_topics[topic]:
# if row contains only value 1, increment interest dict by 1
if str(row).find('1')>=0 and str(row).find('2')==-1:
topic_interest[topic] += 1
# if row contains value 2, increment interest dict by 3
if str(row).find('2')>=0:
topic_interest[topic] += 3
if str(row).find('3')>=0:
topic_teaching[topic] += 1
Explanation: Topic interest
Each topic (e.g. Python, R, GitHub) has one cell, with a list based on the items checked.
If someone clicked "I want this at THW", there will be a 1.
If someone clicked "I really want this at THW," there will be a 2.
If someone clicked "I know something about this..." there will be a 3.
These are mutually independent -- if someone clicked all of them, the value would be "1, 2, 3" and so on.
Assumptions for calculating interest: If someone clicked that they just wanted a topic, add 1 to the topic's score. If someone clicked that they really wanted it, add 3 to the topic's score. If they clicked both, just add 3, not 4.
End of explanation
topic_interest_df = pd.DataFrame.from_dict(topic_interest, orient="index")
topic_interest_df.sort_values([0], ascending=False)
topic_interest_df = topic_interest_df.sort_values([0], ascending=True)
topic_interest_df.plot(figsize=[8,14], kind='barh', fontsize=20)
Explanation: Results
End of explanation
topic_teaching_df = pd.DataFrame.from_dict(topic_teaching, orient="index")
topic_teaching_df = topic_teaching_df[topic_teaching_df[0] != 0]
topic_teaching_df.sort_values([0], ascending=False)
topic_teaching_df = topic_teaching_df.sort_values([0], ascending=True)
topic_teaching_df.plot(figsize=[8,10], kind='barh', fontsize=20)
Explanation: Topic expertise
End of explanation
df_meta['Personal experience'].replace([1, 2, 3], ['1: Beginner', '2: Intermediate', '3: Advanced'], inplace=True)
df_meta['Skill level'].replace([1, 2, 3], ['1: Beginner', '2: Intermediate', '3: Advanced'], inplace=True)
df_meta['Presentation style'].replace([1,2,3,4,5], ["1: 100% presentation / 0% hackathon", "2: 75% presentation / 25% hackathon", "3: 50% presentation / 50% hackathon", "4: 25% presentation / 75% hackathon", "5: 100% hackathon"], inplace = True)
df_meta = df_meta.dropna()
df_meta[0:4]
Explanation: Meta questions about THW
End of explanation
pe_df = df_meta['Personal experience'].value_counts(sort=False).sort_index(ascending=False)
pe_plot = pe_df.plot(kind='barh', fontsize=20, figsize=[8,4])
plt.title("What is your personal experience with scientific computing?", size=20)
Explanation: Personal experience with scientific computing
End of explanation
skill_df = df_meta['Skill level'].value_counts(sort=False).sort_values(ascending=False)
skill_plot = skill_df.plot(kind='barh', fontsize=20, figsize=[8,4])
plt.title("What skill level should we aim for?", size=20)
Explanation: What skill level should we aim for?
End of explanation
style_df = df_meta['Presentation style'].value_counts(sort=False).sort_index(ascending=False)
style_plot = style_df.plot(kind='barh', fontsize=20, figsize=[8,4])
plt.title("Session format", size=20)
Explanation: What should our sessions look like?
End of explanation
<END_TASK> |
15,802 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step1: Install Bazel with Bazelisk
Step2: Build .aar files | <ASSISTANT_TASK:>
Python Code:
# Create folders
!mkdir -p '/android/sdk'
# Download and move android SDK tools to specific folders
!wget -q 'https://dl.google.com/android/repository/tools_r25.2.5-linux.zip'
!unzip 'tools_r25.2.5-linux.zip'
!mv '/content/tools' '/android/sdk'
# Copy paste the folder
!cp -r /android/sdk/tools /android/android-sdk-linux
# Download NDK, unzip and move contents
!wget 'https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip'
!unzip 'android-ndk-r19c-linux-x86_64.zip'
!mv /content/android-ndk-r19c /content/ndk
!mv '/content/ndk' '/android'
# Copy paste the folder
!cp -r /android/ndk /android/android-ndk-r19c
# Remove .zip files
!rm 'tools_r25.2.5-linux.zip'
!rm 'android-ndk-r19c-linux-x86_64.zip'
# Make android ndk executable to all users
!chmod -R go=u '/android'
# Set and view environment variables
%env PATH = /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/android/sdk/tools:/android/sdk/platform-tools:/android/ndk
%env ANDROID_SDK_API_LEVEL=29
%env ANDROID_API_LEVEL=29
%env ANDROID_BUILD_TOOLS_VERSION=29.0.2
%env ANDROID_DEV_HOME=/android
%env ANDROID_NDK_API_LEVEL=21
%env ANDROID_NDK_FILENAME=android-ndk-r19c-linux-x86_64.zip
%env ANDROID_NDK_HOME=/android/ndk
%env ANDROID_NDK_URL=https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip
%env ANDROID_SDK_FILENAME=tools_r25.2.5-linux.zip
%env ANDROID_SDK_HOME=/android/sdk
#%env ANDROID_HOME=/android/sdk
%env ANDROID_SDK_URL=https://dl.google.com/android/repository/tools_r25.2.5-linux.zip
#!echo $PATH
!export -p
# Install specific versions of sdk, tools etc.
!android update sdk --no-ui -a \
--filter tools,platform-tools,android-29,build-tools-29.0.2
Explanation: Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Build TensorFlow Lite Support libraries with Bazel
Set up Android environment
End of explanation
# Download Latest version of Bazelisk
!wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
# Make script executable
!chmod +x bazelisk-linux-amd64
# Adding to the path
!sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel
# Extract bazel info
!bazel
# Clone TensorFlow Lite Support repository OR upload your custom folder to build
!git clone https://github.com/tensorflow/tflite-support.git
# Move into tflite-support folder
%cd /content/tflite-support/
!ls
Explanation: Install Bazel with Bazelisk
End of explanation
#@title Select library. { display-mode: "form" }
library = 'Support library' #@param ["Support library", "Task Vision library", "Task Text library", "Task Audio library","Metadata library","C++ image_classifier","C++ image_objector","C++ image_segmenter","C++ image_embedder","C++ nl_classifier","C++ bert_nl_classifier", "C++ bert_question_answerer", "C++ metadata_extractor"]
print('You selected:', library)
if library == 'Support library':
library = '//tensorflow_lite_support/java:tensorflowlite_support.aar'
elif library == 'Task Vision library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/vision:task-library-vision'
elif library == 'Task Text library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/text:task-library-text'
elif library == 'Task Audio library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/audio:task-library-audio'
elif library == 'Metadata library':
library = '//tensorflow_lite_support/metadata/java:tensorflow-lite-support-metadata-lib'
elif library == 'C++ image_classifier':
library = '//tensorflow_lite_support/cc/task/vision:image_classifier'
elif library == 'C++ image_objector':
library = '//tensorflow_lite_support/cc/task/vision:image_objector'
elif library == 'C++ image_segmenter':
library = '//tensorflow_lite_support/cc/task/vision:image_segmenter'
elif library == 'C++ image_embedder':
library = '//tensorflow_lite_support/cc/task/vision:image_embedder'
elif library == 'C++ nl_classifier':
library = '//tensorflow_lite_support/cc/task/text/nlclassifier:nl_classifier'
elif library == 'C++ bert_nl_classifier':
library = '//tensorflow_lite_support/cc/task/text/nlclassifier:bert_nl_classifier'
elif library == 'C++ bert_question_answerer':
library = '//tensorflow_lite_support/cc/task/text/qa:bert_question_answerer'
elif library == 'C++ metadata_extractor':
library = '//tensorflow_lite_support/metadata/cc:metadata_extractor'
#@title Select platform(s). { display-mode: "form" }
platforms = 'arm64-v8a,armeabi-v7a' #@param ["arm64-v8a,armeabi-v7a","x86", "x86_64", "arm64-v8a", "armeabi-v7a","x86,x86_64,arm64-v8a,armeabi-v7a"]
print('You selected:', platforms)
# Build library
!bazel build \
--fat_apk_cpu='{platforms}' \
'{library}'
Explanation: Build .aar files
End of explanation
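# Optional sanity check (an added suggestion, not part of the original notebook; paths are
# assumptions): list the artifacts Bazel produced. The exact location under bazel-bin
# depends on the target that was selected above.
!find bazel-bin -name "*.aar" -o -name "*.jar" | head -20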
<END_TASK> |
15,803 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to NumPy
NumPy (usually pronounced "num-pie") is a numerical computing package for Python released by Travis Oliphant in 2005. It provides the multi-dimensional array data structure ndarray and is mainly used for linear algebra computations involving vectors and matrices. Internally it builds on the BLAS and LAPACK libraries; because it is implemented in C it can only be used with CPython, not with other Python implementations such as Jython, IronPython, or PyPy. Since NumPy's array operations run loops implemented in C, they are faster than Python loops, and the query-like capabilities of array indexing let complicated computations be written as short, simple code.
NumPy
Numerical computing library for Python
Usable only with CPython
Based on BLAS/LAPACK
Provides the ndarray multi-dimensional array data structure
Fast array operations thanks to internal (C-level) loops
Array indexing features
The ndarray class
The core of NumPy is the class called ndarray, which supports a multi-dimensional array data structure. Let's actually use ndarray to create a one-dimensional array (a vector)
Step1: Looking at the representation of the resulting ndarray object, it looks like the same structure as a list, except that it is wrapped in array(). An ordinary list holding the elements 0, 1, 2, 3 is created as follows.
Step2: However, the ndarray object a and the list object b differ in many ways. A list object is internally implemented like a linked list, so each element may have a different type. An ndarray object, on the other hand, has a contiguous memory layout like a C array, so all elements must have the same type. In exchange for this constraint, element access and loop execution are faster.
Another characteristic of the ndarray class is that it supports vectorized operations, which apply an operation to every element of the array at once. For example, to square every element of an ndarray object, it is enough to square the object itself.
Step3: For a list object, a loop has to be used instead, as shown below.
Step4: Measuring the execution time of each snippet with IPython's %time magic command shows that the vectorized ndarray operation runs faster than the list loop. One reason is that memory for the ndarray is allocated in a single block; in addition, vectorized operations use loops implemented inside NumPy, so the iteration itself runs faster.
Therefore, one coding convention that should always be followed to improve Python performance is to avoid Python's own loops (for statements) whenever they can be replaced with vectorized operations on NumPy ndarrays.
Python list
Elements may have different types
Implemented as a linked list
Large memory footprint and slow
No vectorized operations
NumPy ndarray
Elements share a single type
Contiguous memory layout
Memory-efficient and fast to compute
Vectorized operations available
For reference, multiplying an ordinary list object by an integer simply repeats the list, increasing its size by that factor.
Step5: Creating multi-dimensional arrays
ndarray is short for N-dimensional Array. As the name suggests, the ndarray class supports not only one-dimensional arrays similar to plain lists, but also multi-dimensional array data structures such as two- and three-dimensional arrays.
For example, a two-dimensional array can be created from a list of lists, and a three-dimensional array from a list of lists of lists, as shown below.
Step6: The number of dimensions and the size of an array can be obtained from the ndim and shape attributes.
Step7: Indexing multi-dimensional arrays
Each element of a multi-dimensional array implemented with the ndarray class can be accessed using commas, as shown below. The dimensions separated by commas are also called axes; think of the x and y axes of a plot.
Step8: Slicing multi-dimensional arrays
To access several elements of a multi-dimensional ndarray at once, use ordinary Python slicing together with commas.
Step9: Array indexing
Another powerful feature of the NumPy ndarray class is array indexing, also called fancy indexing. Although it is called indexing, it really performs the role of a database query.
In array indexing, the square brackets can take not just numbers or slices but another ndarray as the index information. Here we will call this array the index array for convenience. There are two kinds of array indexing: Boolean array indexing and integer array indexing.
First, in Boolean array indexing, the index array consists only of the two values True and False, and it must have the same size as the original ndarray object.
For example, to pick only the odd elements out of the following one-dimensional ndarray, use an index array whose value is True at the positions of odd elements and False at the positions of even elements.
Step10: This can also be written more simply as follows.
Step11: For indices with two or more dimensions, it works as follows
Step12: In integer array indexing, each element of the index array must be an integer index pointing to one element of the original ndarray object.
For example, to pick out only the odd-numbered elements of a one-dimensional array, you can do the following
Step13: The size of an integer index array does not have to match the size of the original array. When the same element is pointed to repeatedly, the index array can even be larger than the original array.
Step14: Array indexing
Boolean array indexing
Selects only the elements where the index is True
The index array must have the same size as the array
Integer (position-based) array indexing
Selects only the elements at the specified positions
The index array may have a different size from the array | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(type(a))
a
Explanation: Introduction to NumPy
NumPy (usually pronounced "num-pie") is a numerical computing package for Python released by Travis Oliphant in 2005. It provides the multi-dimensional array data structure ndarray and is mainly used for linear algebra computations involving vectors and matrices. Internally it builds on the BLAS and LAPACK libraries; because it is implemented in C it can only be used with CPython, not with other Python implementations such as Jython, IronPython, or PyPy. Since NumPy's array operations run loops implemented in C, they are faster than Python loops, and the query-like capabilities of array indexing let complicated computations be written as short, simple code.
NumPy
Numerical computing library for Python
Usable only with CPython
Based on BLAS/LAPACK
Provides the ndarray multi-dimensional array data structure
Fast array operations thanks to internal (C-level) loops
Array indexing features
The ndarray class
The core of NumPy is the class called ndarray, which supports a multi-dimensional array data structure. Let's actually use ndarray to create a one-dimensional array (a vector)
End of explanation
L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(type(L))
L
Explanation: Looking at the representation of the resulting ndarray object, it looks like the same structure as a list, except that it is wrapped in array(). An ordinary list holding the elements 0, 1, 2, 3 is created as follows.
End of explanation
a = np.arange(1000) # arange is simply an array-valued range: it returns an ndarray
%time a2 = a**2
a1 = np.arange(10)
print(a1)
print(2 * a1)
Explanation: However, the ndarray object a and the list object b differ in many ways. A list object is internally implemented like a linked list, so each element may have a different type. An ndarray object, on the other hand, has a contiguous memory layout like a C array, so all elements must have the same type. In exchange for this constraint, element access and loop execution are faster.
Another characteristic of the ndarray class is that it supports vectorized operations, which apply an operation to every element of the array at once. For example, to square every element of an ndarray object, it is enough to square the object itself.
End of explanation
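# Added illustration (not in the original notebook): all elements of an ndarray share a
# single dtype, so mixed inputs are upcast to a common type.
mixed = np.array([1, 2.5, 3])
print(mixed.dtype)  # float64 -- the integer elements were upcast to floats
print(a1.dtype)     # the integer array a1 defined above keeps an integer dtype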
L = range(1000)
%time L2 = [i**2 for i in L]
Explanation: For a list object, a loop has to be used instead, as shown below.
End of explanation
L = range(10)
print(L)
print(2 * L)
Explanation: Measuring the execution time of each snippet with IPython's %time magic command shows that the vectorized ndarray operation runs faster than the list loop. One reason is that memory for the ndarray is allocated in a single block; in addition, vectorized operations use loops implemented inside NumPy, so the iteration itself runs faster.
Therefore, one coding convention that should always be followed to improve Python performance is to avoid Python's own loops (for statements) whenever they can be replaced with vectorized operations on NumPy ndarrays.
Python list
Elements may have different types
Implemented as a linked list
Large memory footprint and slow
No vectorized operations
NumPy ndarray
Elements share a single type
Contiguous memory layout
Memory-efficient and fast to compute
Vectorized operations available
For reference, multiplying an ordinary list object by an integer simply repeats the list, increasing its size by that factor.
End of explanation
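# Added rough illustration (not in the original notebook) of the memory claim above:
# a list stores boxed Python objects, while an ndarray stores one contiguous buffer.
import sys
list_bytes = sys.getsizeof(list(range(1000))) + sum(sys.getsizeof(x) for x in range(1000))
print(list_bytes)               # list object plus its boxed integer elements (rough count)
print(np.arange(1000).nbytes)   # the ndarray's contiguous data buffer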
a = np.array([0, 1, 2])
a
b = np.array([[0, 1, 2], [3, 4, 5]]) # 2 x 3 array
b
a = np.array([0, 0, 0, 1])
a
c = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) # 2 x 2 x 2 array
c
Explanation: Creating multi-dimensional arrays
ndarray is short for N-dimensional Array. As the name suggests, the ndarray class supports not only one-dimensional arrays similar to plain lists, but also multi-dimensional array data structures such as two- and three-dimensional arrays.
For example, a two-dimensional array can be created from a list of lists, and a three-dimensional array from a list of lists of lists, as shown below.
End of explanation
print(a.ndim)
print(a.shape)
a = np.array([[1,2,3 ],[3,4,5]])
a
a.ndim
a.shape
print(b.ndim)
print(b.shape)
print(c.ndim)
print(c.shape)
Explanation: The number of dimensions and the size of an array can be obtained from the ndim and shape attributes.
End of explanation
a = np.array([[0, 1, 2], [3, 4, 5]])
a
a[0,0] # first row, first column
a[0,1] # first row, second column
a[-1, -1] # last row, last column
Explanation: Indexing multi-dimensional arrays
Each element of a multi-dimensional array implemented with the ndarray class can be accessed using commas, as shown below. The dimensions separated by commas are also called axes; think of the x and y axes of a plot.
End of explanation
a = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])
a
a[0, :] # entire first row
a[:, 1] # entire second column
a[1, 1:] # second row, from the second column to the end
Explanation: Slicing multi-dimensional arrays
To access several elements of a multi-dimensional ndarray at once, use ordinary Python slicing together with commas.
End of explanation
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
idx = np.array([True, False, True, False, True, False, True, False, True, False])
a[idx]
Explanation: Array indexing
Another powerful feature of the NumPy ndarray class is array indexing, also called fancy indexing. Although it is called indexing, it really performs the role of a database query.
In array indexing, the square brackets can take not just numbers or slices but another ndarray as the index information. Here we will call this array the index array for convenience. There are two kinds of array indexing: Boolean array indexing and integer array indexing.
First, in Boolean array indexing, the index array consists only of the two values True and False, and it must have the same size as the original ndarray object.
For example, to pick only the odd elements out of the following one-dimensional ndarray, use an index array whose value is True at the positions of odd elements and False at the positions of even elements.
End of explanation
a[a % 2 == 0]
a[a % 2] # note: here the values 0 and 1 act as integer positions, not booleans
Explanation: This can also be written more simply as follows.
End of explanation
a = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
[a % 2 == 0]
a[[a % 2 == 0]]
a[a % 2]
Explanation: For indices with two or more dimensions, it works as follows
End of explanation
a = np.array([0, 1, 2, 3, 4, 10, 6, 7, 8, 9]) * 10
idx = np.array([0, 5, 7, 9, 9]) # these values are positions in the original array
a[idx]
Explanation: In integer array indexing, each element of the index array must be an integer index pointing to one element of the original ndarray object.
For example, to pick out only the odd-numbered elements of a one-dimensional array, you can do the following
End of explanation
a = np.array([0, 1, 2, 3]) * 10
idx = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2])
a[idx]
a[0]
Explanation: The size of an integer index array does not have to match the size of the original array. When the same element is pointed to repeatedly, the index array can even be larger than the original array.
End of explanation
joobun = np.array(["BSY","PJY","PJG","BSJ"])
idx = np.array([0,0,0,1,1,1,2,2,2,3,3,3,0,1,2,3])
joobun[idx]
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
a[(a % 2 == 0) & (a % 3 == 1)]
Explanation: Array indexing
Boolean array indexing
Selects only the elements where the index is True
The index array must have the same size as the array
Integer (position-based) array indexing
Selects only the elements at the specified positions
The index array may have a different size from the array
End of explanation
<END_TASK> |
15,804 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Molecular Dynamics
Step1: Basics of Molecular Dynamics | <ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
Explanation: Molecular Dynamics: Lab 1
In part based on Fortran code from Furio Ercolessi.
End of explanation
%matplotlib inline
import numpy
from matplotlib import pyplot
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
Explanation: Basics of Molecular Dynamics
End of explanation
<END_TASK> |
15,805 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Social Minimal Interaction
There exist social processes that emerge in collective online situations (when two persons are engaged in real-time interactions) that cannot be captured by a traditional offline perspective, which understands the problem in terms of an isolated individual acting as an observer and exploiting its internal cognitive mechanisms to understand other people.
Some authors have pointed out the need to design metrics capturing the "ability for interaction" that subjects have as a constituent element of sensorimotor and social cognition. In these cases, dynamical processes with emergent collective properties are generated, overflowing the individual abilities of each interlocutor.
Over the last years, a classical experiment has been taken as inspiration for building a minimal framework known as the "perceptual crossing paradigm", which has allowed a series of studies on social interaction that focus on the dynamical process of interaction as a constituent element of the emergence of the whole social system.
Previous analyses have been constrained to short-term dynamic responses of the player. In turn, we propose a complex systems approach based on the analysis of long-range correlations and fractal dynamics as a more suitable framework for the analysis of complex social interactions that are deployed along many scales of activity.
1. The perceptual crossing paradigm
From an experimental point of view, a minimal paradigm has been consolidated over recent years. The perceptual crossing paradigm constitutes a simple framework for studying online social interactions and for understanding the mechanisms that support social capabilities. The experiment involves two participants sitting in different rooms and interacting by moving a sensor along a shared virtual line using a computer mouse. In this experimental framework, several experiments can be designed, providing us with a way to study online dyadic interaction and to analyze the perception of someone else's agency in different situations implemented in minimal virtual worlds. Those experiments highlight that emergent coordination processes result in successful detection of agency even though, on an individual level, participants cannot discriminate it. Furthermore, all these results illustrate the importance of online dynamical interaction in the analysis of human social cognition.
2. Experimental framework
Each participant's device consisted of a computer mouse that was moved left and right in search of someone to interact with. The environment consisted of a virtual one-dimensional space 800 pixels long with both ends connected, forming a torus to avoid the singularities induced by the edges. The participant shifted a cursor in this space by moving her computer mouse.
In this blindfold experiment, human participants were placed at computers to interact in pairs within a shared perceptual space, where some opponents were other human participants and some were computerized agents (bots), but participants were unaware of the nature of their opponents. Concretely, participants could play against another human, an 'oscillatory agent', or a 'shadow agent'. The oscillatory agent moved according to a sinusoidal function, while the shadow agent replicated the movements of the player with a certain delay in time and in space.
When opponents (human-human or human-bot) cross their cursors, they receive an auditory stimulation. No image of the cursors or their positions was displayed on the computer screen, so the auditory stimulations were the only environmental perceptions of the virtual space.
2.1. Exercise
The script below reads the data from the experiment just described. We are going to analyze the velocity of the movement for each type of match (human-human, human-oscillatory, and human-shadow)
Step1: We can display the box-plot of the velocity to check if there are differences between groups.
- Try other velocity variables looking for differences between groups, e.g. velocity of opponent, relative velocity
Step2: 3. Fractal analysis
Despite its apparent simplicity, the perceptual crossing paradigm comprises several embedded levels of dynamic interaction, resulting in auto-correlations of the signals over different time scales.
Critical systems typically display temporal and spatial scale invariance in the form of fractals and 1/f noise, reflecting the process of propagation of long-range interactions based on local effects. For the complex systems approach to cognitive science, self-organized criticality is appealing because it allows us to imagine systems that are able to self-regulate coordinated behaviours at different scales in a distributed manner and without a central controller.
We argue that 1/f noise analysis can account not only for the integratedness of the behaviour of an agential system (e.g. the mental, psychological characteristics of human behaviour) but can also characterize the nature of the social interaction process. In our experimental setup we have a broad range of kinds of social interaction
Step3: Now, we display the boxplot of the results to get a statistical overview. For the cases of the derivative of the player's position or the opponent's position, we cannot assure a statistical difference between the distributions of β.
Step4: 4. Interaction measures
We propose that genuine social interaction should be manifested by emerging integratedness in collective variables capturing the dynamics of these interactions. Concretely, we propose the changes in the distance between the two participants as a candidate variable for testing this hypothesis. On the other hand, if social engagement truly arises from interaction dynamics, individual variables such as the changes in the position of the agent or the opponent should not present significant changes in their levels of integratedness, and thus in the exponents obtained from the 1/f analysis.
In order to analyze the interaction between the subjects, we take the time series of the distance between the two players (or the player and the bot agent). We compute the first derivative of the distance to obtain the variations in the distance, i.e. whether the players are approaching or distancing themselves at each moment of time. Then we use a DFA algorithm [Peng et al. (2000)] to compute the correlations in the data series of distance variations.
Step5: The boxplot displays statistical differences.
When the opponent is the oscillatory agent (dashed lines), we find that the values of β in the time series are around 1.5. This means that the interactions are closer to a brown noise structure, meaning that the interaction is more rigid and structured than in the other cases. This makes sense, since the movement of the oscillatory agent is going to constrain the interactions into its cyclic movement structure.
On the other hand, when the opponent is the shadow agent (dash-dot lines), we have the opposite situation, and the interaction dynamics tend to display values of β greater than but close to 0. This means that the history of interaction is more random and uncorrelated. Finally, when the opponent is another human player (solid lines), the exponents of the interaction dynamics are around a value of β close to 1, indicating that they follow a pink noise structure between randomness and coherence. This suggests that the dynamics emerge from a situation where the movement of both players is softly assembled into a coherent coordination.
The 1/f spectrum results show that the changes in the relative position of the player with respect to its opponent reveal that the interaction process is completely different when genuine social interaction is happening than when the player is interacting with an object with trivial (oscillatory) or complex (shadow) patterns of movement. It is interesting that pink noise emerges for a collective variable (the derivative of the distance) only in the case of human-human interaction, suggesting the hypothesis that social interaction is based on the emergence of the soft assembling of the activity of the pair of players. In the cases when this assembling is more rigid or too weak, the emergent system disappears. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy.io
import scipy.signal as signal
from matplotlib import pyplot as plt
from pyeeg import dfa as dfa
def readFilePerceptualCrossing(filename):
data = scipy.io.loadmat(filename)
size = len(data['dataSeries'])
series = [data['dataSeries'][i][0] for i in range(size)]
series = np.array(series)[:,:,0]
series = signal.decimate(series, 10, zero_phase=True)
series = np.diff(series)
oppType = [data['dataOpponentType'][i][0] for i in range(size)]
oppType = np.array(oppType)[:,0]
return [series, oppType]
# Read data
[vel_player , oppTypes] = readFilePerceptualCrossing('dataPC-player.mat')
[vel_opponent, oppTypes] = readFilePerceptualCrossing('dataPC-opponent.mat')
[vel_relative, oppTypes] = readFilePerceptualCrossing('dataPC-distance.mat')
indexOscill = [i for i, x in enumerate(oppTypes) if x=="Oscillatory"]
indexShadow = [i for i, x in enumerate(oppTypes) if x=="Shadow"]
indexHuman = [i for i, x in enumerate(oppTypes) if x=="Human"]
series = vel_player
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [60, 7, 11];
for i,ex in enumerate(indexExamples):
x = series[ex,:]
ax = plt.subplot(1,3,(i+1))
plt.title(oppTypes[ex]+r" ($\mu$={:0.2f}".format(np.mean(x))+r", $\sigma^2$={:0.2f}".format(np.var(x))+")")
ax.set(xlabel="Time", ylabel="Velocity", )
plt.plot(x);
Explanation: Social Minimal Interaction
There exist social processes that emerge in collective online situations (when two persons are engaged in real-time interactions) that cannot be captured by a traditional offline perspective, which understands the problem in terms of an isolated individual acting as an observer and exploiting its internal cognitive mechanisms to understand other people.
Some authors have pointed out the need to design metrics capturing the "ability for interaction" that subjects have as a constituent element of sensorimotor and social cognition. In these cases, dynamical processes with emergent collective properties are generated, overflowing the individual abilities of each interlocutor.
Over the last years, a classical experiment has been taken as inspiration for building a minimal framework known as the "perceptual crossing paradigm", which has allowed a series of studies on social interaction that focus on the dynamical process of interaction as a constituent element of the emergence of the whole social system.
Previous analyses have been constrained to short-term dynamic responses of the player. In turn, we propose a complex systems approach based on the analysis of long-range correlations and fractal dynamics as a more suitable framework for the analysis of complex social interactions that are deployed along many scales of activity.
1. The perceptual crossing paradigm
From an experimental point of view, a minimal paradigm has been consolidated over recent years. The perceptual crossing paradigm constitutes a simple framework for studying online social interactions and for understanding the mechanisms that support social capabilities. The experiment involves two participants sitting in different rooms and interacting by moving a sensor along a shared virtual line using a computer mouse. In this experimental framework, several experiments can be designed, providing us with a way to study online dyadic interaction and to analyze the perception of someone else's agency in different situations implemented in minimal virtual worlds. Those experiments highlight that emergent coordination processes result in successful detection of agency even though, on an individual level, participants cannot discriminate it. Furthermore, all these results illustrate the importance of online dynamical interaction in the analysis of human social cognition.
2. Experimental framework
Each participant's device consisted of a computer mouse that was moved left and right in search of someone to interact with. The environment consisted of a virtual one-dimensional space 800 pixels long with both ends connected, forming a torus to avoid the singularities induced by the edges. The participant shifted a cursor in this space by moving her computer mouse.
In this blindfold experiment, human participants were placed at computers to interact in pairs within a shared perceptual space, where some opponents were other human participants and some were computerized agents (bots), but participants were unaware of the nature of their opponents. Concretely, participants could play against another human, an 'oscillatory agent', or a 'shadow agent'. The oscillatory agent moved according to a sinusoidal function, while the shadow agent replicated the movements of the player with a certain delay in time and in space.
When opponents (human-human or human-bot) cross their cursors, they receive an auditory stimulation. No image of the cursors or their positions was displayed on the computer screen, so the auditory stimulations were the only environmental perceptions of the virtual space.
2.1. Exercise
The script below reads the data from the experiment just described. We are going to analyze the velocity of the movement for each type of match (human-human, human-oscillatory, and human-shadow):
Plot the graph of the velocity of the participant.
Obtain the main statistics of the velocity: mean, variance.
Are there any differences related to the type of opponent?
End of explanation
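# Illustrative sketch only (hypothetical parameters, not used in the analysis below):
# the two artificial opponents described above, moving on the 800-pixel toroidal line.
def oscillatory_position(t, center=400.0, amplitude=150.0, period=4.0):
    # sinusoidal movement around a fixed centre of the virtual line
    return (center + amplitude * np.sin(2 * np.pi * t / period)) % 800
def shadow_position(player_positions, t_index, delay_samples=60, offset_px=100):
    # replays the player's own trajectory with a delay in time and a shift in space
    return (player_positions[max(t_index - delay_samples, 0)] + offset_px) % 800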
# Calculate a velocity statistic (standard deviation) for each series
vel_stats=np.std(vel_player,axis=1) # velocity of the player
#vel_stats=np.std(vel_opponent,axis=1) # velocity of the opponent
#vel_stats=np.std(vel_relative,axis=1) # relative velocity between player
# Plot figure
plt.figure(figsize=(16, 4), dpi=72)
dataBox = [vel_stats[indexOscill], vel_stats[indexShadow], vel_stats[indexHuman]]
plt.boxplot(dataBox);
plt.ylabel("Average velocity")
plt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);
Explanation: We can display the box-plot of the velocity to check if there are differences between groups.
- Try other velocity variables looking for differences between groups, e.g. velocity of opponent, relative velocity
End of explanation
def plot_dfa_perceptual(x, precision, title, drawPlot):
ix = np.arange(np.log2(len(x)/4), 4, -precision)
n = np.round(2**ix)
[_, n, F] = dfa(x, L=n)
n = n/115 # Time (seconds) = samples / sample_frequency
indexes = (n>10**-0.5)&(n<10**0.5) # Time interval for calculating the slope
P = np.polyfit(np.log(n[indexes]),np.log(F[indexes]), 1)
beta = 2*P[0]-1 # beta=2*alpha-1
if drawPlot:
plt.title(title+r" ($\beta$ = {:0.2f})".format(beta))
plt.xlabel('n')
plt.ylabel('F(n)')
plt.loglog(n, F)
plt.loglog(n[indexes], np.power(n[indexes], P[0])*np.exp(P[1]), 'r')
return [beta, n, F]
# Plot figures
series = vel_player
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [60, 7, 11];
for i,ex in enumerate(indexExamples):
x = series[ex,:]
ax = plt.subplot(1,3,(i+1))
plot_dfa_perceptual(x, 0.1, oppTypes[ex], True);
Explanation: 3. Fractal analysis
Despite its apparent simplicity, the perceptual crossing paradigm comprises several embedded levels of dynamic interaction, resulting in auto-correlations of the signals over different time scales.
Critical systems typically display temporal and spatial scale invariance in the form of fractals and 1/f noise, reflecting the process of propagation of long-range interactions based on local effects. For the complex systems approach to cognitive science, self-organized criticality is appealing because it allows us to imagine systems that are able to self-regulate coordinated behaviours at different scales in a distributed manner and without a central controller.
We argue that 1/f noise analysis can account not only for the integratedness of the behaviour of an agential system (e.g. the mental, psychological characteristics of human behaviour) but can also characterize the nature of the social interaction process. In our experimental setup we have a broad range of kinds of social interaction: humans recognizing each other as such, humans interacting with bots with artificial behaviour, humans failing to recognize other humans, bots tricking humans... Can we characterize when genuine social interaction emerges? And if so, where does it lie?
For analyzing fractal exponents in the dynamics of social interaction we use Detrended Fluctuation Analysis (DFA). Since the slope of the fluctuations in a logarithmic plot is not always linear across all scales, we check whether there is any cutoff value at which a transition to a linear relation starts. We do this by searching for negative peaks in the second derivative of F(n). We only do this on the right half of the values of n in the plot, in order to find only the cutoffs at larger scales. Once the cutoff is found, we analyze the slope of the function in the decade below the cutoff value. In the cases where there is no cutoff value (as in Figure 2.c) we analyze the interval $n \in [10^{-0.5},10^{0.5}]$.
3.1. Exercise
Run a DFA analysis to obtain the fractal index β.
- Plot the fluctuation versus timescale graphs for the three opponent types: shadow, oscillatory and human. Are there any statistical differences for each type of opponent?
- Load the data of the movement of the opponent and re-run the analysis. Are there statistical differences now?
End of explanation
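# Added sketch (an assumption, not the original implementation): one way to search for the
# cutoff scale described above, via negative peaks in the second derivative of log F(n).
def find_dfa_cutoff(n, F):
    n, F = np.asarray(n), np.asarray(F)
    order = np.argsort(n)                     # put the scales in increasing order
    n_sorted, logF = n[order], np.log(F[order])
    d2 = np.diff(logF, 2)                     # discrete second derivative of log F(n)
    half = len(d2) // 2                       # inspect only the larger scales
    candidates = np.where(d2[half:] < -np.std(d2))[0]
    if len(candidates) == 0:
        return None                           # no cutoff found: fall back to the default interval
    return n_sorted[half + candidates[0] + 1] # scale at which the transition starts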
# Compute the DFA exponent (beta) for each series
series = vel_player
betas = np.zeros(len(series));
for i in range(len(series)):
[beta,_,_] = plot_dfa_perceptual(series[i,:], 0.5, oppTypes[i], False)
betas[i] = beta
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
dataBox = [betas[indexOscill], betas[indexShadow], betas[indexHuman]]
plt.boxplot(dataBox);
plt.ylabel(r'$\beta$');
plt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);
Explanation: Now, we display the boxplot of the results to get a statistical overview. For the cases of the derivative of the player's position or the opponent's position, we cannot assure a statistical difference between the distributions of β.
End of explanation
# Data
series = vel_relative
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [60, 7, 11];
for i,ex in enumerate(indexExamples):
ax = plt.subplot(1,3,(i+1))
plot_dfa_perceptual(series[ex,:], 0.1, oppTypes[ex], True);
Explanation: 4. Interaction measures
We propose that genuine social interaction should be manifested by emerging integratedness in collective variables capturing the dynamics of these interactions. Concretely, we propose the changes in the distance between the two participants as a candidate variable for testing this hypothesis. On the other hand, if social engagement truly arises from interaction dynamics, individual variables such as the changes in the position of the agent or the opponent should not present significant changes in their levels of integratedness, and thus in the exponents obtained from the 1/f analysis.
In order to analyze the interaction between the subjects, we take the time series of the distance between the two players (or the player and the bot agent). We compute the first derivative of the distance to obtain the variations in the distance, i.e. whether the players are approaching or distancing themselves at each moment of time. Then we use a DFA algorithm [Peng et al. (2000)] to compute the correlations in the data series of distance variations.
End of explanation
# Data
series = vel_relative
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [0];
for i in range(len(series)):
[beta,_,_] = plot_dfa_perceptual(series[i,:], 0.5, oppTypes[i], False)
betas[i] = beta
dataBox = [betas[indexOscill], betas[indexShadow], betas[indexHuman]]
plt.boxplot(dataBox);
plt.ylabel(r'$\beta$');
plt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);
Explanation: The boxplot displays statistical differences.
When the opponent is the oscillatory agent (dashed lines), we find that the values of β in the time series are around 1.5. This means that the interactions are closer to a brown noise structure, meaning that the interaction is more rigid and structured than in the other cases. This makes sense, since the movement of the oscillatory agent is going to constrain the interactions into its cyclic movement structure.
On the other hand, when the opponent is the shadow agent (dash-dot lines), we have the opposite situation, and the interaction dynamics tend to display values of β greater than but close to 0. This means that the history of interaction is more random and uncorrelated. Finally, when the opponent is another human player (solid lines), the exponents of the interaction dynamics are around a value of β close to 1, indicating that they follow a pink noise structure between randomness and coherence. This suggests that the dynamics emerge from a situation where the movement of both players is softly assembled into a coherent coordination.
The 1/f spectrum results show that the changes in the relative position of the player with respect to its opponent reveal that the interaction process is completely different when genuine social interaction is happening than when the player is interacting with an object with trivial (oscillatory) or complex (shadow) patterns of movement. It is interesting that pink noise emerges for a collective variable (the derivative of the distance) only in the case of human-human interaction, suggesting the hypothesis that social interaction is based on the emergence of the soft assembling of the activity of the pair of players. In the cases when this assembling is more rigid or too weak, the emergent system disappears.
End of explanation
<END_TASK> |
15,806 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
(...in regression analysis, a dummy variable (also known as an indicator variable, design variable, Boolean indicator, categorical variable, binary variable, or qualitative variable) is one that takes the value 0 or 1 to indicate the absence or presence of some categorical)
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
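For reference, a minimal sketch of this standardization step might look like the code below. It assumes data is the prepared feature table built from rides (after adding the dummy variables) and that these are the continuous columns; the real column names come from the dataset itself.
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']  # assumed continuous columns
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]              # keep the factors so predictions can be un-scaled later
    data.loc[:, each] = (data[each] - mean) / std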
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
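One possible way to carve out these sets is sketched below. This is a hedged example: data is assumed to be the prepared hourly feature table, target_fields is an assumed list of prediction targets, and the exact day counts are illustrative.
target_fields = ['cnt', 'casual', 'registered']      # assumed prediction targets
test_data = data[-21*24:]                            # last 21 days of hourly records held out for testing
data = data[:-21*24]
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
train_features, train_targets = features[:-60*24], targets[:-60*24]   # earlier history for training
val_features, val_targets = features[-60*24:], targets[-60*24:]       # most recent 60 days for validation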
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
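To make the forward pass concrete, here is a minimal sketch for a network with one sigmoid hidden layer and a linear output node. It is illustrative only and uses assumed weight-matrix names rather than the project's own class structure.
def forward_pass(X, weights_input_to_hidden, weights_hidden_to_output):
    sigmoid = lambda x: 1 / (1 + np.exp(-x))
    hidden_inputs = np.dot(X, weights_input_to_hidden)          # signals into the hidden layer
    hidden_outputs = sigmoid(hidden_inputs)                     # sigmoid activation
    final_inputs = np.dot(hidden_outputs, weights_hidden_to_output)
    return final_inputs                                         # linear output, suitable for regression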
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#commands that start with "%" are called "magic commands" and are used in Jupyter
%config InlineBackend.figure_format = 'retina'
import numpy as np #a library that helps to manage arrays: www.numpy.org/
import pandas as pd #a library to analyze and show data. http://pandas.pydata.org/pandas-docs/stable/10min.html
import matplotlib.pyplot as plt #a library so we can create graphs easily.
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv' #this is the data in a csv file (just data separated with commas)
rides = pd.read_csv(data_path) #here we open the data and name it "rides" instead
rides.head() #we ask the computer to show a little bit of the initial data to check it out
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
#we plot the data from the beginning to position 24*10 (the first 10 days), labeling X and Y
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] #we create a list
for each in dummy_fields: #then we go through each element in that list
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) #we create a variable called "dummies" to turn these columns into 0/1 indicators. Here is a video about how to use pd.get_dummies to create these variables: https://www.youtube.com/watch?v=0s_1IsROgDc
rides = pd.concat([rides, dummies], axis=1) #then we update "rides" by adding the new columns. We concat the results column-wise (axis=1) because pandas versions below 0.15.0 can't run get_dummies on a whole DataFrame at once; newer versions can. For more info: https://stackoverflow.com/questions/24109779/running-get-dummies-on-several-dataframe-columns
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr'] #we create a list of fields we want to drop
data = rides.drop(fields_to_drop, axis=1) # we drop the columns in this list
data.head() #we show the first few rows of the resulting dataframe
Explanation: Dummy variables
(...in regression analysis, a dummy variable (also known as an indicator variable, design variable, Boolean indicator, categorical variable, binary variable, or qualitative variable) is one that takes the value 0 or 1 to indicate the absence or presence of some categorical)
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
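As a quick, hypothetical illustration of what get_dummies produces (the toy frame below is not part of the project data), each distinct category becomes its own 0/1 column:
toy = pd.DataFrame({'season': [1, 2, 3, 1]})
print(pd.get_dummies(toy['season'], prefix='season'))
# Expected columns: season_1, season_2, season_3, with a 1 marking each row's season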
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
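Because each column's mean and standard deviation are kept in scaled_features, the standardization can be undone later; a minimal sketch of the inverse transform (the same idea is used when plotting predictions further below):
mean, std = scaled_features['cnt']
original_counts = data['cnt'][:5] * std + mean  # scaled value * std + mean recovers the original ride counts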
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
( self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
# sigmoid: Activation function is the sigmoid function
self.activation_function = (lambda x: 1/(1 + np.exp(-x)))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T # shape [feature_diemension, 1]
targets = np.array(targets_list, ndmin=2).T
# Forward pass
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# y = x
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
### Backward pass###
# Output layer error is the difference between desired target and actual output.
output_errors = targets - final_outputs
# Backpropagated error
# errors propagated to the hidden layer
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output)*(hidden_outputs*(1-hidden_outputs)).T
# Update the weights
# update hidden-to-output weights with gradient descent step
self.weights_hidden_to_output += output_errors * hidden_outputs.T * self.lr
# update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += (inputs * hidden_errors * self.lr).T
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
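For reference, a small sketch (not part of the graded code) of the two derivatives the backward pass above relies on: the output activation f(x) = x has derivative 1, so the output error term is simply (target - prediction), while the hidden layer uses the sigmoid derivative.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), the same term as hidden_outputs * (1 - hidden_outputs) above
    return sigmoid(x) * (1 - sigmoid(x))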
import sys
### Set the hyperparameters here ###
epochs = 100
learning_rate = 0.1
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
def runTest(self):
# Default test method so that TestMethods() can be instantiated directly below;
# the actual checks live in the test_* methods discovered by the test loader.
pass
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
<END_TASK> |
15,807 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text = [[source_vocab_to_int[y] for y in x] for x in
[sentence.split() for sentence in source_text.split('\n')]]
target_id_text = [[target_vocab_to_int[y] for y in x] for x in
[sentence.split() for sentence in target_text.split('\n')]]
for l in target_id_text:
l.append(target_vocab_to_int['<EOS>'])
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
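A quick usage sketch with tiny, hypothetical vocabularies (these dictionaries are illustrative only, not the real lookups built by the helper module):
src_vti = {'new': 0, 'jersey': 1, 'is': 2, 'cold': 3}
tgt_vti = {'<EOS>': 0, 'new': 1, 'jersey': 2, 'est': 3, 'froid': 4}
src_ids, tgt_ids = text_to_ids('new jersey is cold', 'new jersey est froid', src_vti, tgt_vti)
# src_ids -> [[0, 1, 2, 3]]; tgt_ids -> [[1, 2, 3, 4, 0]] (note the appended <EOS> id)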
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoding_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoding_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
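A minimal sketch of what this does to a toy batch (illustrative only; it assumes a throwaway TensorFlow session and a fake vocabulary with <GO> mapped to 1): the last id of each sequence is dropped and the <GO> id is prepended.
toy_targets = tf.constant([[10, 11, 12], [20, 21, 22]], dtype=tf.int32)
toy_vocab = {'<GO>': 1}
demo = process_decoding_input(toy_targets, toy_vocab, 2)
with tf.Session() as demo_sess:
    print(demo_sess.run(demo))  # [[ 1 10 11] [ 1 20 21]]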
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
decoder = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
prediction, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder, dec_embed_input,
sequence_length, scope=decoding_scope)
logits = output_fn(prediction)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
decoder = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings,
start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder, scope=decoding_scope)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
with tf.variable_scope("decoding") as decoding_scope:
dec_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([dec_cell] * num_layers)
_, dec_state = tf.nn.dynamic_rnn(dec_cell, dec_embed_input, dtype=tf.float32)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
t_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
i_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], sequence_length, vocab_size,
decoding_scope, output_fn, keep_prob)
return t_logits, i_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
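The variable-sharing note above is the key detail; a minimal, standalone sketch of the pattern (illustrative only, using a dummy scope name): the first scope creates a variable and the second, opened with reuse=True, returns the very same variable.
with tf.variable_scope("demo_scope"):
    v_train = tf.get_variable("w", shape=[2, 2])
with tf.variable_scope("demo_scope", reuse=True):
    v_infer = tf.get_variable("w")
assert v_train is v_infer  # both names point at one underlying variable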
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
rnn_inputs = tf.contrib.layers.embed_sequence(input_data, vocab_size=source_vocab_size,
embed_dim=enc_embedding_size)
encoder_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
t_logits, i_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return t_logits, i_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 384
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>']
for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
<END_TASK> |
15,808 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Overview
This notebook provides code to populate a dashboard that compares audience behavior based on the GA360 BQ Export. This is particularly useful for customers interested in understanding behavior prior to an observed event, such as for behavior-based segmentation, site optimization, or as inputs for a predictive model.
In addition to a sound GA360 tagging implementation, you will need access to the source dataset as a viewer, and access to run BQ jobs on a GCP project.
Dataset
This notebook is meant to be a scalable solution that works with any GA360 BQ Export. This particular example utilizes the GA360 data from the Google Merchandise Store, publicly available here. Due to the limited nature of the Merchandise Store Data, not all aspects of this notebook will produce results; try it on your own (corporate) data!
Objective
The resulting dashboard provides a quick solution to visualize differences in audience behavior based on the Google Analytics 360 BigQuery Export. Without customization, this defaults to comparing the general population vs. the behavior of a particular audience of interest, e.g. users who make a purchase online, or who purchase above a certain dollar amount. These insights can be used in a variety of ways, which include (but are not limited to)
Step1: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step2: Create a BigQuery dataset
If you already have a dataset ready to save tables to, skip this step.
Set the name of your BigQuery dataset below. Dataset IDs
must be alphanumeric (plus underscores) and must be at most 1024 characters
long.
Step3: Validate that your dataset created successfully (this will throw an error if there is no dataset)
Step4: Import libraries and define parameters
PROJECT_ID_BILLING is where querying costs will be billed to
GA_* fields are where the GA360 BQ Export is stored
START_DATE and END_DATE note the date range for analysis
UTC_ADJUSTMENT adjusts for timezone for the appropriate fields*
*Note that the GA360 BQ Export has all timestamps in POSIX time
Step7: Define target audience and filters
user_label_query is used to segment the GA360 BQ Export between your target audience and general population.
query_filter is used to further define all data that is aggregated
Step10: Query custom dimensions to isolate fields with fewer unique values, which will be visualized
Start with session-level custom dimensions
Step13: Repeat for hit level custom dimensions
Step18: Programmatically write a query that pulls distinct users, by class, for features and every custom dimension (session & hit level).
If you want to view the query, set View_Query to True in the cell below.
Step19: Save results to BQ. As-is, only writes if there is no table that already exists.
Step20: Visualize results with a pre-built Data Studio dashboard | <ASSISTANT_TASK:>
Python Code:
PROJECT_ID_BILLING = "" # Set the project ID
! gcloud config set project $PROJECT_ID_BILLING
Explanation: Overview
This notebook provides code to populate a dashboard that compares audience behavior based on the GA360 BQ Export. This is particularly useful for customers interested in understanding behavior prior to an observed event, such as for behavior-based segmentation, site optimization, or as inputs for a predictive model.
In addition to a sound GA360 tagging implementation, you will need access to the source dataset as a viewer, and access to run BQ jobs on a GCP project.
Dataset
This notebook is meant to be a scalable solution that works with any GA360 BQ Export. This particular example utilizes the GA360 data from the Google Merchandise Store, publicly available here. Due to the limited nature of the Merchandise Store Data, not all aspects of this notebook will produce results; try it on your own (corporate) data!
Objective
The resulting dashboard provides a quick solution to visualize differences in audience behavior based on the Google Analytics 360 BigQuery Export. Without customization, this defaults to comparing the general population vs. the behavior of a particular audience of interest, e.g. users who make a purchase online, or who purchase above a certain dollar amount. These insights can be used in a variety of ways, which include (but are not limited to):
- Provide guidance to create rules-based audiences
- Recommend potential ways to optimize check-out flow or site design
- Highlight potential features for a propensity model
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
BigQuery
Learn about BigQuery pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Details
The insights and analysis offered by GA360 are numerous, and this notebook does not intend to cover all of them. Here is a list of features included in this example:
- Traffic source (trafficSource.medium)
- DMA
- Time visited by daypart
- Time visited by day
- Device category (device.deviceCategory)
- Page path level 1 (hits.page.pagePathLevel1)
- Ecommerce action (hits.eCommerceAction.action_type)
- Product engagement (hits.product.v2ProductCategory)
- Browser (device.browser)
- total sessions
- page views
- average time per page
- average session depth (page views per session)
- distinct DMAs (for users on mobile, signifies if they are traveling or not)
- session & hit level custom dimensions
Notes on data output:
Continuous variables generate histograms and cut off the top 0.5% of data
Custom dimensions will only populate if they are set up on the GA360 implementation, and are treated as categorical features
As-is, only custom dimension indices 50 or lower will be visualized; you will need to edit the dashboard to look at the distribution of indices above 50. All custom dimensions will be evaluated by the query, so will be present in the underlying dataset.
Set up your GCP project
If you are not already a GCP customer with GA360 and its BQ Export enabled, follow the steps below. If you want to simply implement this on you already-existing dataset, skip to "Import libraries and define parameters".
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the BigQuery API.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
DATASET_NAME = "" # Name the dataset you'd like to save the output to
LOCATION = "US"
! bq mk --location=$LOCATION --dataset $PROJECT_ID_BILLING:$DATASET_NAME
Explanation: Create a BigQuery dataset
If you already have a dataset ready to save tables to, skip this step.
Set the name of your BigQuery dataset below. Dataset IDs
must be alphanumeric (plus underscores) and must be at most 1024 characters
long.
End of explanation
! bq show --format=prettyjson $PROJECT_ID_BILLING:$DATASET_NAME
Explanation: Validate that your dataset created successfully (this will throw an error if there is no dataset)
End of explanation
# Import libraries
import numpy as np
import pandas as pd
# Colab tools & bigquery library
from google.cloud import bigquery
bigquery.USE_LEGACY_SQL = False
pd.options.display.float_format = '{:.5f}'.format
GA_PROJECT_ID = "bigquery-public-data"
GA_DATASET_ID = "google_analytics_sample"
GA_TABLE_ID = "ga_sessions_*"
START_DATE = "20170501" # Format is YYYYMMDD, for GA360 BQ Export
END_DATE = "20170801"
UTC_ADJUSTMENT = -5
client = bigquery.Client(project=PROJECT_ID_BILLING)
Explanation: Import libraries and define parameters
PROJECT_ID_BILLING is where querying costs will be billed to
GA_* fields are where the GA360 BQ Export is stored
START_DATE and END_DATE note the date range for analysis
UTC_ADJUSTMENT adjusts for timezone for the appropriate fields*
*Note that the GA360 BQ Export has all timestamps in POSIX time
End of explanation
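A quick illustration of the POSIX-seconds note above (the timestamp value is hypothetical): shifting a UTC visitStartTime by UTC_ADJUSTMENT hours in Python mirrors the TIMESTAMP_ADD shift used in the SQL further below.
from datetime import datetime, timedelta
example_visit_start = 1496332800  # hypothetical visitStartTime, in POSIX seconds (UTC)
local_time = datetime.utcfromtimestamp(example_visit_start) + timedelta(hours=UTC_ADJUSTMENT)
print(local_time)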
# Define the query to identify your target audience with label
# (1 for target, 0 for general population)
user_label_query = f
SELECT
fullvisitorId,
max(case when totals.transactions = 1 then 1 else 0 end) as label,
min(case when totals.transactions = 1 then visitStartTime end) as event_session
FROM
`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`
WHERE
_TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
AND geoNetwork.Country="United States"
GROUP BY
fullvisitorId
# query_filter -- Change this if you want to adjust WHERE clause in
# the query. This will be inserted after all clauses selecting from
# the GA360 BQ Export.
query_filter = f
WHERE (
_TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
AND geoNetwork.Country="United States"
AND (a.visitStartTime < IFNULL(event_session, 0)
or event_session is null) )
Explanation: Define target audience and filters
user_label_query is used to segment the GA360 BQ Export between your target audience and general population.
query_filter is used to further define all data that is aggregated:
Removes behavior during or after the session in which the target event occurs
Subset to only the United States
Specify start and end date for analysis
End of explanation
# Set cut off for session-level custom dimensions,
# then query BQ Export to pull relevant indices
sessions_cut_off = 20 # Max number of distinct values in custom dimensions
# By default, assume there will be custom dimensions at the session and hit level.
# Further down, set these to False if no appropriate CDs are found.
query_session_cd = True
# Unnest session-level custom dimensions a count values for each index
sessions_cd = f
SELECT index, count(distinct value) as dist_values
FROM (SELECT cd.index, cd.value, count(*) as sessions
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`,
UNNEST(customDimensions) as cd
WHERE _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
GROUP BY 1, 2
ORDER BY 1, 2)
GROUP BY index
try:
# Run a Standard SQL query with the project set explicitly
sessions_custom_dimensions = client.query(sessions_cd,
project=PROJECT_ID_BILLING).to_dataframe()
# Create list of session-level CDs to visualize
session_index_list = sessions_custom_dimensions.loc[
sessions_custom_dimensions.dist_values <= sessions_cut_off, 'index'].values
session_index_exclude = sessions_custom_dimensions.loc[
sessions_custom_dimensions.dist_values > sessions_cut_off, 'index'].values
if len(session_index_list) == 0:
query_session_cd = False
print("No session-level indices found.")
else:
print(fPrinting visualizations for the following session-level indices: \
{session_index_list};\n
Excluded the following custom dimension indices because they had more than \
{sessions_cut_off} possible values: {session_index_exclude}\n \n)
except:
query_session_cd = False
Explanation: Query custom dimensions to isolate fields with fewer unique values, which will be visualized
Start with session-level custom dimensions:
End of explanation
# Set cut off for hit-level custom dimensions,
# then query BQ Export to pull relevant indices
hit_cut_off = 20
# By default, assume there will be custom dimensions at the session and hit level.
# Further down, set these to False if no appropriate CDs are found.
query_hit_cd = True
hits_cd = f
SELECT index, count(distinct value) as dist_values
FROM (
SELECT cd.index, cd.value, count(*) as hits
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`,
UNNEST(hits) as ht,
UNNEST(ht.customDimensions) as cd
WHERE _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'
GROUP BY 1, 2
ORDER BY 1, 2 )
GROUP BY index
try:
hits_custom_dimensions = client.query(hits_cd, project=PROJECT_ID_BILLING).to_dataframe()
# Create list of hit-level CDs to visualize
hit_index_list = hits_custom_dimensions.loc[hits_custom_dimensions.dist_values <= hit_cut_off, 'index'].values
hit_index_exclude = hits_custom_dimensions.loc[hits_custom_dimensions.dist_values > hit_cut_off, 'index'].values
if len(hit_index_list) == 0:
query_hit_cd = False
print("No hit-level indices found.")
else:
print(fPrinting visualizations for the following hit-level cds: \
{hit_index_list};\n
Excluded the following custom dimension indices because they had more than \
{hit_cut_off} possible values: {hit_index_exclude}\n \n)
except:
print("No hit-level custom dimensions found!")
query_hit_cd = False
Explanation: Repeat for hit level custom dimensions:
End of explanation
# Write a big query that aggregates data to be used as dashboard input
# Set to True if you want to print the final query after it's generated
View_Query = False
final_query = f
WITH users_labeled as (
{user_label_query}
),
trafficSource_medium AS (
SELECT count(distinct CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
count(distinct CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
trafficSource_medium AS trafficSource_medium,
'trafficSource_medium' AS type
FROM (
SELECT a.fullvisitorId,
trafficSource.medium AS trafficSource_medium,
label
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY trafficSource_medium),
dma_staging AS (
SELECT a.fullvisitorId,
geoNetwork.metro AS metro,
label,
COUNT(*) AS visits
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3),
--- Finds the dma with the most visits for each user. If it's a tie, arbitrarily picks one.
visitor_dma AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
metro AS dma,
'dma' AS type
FROM (
SELECT fullvisitorId,
metro,
label,
ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY visits DESC) AS row_num
FROM dma_staging)
WHERE row_num = 1
GROUP BY metro, type),
distinct_dma AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
distinct_dma AS distinct_dma,
'distinct_dma' AS type
FROM (
SELECT COUNT(DISTINCT metro) as distinct_dma,
fullvisitorId,
label
FROM dma_staging
GROUP BY fullvisitorId, label)
GROUP BY distinct_dma),
-- Finds the daypart with the most pageviews for each user; adjusts for timezones and daylight savings time, loosely
visitor_common_daypart AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
'day_part' AS type,
daypart
FROM (
SELECT fullvisitorId, daypart, label, ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY pageviews DESC) AS row_num
FROM (
SELECT
fullvisitorId,
label,
CASE WHEN hour_of_day >= 1 AND hour_of_day < 6 THEN '1_night_1_6'
WHEN hour_of_day >= 6 AND hour_of_day < 11 THEN '2_morning_6_11'
WHEN hour_of_day >= 11 AND hour_of_day < 14 THEN '3_lunch_11_14'
WHEN hour_of_day >= 14 AND hour_of_day < 17 THEN '4_afternoon_14_17'
WHEN hour_of_day >= 17 AND hour_of_day < 19 THEN '5_dinner_17_19'
WHEN hour_of_day >= 19 AND hour_of_day < 22 THEN '6_evening_19_23'
WHEN hour_of_day >= 22 OR hour_of_day = 0 THEN '7_latenight_23_1'
END AS daypart, SUM(pageviews) AS pageviews
FROM (
SELECT a.fullvisitorId, b.label, EXTRACT(HOUR
FROM TIMESTAMP_ADD(TIMESTAMP_SECONDS(visitStartTime), INTERVAL {UTC_ADJUSTMENT} HOUR)) AS hour_of_day,
totals.pageviews AS pageviews
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
)
GROUP BY 1,2,3) )
WHERE row_num = 1
GROUP BY type, daypart),
-- Finds the most common day based on pageviews
visitor_common_day AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
'DoW' AS type,
case when day = 1 then "1_Sunday"
when day = 2 then "2_Monday"
when day = 3 then "3_Tuesday"
when day = 4 then "4_Wednesday"
when day = 5 then "5_Thursday"
when day = 6 then "6_Friday"
when day = 7 then "7_Saturday" end as day
FROM (
SELECT fullvisitorId, day, label, ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY pages_viewed DESC) AS row_num
FROM (
SELECT a.fullvisitorId,
EXTRACT(DAYOFWEEK FROM PARSE_DATE('%Y%m%d',date)) AS day,
SUM(totals.pageviews) AS pages_viewed,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,4 ) )
WHERE row_num = 1
GROUP BY type, day),
technology AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
deviceCategory AS deviceCategory,
browser AS browser,
'technology' AS type
FROM (
SELECT fullvisitorId,
deviceCategory,
browser,
label,
ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY visits DESC) AS row_num
FROM (
SELECT a.fullvisitorId,
device.deviceCategory AS deviceCategory,
CASE WHEN device.browser LIKE 'Chrome%' THEN device.browser WHEN device.browser LIKE 'Safari%' THEN device.browser ELSE 'Other browser' END AS browser,
b.label,
COUNT(*) AS visits
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3,4))
WHERE row_num = 1
GROUP BY deviceCategory,browser,type),
PPL1 AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
PPL1 AS PPL1,
'PPL1' AS type
FROM (
SELECT a.fullvisitorId,
hits.page.pagePathLevel1 AS PPL1,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY PPL1),
ecomm_action AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
CASE WHEN ecomm_action = '1' THEN '1_Click product list'
WHEN ecomm_action = '2' THEN '2_Product detail view'
WHEN ecomm_action = '3' THEN '3_Add to cart'
WHEN ecomm_action = '4' THEN '4_Remove from cart'
WHEN ecomm_action = '5' THEN '5_Start checkout'
WHEN ecomm_action = '6' THEN '6_Checkout complete'
WHEN ecomm_action = '7' THEN '7_Refund'
WHEN ecomm_action = '8' THEN '8_Checkout options'
ELSE '9_No_ecomm_action'
END AS ecomm_action,
'ecomm_action' AS type
FROM (
SELECT a.fullvisitorId,
hits.eCommerceAction.action_type AS ecomm_action,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY ecomm_action),
prod_cat AS (
SELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,
COUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,
prod_cat AS prod_cat,
'prod_cat' AS type
FROM (
SELECT a.fullvisitorId,
prod.v2ProductCategory AS prod_cat,
b.label
FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
unnest (hits) as hits,
UNNEST (hits.product) AS prod
LEFT JOIN users_labeled b USING(fullvisitorId)
{query_filter}
GROUP BY 1,2,3)
GROUP BY prod_cat),
agg_metrics AS (
SELECT fullvisitorId,
CASE WHEN label IS NULL then 0 else label end as label,
count(distinct visitId) as total_sessions,
sum(totals.pageviews) as pageviews,
count(totals.bounces)/count(distinct VisitID) as bounce_rate,
sum(totals.timeonSite)/sum(totals.pageviews) as time_per_page,
sum(totals.pageviews) / count(distinct VisitID) as avg_session_depth
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a
LEFT JOIN users_labeled b
USING (fullvisitorId)
{query_filter}
GROUP BY 1,2
),
Agg_sessions AS (
SELECT fullvisitorId, label, total_sessions
FROM agg_metrics),
Agg_pageviews AS (
SELECT fullvisitorId, label, pageviews
FROM agg_metrics),
Agg_time_per_page AS (
SELECT fullvisitorId, label, time_per_page
FROM agg_metrics),
Agg_avg_session_depth AS (
SELECT fullvisitorId, label, avg_session_depth
FROM agg_metrics),
hist_sessions AS (
SELECT
ROUND(min+max/2) as avg_sessions,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_sessions' as type
FROM Agg_sessions
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(total_sessions) min, MAX(total_sessions) max
FROM Agg_sessions
JOIN (select APPROX_QUANTILES(total_sessions, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_sessions) b
ON agg_sessions.total_sessions <= b.trimmer
)
), UNNEST(i) i) stats_sessions
ON Agg_sessions.total_sessions >= stats_sessions.min
AND Agg_sessions.total_sessions < stats_sessions.max
GROUP BY min, max
ORDER BY min),
hist_pageviews AS (
SELECT
ROUND(min+max/2) as avg_pageviews,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_pageviews' as type
FROM Agg_pageviews
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(pageviews) min, MAX(pageviews) max
FROM Agg_pageviews
JOIN (select APPROX_QUANTILES(pageviews, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_pageviews) b
ON agg_pageviews.pageviews <= b.trimmer
)
), UNNEST(i) i) stats_pageviews
ON Agg_pageviews.pageviews >= stats_pageviews.min
AND Agg_pageviews.pageviews < stats_pageviews.max
GROUP BY min, max
ORDER BY min),
hist_time_per_page AS (
SELECT
ROUND(min+max/2) as avg_time_per_page,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_time_per_page' as type
FROM Agg_time_per_page
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(time_per_page) min, MAX(time_per_page) max
FROM Agg_time_per_page
JOIN (select APPROX_QUANTILES(time_per_page, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_time_per_page) b
ON agg_time_per_page.time_per_page <= b.trimmer
)
), UNNEST(i) i) stats_time_per_page
ON Agg_time_per_page.time_per_page >= stats_time_per_page.min
AND Agg_time_per_page.time_per_page < stats_time_per_page.max
GROUP BY min, max
ORDER BY min),
hist_avg_session_depth AS (
SELECT
ROUND(min+max/2) as avg_avg_session_depth,
COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,
COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,
'stats_avg_session_depth' as type
FROM Agg_avg_session_depth
JOIN (SELECT min+step*i min, min+step*(i+1)max
FROM (
SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i
FROM (
SELECT MIN(avg_session_depth) min, MAX(avg_session_depth) max
FROM Agg_avg_session_depth
JOIN (select APPROX_QUANTILES(avg_session_depth, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_avg_session_depth) b
ON agg_avg_session_depth.avg_session_depth <= b.trimmer
)
), UNNEST(i) i) stats_avg_session_depth
ON Agg_avg_session_depth.avg_session_depth >= stats_avg_session_depth.min
AND Agg_avg_session_depth.avg_session_depth < stats_avg_session_depth.max
GROUP BY min, max
ORDER BY min)
"""
if query_session_cd:
session_cd_query = ",\nsession_cds AS (SELECT * FROM ("
counter = len(session_index_list)
start = 1
for ind in session_index_list:
ind_num = ind
session_custom_dimension_query_base = f"""SELECT
"session_dim_{ind_num}" as type,
count(distinct case when label = 1 then a.fullvisitorId end) as count_1_users,
count(distinct case when label = 0 then a.fullvisitorId end) as count_0_users,
cd.value as session_dim_{ind_num}_value
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
UNNEST(customDimensions) as cd
LEFT JOIN users_labeled b
ON a.fullvisitorId = b.fullvisitorId
{query_filter}
AND cd.index = {ind_num}
GROUP BY type, cd.value)"""
query_add = session_custom_dimension_query_base
session_cd_query += query_add
if start > 1:
session_cd_query += "USING (type, count_1_users, count_0_users)"
if start < counter:
session_cd_query += "\nFULL OUTER JOIN\n("
start+=1
session_cd_query+=")\n"
final_query += session_cd_query
# Query hits
if query_hit_cd:
hit_cd_query = ",\nhits_cds AS (SELECT * FROM ("
counter = len(hit_index_list)
start = 1
for ind in hit_index_list:
ind_num = ind
hit_cust_d_query_base = f"""SELECT
"hit_dim_{ind_num}" as type,
count(distinct case when label = 1 then a.fullvisitorId end) as count_1_users,
count(distinct case when label = 0 then a.fullvisitorId end) as count_0_users,
cd.value as hit_dim_{ind_num}_value
FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,
UNNEST(hits) as ht,
UNNEST(ht.customDimensions) as cd
LEFT JOIN users_labeled b
ON a.fullvisitorId = b.fullvisitorId
{query_filter}
AND cd.index = {ind_num}
GROUP BY type, cd.value)"""
query_add = hit_cust_d_query_base
hit_cd_query += query_add
if start > 1:
hit_cd_query += "USING (type, count_1_users, count_0_users)"
if start < counter:
hit_cd_query += "\nFULL OUTER JOIN\n("
start+=1
hit_cd_query+=")\n"
final_query += hit_cd_query
final_query += """SELECT *, count_1_users/(count_1_users+count_0_users) as conv_rate FROM trafficSource_medium
FULL OUTER JOIN visitor_dma USING (type,count_1_users,count_0_users)
FULL OUTER JOIN distinct_dma USING (type,count_1_users,count_0_users)
FULL OUTER JOIN visitor_common_daypart USING (type,count_1_users,count_0_users)
FULL OUTER JOIN visitor_common_day USING (type,count_1_users,count_0_users)
FULL OUTER JOIN technology USING (type,count_1_users,count_0_users)
FULL OUTER JOIN PPL1 USING (type,count_1_users,count_0_users)
FULL OUTER JOIN ecomm_action USING (type,count_1_users,count_0_users)
FULL OUTER JOIN prod_cat USING (type,count_1_users,count_0_users)
FULL OUTER JOIN hist_sessions USING (type, count_1_users, count_0_users)
FULL OUTER JOIN hist_pageviews USING (type, count_1_users, count_0_users)
FULL OUTER JOIN hist_time_per_page USING (type, count_1_users, count_0_users)
FULL OUTER JOIN hist_avg_session_depth USING (type, count_1_users, count_0_users)
"""
if query_hit_cd:
final_query+="FULL OUTER JOIN hits_cds USING (type,count_1_users,count_0_users)"
if query_session_cd:
final_query+="FULL OUTER JOIN session_cds USING (type,count_1_users,count_0_users)"
if (View_Query):
print(final_query)
Explanation: Programmatically write a query that pulls distinct users, by class, for features and every custom dimension (session & hit level).
If you want to view the query, set View_Query to True in the cell below.
End of explanation
# Set the destination for your query results.
# This will be your data source for the Data Studio dashboard.
DESTINATION = f"{PROJECT_ID_BILLING}.{DATASET_NAME}.ga360_gazer_output"
job_config = bigquery.QueryJobConfig(destination=DESTINATION,
write_disposition="WRITE_EMPTY")
# Start the query, passing in the extra configuration.
query_job = client.query(final_query, job_config=job_config)
query_job.result()
print("Query results loaded to the table {}".format(DESTINATION))
Explanation: Save results to BQ. As-is, only writes if there is no table that already exists.
End of explanation
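If the destination table already exists, the save cell above will fail because of WRITE_EMPTY. A hedged variant (assuming the same google-cloud-bigquery client and DESTINATION defined above) that overwrites instead:
# Assumption: overwrite any existing table rather than failing (WRITE_TRUNCATE).
job_config = bigquery.QueryJobConfig(destination=DESTINATION,
                                     write_disposition="WRITE_TRUNCATE")
query_job = client.query(final_query, job_config=job_config)
query_job.result()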
# Delete the dataset and all contents within
! bq rm -r $PROJECT_ID_BILLING:$DATASET_NAME
Explanation: Visualize results with a pre-built Data Studio dashboard:
Open the templated dashboard here
Make a copy with the button in the top menu bar. When making a copy:
Accept the terms and conditions, if it's your first time using Data Studio
Create a new data source
Select BigQuery (you will need to grant permissions again)
Under Project, select your project specified by PROJECT_ID_BILLING
Under Dataset, select the dataset you specified as DATASET_NAME
Under Table, select "ga360_gazer_output" (unless you changed the name)
Click "Connect"
You will see a list of fields - click "ADD TO REPORT" on the top right
You will be prompted to make a copy of the original report with your new data source - click "Copy Report"
Page through the pages to view insights
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
End of explanation
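A minimal sketch of that project-level cleanup, assuming the gcloud CLI is installed and authenticated (this irreversibly deletes everything in the project, so double-check the id first):
# Assumption: PROJECT_ID_BILLING is the throwaway project used only for this tutorial.
! gcloud projects delete $PROJECT_ID_BILLING --quiet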
<END_TASK> |
15,809 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
When it comes to consuming real-time content, Twitter is the place to be; Be it sending out 'Tweets' real-time, or discovering latest online 'Trends' anywhere, or ability to begin a 'conversation' with anyone, Twitter does it all. In fact, Twitter Management wrote this in their letter to shareholders last year.
Weโre focused now on what Twitter does best
Step1: 2. Identify influential friends using 'Page Rank' formulation
From the adjacency matrix generated above, we can construct a column-stochastic matrix (also called a transition matrix) such that, a column with m outlinks will have 1/m as value in respective m cells. Conceptually, a value in the cell a x b gives the probability of a random-surfer in node B jumping to node A.
Step2: Initialize PageRank vector, such that all the nodes have equal PageRank score adding upto 1.
Step3: On applying the above Transition-matrix transformation iteratively on the PageRank vector, the vector will eventully converge such that
Step4: 2a. Nodes with high PageRank scores
Step5: 2b. Top 100 influential-nodes in Ego-Network
Step6: 2c. Histogram of PageRank scores
PageRank scores are scaled such that nodes have an average-score of 1. So the scores below give an idea of how influential are the nodes, with respect to an average node.
Step7: A joint-plot showing how Inlinks and Outlinks of the nodes are distributed (within the ego-network)
Step8: 3. Identify implicit clusters using Clustering algos
Typically, number of clusters are chosen with a plot of within-cluster sum-of-squares-of-dstances vs number of clusters.
Here for simplicity, we use a simple heuristic to fix the number of clusters in advnace.
Step9: 3a. K-Means clustering
K Means is a point-assignment based clustering-algorithm
Step10: 3b. Spectral clustering
One problem in using K-Means algorithm to cluster the given social-graph is 'Curse of Dimensionality' i.e. at higher-dimensions (here ~400 dimensions), metric like 'Euclidean distance' or 'Centre' would have little meaning in the context of non-convex adjacency matrix.
On the other hand, spectral clustering attempts to partition the graph such that number of edges which connect different components are minimized. Below is the output from Spectral Clustering.
Step11: Nodes mainly into Deep Learning community have grouped into Cluster 8, Python Machine Learning community into Cluster 2, Design community into Cluster 9, general Data Science community into Cluster 0 and 5
One smaller clusters (1) wasn't shown above.
Step12: 3c. Affinity Propagation clustering
Unlike K-Means or Spectral clustering, affinity propagation doesn't require the number of clusters to be estimated beforehand. Here the algorithm finds 'exemplars' i.e. members of dataset that are representative of clusters. and tries to find clusters around them.
Step13: 3d. Principal Component Analysis
To handle the 'curse of dimensionality' problem inherent in high-dimension data, PCA is generally used to represent the high-dimensional data in a fewer number of dimensions - this helps in better visualizing of data as well as faster computation while running clustering algorithm.
Step14: A simpler visualization of Spectral-clustering outcome as rendered in 2 dimensions.
Step15: More on Clustering Algorithms
Step16: Recommendations on the basis of a specific cluster outcome.
E.g. Nodes to follow on the basis of top PageRank nodes in 'Spectral Clustering
Step17: 4c.Reco | <ASSISTANT_TASK:>
Python Code:
from collections import Counter, defaultdict
from datetime import datetime
from sklearn.decomposition import PCA
import csv
import matplotlib.pyplot as plt
import numpy as np
import os.path
import pandas as pd
import re
import seaborn as sns; sns.set()
import time
import twitter
% matplotlib inline
plt.rcParams['figure.figsize'] = (10,7)
import warnings
warnings.filterwarnings("ignore")
#print twitter.__path__
import random
random.seed(1000)
# Pandas data-frame print formatting
from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))
self_screen_name = 'bala_io' # Self
# Keep appending data
fof_filename = "edges.csv" # 'Alters' and their Source->Sink Edges.
cache_filename = "cache.csv" # local cache of (TwitterId, FullName, UserName)
# One-time-use files
binaryMap_filename = "binaryMap.csv" # Directed Graph. Adjacencies as 0/1.RowFollowCol
cluster_filename = "results.csv"
# Twitter auth. https://dev.twitter.com/oauth/overview/application-owner-access-tokens
with open("../../passwd/credentials.txt", "r") as f:
reader = csv.reader(f )
login_dict = {line[0]: line[1]
for line in reader}
api = twitter.Api(consumer_key=login_dict.get('consumer_key') ,
consumer_secret=login_dict.get('consumer_secret'),
access_token_key=login_dict.get('access_token_key'),
access_token_secret=login_dict.get('access_token_secret'))
api
# 'Self' and Friends of Self
self_node = api.GetUser(screen_name = self_screen_name)
self_node_id = str(self_node.id) # Twitter Id of Self
friends_of_self = api.GetFriendIDs(user_id = self_node_id,
screen_name = self_screen_name ,
stringify_ids = True)
index = [self_node_id] + friends_of_self
# GetFriendIDs() API call is rate-limited at 15 req / 15 min
# https://dev.twitter.com/rest/public/rate-limiting
# For each of the list of nodes, fetch the list of nodes it follows, append to file
def update_FoF_File(fileName, to_fetch_list):
with open(fileName, 'a') as f:
apiReqCount = 0
for node in to_fetch_list:
friends_of_node = api.GetFriendIDs(user_id = node, stringify_ids = True)
row = ','.join([str(i) for i in [node] + friends_of_node ]) + "\n"
f.write(row)
apiReqCount += 1
if (apiReqCount == 15):
apiReqCount = 0
print("Off to Sleep :)")
time.sleep(15*60 + 10)
# parse FoF file and return list of nodes, for whom source->sink Edges are already there.
def getFinishList(fileName):
if not os.path.isfile(fileName):
return []
with open(fileName, 'r') as f:
return [ row.strip().split(',')[0] for row in f ] # 1st entry is a user
# Ego-network as adjacency-matrix
# Parses FoF file in order of index, create list of adjacencies as 0 | 1
# Writes to File. Adjacency-matrix in Row_follows_Column format
def updateBinaryMapFile(fof_filename, binaryMap_filename, index):
with open(fof_filename, "r") as f:
stripped_f = (line.replace('\r', '') for line in f)
reader = csv.reader(stripped_f)
fof_dict = {line[0]: line[1:] for line in reader
if line[0] in index} # dict of node:his_followers
if self_node_id not in fof_dict:
fof_dict[self_node_id] = index[1:] # dict of Self
bool_list = []
for user in index:
user_friends = set( fof_dict[user] )
bool_row = [item in user_friends for item in index] # for each, fill T/F
bool_list.append(bool_row)
int_nparray = np.array(bool_list) + 0 # Bool to int
binaryMap_rfc = pd.DataFrame(data = int_nparray, columns= index, index = index)
binaryMap_rfc.to_csv(binaryMap_filename)
# For list of Ids, fetch Profile details. If not in Offline file, make an API call
# Returns ['UserName', 'FullName', 'Followers_count', 'Friends_count', 'Location', 'Created_at']
# UsersLookup API 100 Ids/request
def lookup_in_cache(friendsIdsList):
cache, delta_cache = pd.DataFrame(), pd.DataFrame()
UserNameList, namesList = [], []
followers_count, friends_count = [], []
location, created_at = [], []
if os.path.isfile(cache_filename):
cache = pd.read_csv(cache_filename, skip_blank_lines=True,
dtype={'Ids':str, 'Friends_count':int, 'Followers_count':int})
cache.set_index('Ids', inplace=True)
to_fetch_list = list ( set (friendsIdsList) - set(cache.index) )
else :
to_fetch_list = friendsIdsList
i = 0
while (i < len(to_fetch_list) * 1./100):
print("... Cache-Miss for " + str(len(to_fetch_list)) + " nodes. Updating Cache...")
low, high = i * 100, min( len(to_fetch_list), (i+1)*100 ) # UsersLookup api
twitterObjectsList = api.UsersLookup(user_id = to_fetch_list[low:high])
temp = zip(*[( tempObject.screen_name, #ScreenName
tempObject.name, #Name
tempObject.followers_count, #Followers
tempObject.friends_count, #Friends
tempObject.location, #Location
tempObject.created_at #CreatedAt
) for tempObject in twitterObjectsList])
temp = list(temp)
UserNameList += list(temp[0])
namesList += list(temp[1])
followers_count += list(temp[2])
friends_count += list(temp[3])
location += list(temp[4])
created_at += list(temp[5])
i = i + 1
if len(to_fetch_list) > 0:
delta_cache = pd.DataFrame({'UserName':UserNameList,
'FullName':namesList,
'Ids': to_fetch_list,
'Followers':followers_count,
'Friends': friends_count,
'Location':location,
'Created':created_at})
delta_cache['Created'] = delta_cache['Created'].apply(lambda x:
datetime.strptime(
re.sub(r"\+[0-9]* ", "",x),'%c').
strftime("%b-%Y"))
delta_cache.set_index('Ids', inplace=True, drop = True)
cache = cache.append(delta_cache)
cache.to_csv(cache_filename)
return cache.loc[friendsIdsList]
# Display cluster-wise most-influential users, for the given clustering algo
def top_nodes_in_cluster(df, cluster_algo, n_clusters):
dummy_df = pd.DataFrame()
for i in range(n_clusters):
nodes_in_cluster = list( df [df[cluster_algo] == i ]['FullName'] )
if len(nodes_in_cluster) >= 10: # show only clusters of size > 10
col_name = str(i) + " : " + str(len(nodes_in_cluster)) + " Ids"
dummy_df[col_name] = nodes_in_cluster[:10]
return dummy_df
# identify 20 friends to follow after aggregating friends followed by top 50% in list
def discover_Friends_toFollow(ids_of_interest, friend_list, prop = .5, count = 20):
ids_of_interest = ids_of_interest[:int(len(ids_of_interest) * prop)]
if self_node_id in ids_of_interest:
ids_of_interest.remove(self_node_id)
print("'Who-To-Follow' reco after looking at %3d friends' friends:" %(len(ids_of_interest)))
with open(fof_filename) as f:
reader = csv.reader(f)
fof_dict = {row[0]:row[0:] for row in reader} # dict of node:her_followers
friendsToFollow = []
for id in ids_of_interest:
friendsToFollow += list (set(fof_dict[str(id)]) - set(friend_list) )
friendsToFollow = Counter(friendsToFollow).most_common(count)
tuples_list = list(zip(*friendsToFollow) )
topFriendsToFollowDF = pd.DataFrame()
topFriendsToFollowDF['Ids'] = list(tuples_list[0])
topFriendsToFollowDF['Freq'] = list(tuples_list[1])
topFriendsToFollowDF.set_index('Ids', drop = True, inplace = True)
index = topFriendsToFollowDF.index
topFriendsToFollowDF = topFriendsToFollowDF.merge(lookup_in_cache(index), copy = False,
left_index = True, right_index = True)
return topFriendsToFollowDF
# For the list of nodes I follow, fetch their friends-list
fof_finish_list = getFinishList(fof_filename ) # Completed nodes
fof_to_fetch_list = list ( set(friends_of_self) - set(fof_finish_list) ) # Pending nodes
print( str(len(fof_to_fetch_list)) + " out of " + str(len(index) - 1) +
" Friends details to be fetched")
# For the remaining nodes, populate their details in fof_file
update_FoF_File(fof_filename, fof_to_fetch_list)
# Build the adjacency matrix in terms of 0 and 1 (if there is an edge)
updateBinaryMapFile(fof_filename, binaryMap_filename, index)
# Read adj-matrix into df. Cell MxN is 1 iff node in Mth row follows node in Nth column
binaryMap_rfc = pd.read_csv(binaryMap_filename, skip_blank_lines=True, index_col = 0)
print(binaryMap_rfc.shape)
outlinks_count = binaryMap_rfc.sum(axis = 1) # horizontal-sum to count outlinks
inlinks_count = binaryMap_rfc.sum(axis = 0) # vertical-sum to count inlinks
# Histogram of number of OutLinks per node, within ego-network
sns.distplot(outlinks_count, bins = len(index)//10, kde=False);
# Histogram of number of InLinks per node, within ego-network
sns.distplot(inlinks_count, bins = len(index)//10, kde=False);
Explanation: When it comes to consuming real-time content, Twitter is the place to be; Be it sending out 'Tweets' real-time, or discovering latest online 'Trends' anywhere, or ability to begin a 'conversation' with anyone, Twitter does it all. In fact, Twitter Management wrote this in their letter to shareholders last year.
We're focused now on what Twitter does best: live. Twitter is live: live commentary, live connections, live conversations. Whether it's breaking news, entertainment, sports, or everyday topics, hearing about and watching a live event unfold is the fastest way to understand the power of Twitter.
Twitter has always been considered a "second screen" for what's happening in the world and we believe we can become the first screen for everything that's happening now. And by doing so, we believe we can build the planet's largest daily connected audience. A connected audience is one that watches together, and can talk with one another in real-time. It's what Twitter has provided for close to 10 years, and it's what we will continue to drive in the future.
Embedded in a Twitter user's social-graph* is a wealth of information on the user's likes and interests. Unlike Facebook or LinkedIn, the magic of Twitter is in its 'Follow' structure - where anyone can follow anyone without knowing each other. This directed social-graph, when methodically summarized, can reveal interesting information about the most influential/central friends in the network and also help personalize/enrich one's Twitter experience by unearthing implicit clusters in the network.
*Social-graph: [User, 1st degree Connections, and the links between]
In this notebook, we'll look into these:
1. Extract the ego-network of 'self' using Twitter API
+ Identify the influential nodes using 'Page Rank' formulation
+ Identify implicit clusters formed
+ Recommend new friends to follow on the basis of cluster of interest
Note: This study is limited to Ego network: the focal node ("ego": here the self-node) and the nodes to whom ego is directly connected to ("alters") plus the ties, if any, among the alters.
1. Extract social-graph structure using Twitter API
End of explanation
binaryMap_cfr = binaryMap_rfc.transpose() # column-values: Outlinks
binaryMap_cfr_norm = binaryMap_cfr / binaryMap_cfr.sum(axis = 0)
colStochMatrix = np.matrix( binaryMap_cfr_norm.fillna(0)) # column-stochastic-matrix
Explanation: 2. Identify influential friends using 'Page Rank' formulation
From the adjacency matrix generated above, we can construct a column-stochastic matrix (also called a transition matrix) such that, a column with m outlinks will have 1/m as value in respective m cells. Conceptually, a value in the cell a x b gives the probability of a random-surfer in node B jumping to node A.
End of explanation
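As a toy illustration of that construction (three hypothetical nodes A, B, C; not part of the notebook's pipeline):
import numpy as np
import pandas as pd
# Row-follows-column adjacency: A follows B and C, B follows C, C follows A.
adj_rfc = pd.DataFrame([[0, 1, 1],
                        [0, 0, 1],
                        [1, 0, 0]], index=list('ABC'), columns=list('ABC'))
adj_cfr = adj_rfc.transpose()                    # column j now lists who j follows
col_stochastic = adj_cfr / adj_cfr.sum(axis=0)   # a column with m outlinks gets 1/m in each of its m cells
print(col_stochastic)                            # column A: 0.5 for B and C; column C: 1.0 for A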
pageRankVector = np.matrix([1.0/len(index)] * len(index)) # initialize page-rank-vector
pageRankVector = pageRankVector.transpose() # transpose to column-vector
Explanation: Initialize PageRank vector, such that all the nodes have equal PageRank score adding upto 1.
End of explanation
# PageRank algo: Power Iteration to solve Markov transition matrix
# refer this : http://setosa.io/blog/2014/07/26/markov-chains/index.html
beta = 0.85
epsilon = 999
iteration = 0
while epsilon > (1.0/(10**16)):
pageRankVectorUpdating = colStochMatrix * pageRankVector * beta
# re-insert leaked page-ranks
S = np.array(pageRankVectorUpdating).sum()
pageRankVectorUpdated = pageRankVectorUpdating + (
1 - S) * (1.0/len(index)) * np.ones_like(len(index))
# compute the squared-difference and check for convergence
error = np.array(pageRankVectorUpdated - pageRankVector)
epsilon = np.sqrt((error * error).sum())
iteration = iteration + 1
pageRankVector = pageRankVectorUpdated
print( "Sum of Page-Rank Scores: " + str(pageRankVector.sum()) +
"\nConverged in " + str(iteration) + " iterations")
# Collect the results
results_df = pd.DataFrame()
results_df['Ids'], results_df['PageRank'] = index, pageRankVector
results_df['Inlinks'], results_df['Outlinks'] = list(inlinks_count), list(outlinks_count)
results_df = results_df.set_index('Ids', drop = True )
results_df = results_df.merge(lookup_in_cache(index), copy = False,
left_index = True, right_index = True)
results_df = results_df[['PageRank','UserName', 'FullName', 'Inlinks' , 'Outlinks',
'Followers','Friends', 'Location', 'Created' ]]
Explanation: On applying the above transition-matrix transformation iteratively on the PageRank vector, the vector will eventually converge such that:
Matrix.Vector = Vector
Equivalently, this is the eigen-vector formulation, with the PageRank vector being the principal eigen-vector of the matrix corresponding to eigen-value 1 [since M is a column-stochastic matrix, the principal eigen-value is 1].
Here we use the Power Iteration method to solve for the PageRank vector. In order to handle nodes which have zero out-links (dead-ends) and nodes with periodic loops (spider-traps) in the ego-network, we introduce some randomness through the parameter beta, such that a random surfer at any node picks a random path approximately 1 out of every 6 times (1 - beta = 0.15).
End of explanation
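For reference, the update implemented in the loop above is the damped PageRank iteration (written here as a sketch; the code folds dead-end leakage into the same uniform re-insertion): $r^{(t+1)} = \beta M r^{(t)} + \frac{1 - S}{N}\mathbf{1}$, where $M$ is the column-stochastic matrix, $S$ is the sum of the entries of $\beta M r^{(t)}$, $\beta = 0.85$ and $N$ is the number of nodes. With no dead-ends this reduces to the familiar $r = \beta M r + \frac{1-\beta}{N}\mathbf{1}$, and the loop stops once $\lVert r^{(t+1)} - r^{(t)} \rVert_2 < 10^{-16}$.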
results_df.fillna('').sort_values(by = 'PageRank', ascending =False).set_index('FullName').head(10)
Explanation: 2a. Nodes with high PageRank scores
End of explanation
dummy_df = pd.DataFrame()
temp_df = results_df.sort_values( by = 'PageRank', ascending =False)
for i in range(10):
dummy_df[i] = list (temp_df [10*i : 10* i + 10]['FullName'])
dummy_df
Explanation: 2b. Top 100 influential-nodes in Ego-Network
End of explanation
pageRank_to_plot = len(index) * results_df["PageRank"]
sns.distplot(pageRank_to_plot, kde=False, rug=True, bins = len(index)//10);
Explanation: 2c. Histogram of PageRank scores
PageRank scores are scaled such that nodes have an average-score of 1. So the scores below give an idea of how influential are the nodes, with respect to an average node.
End of explanation
sns.jointplot(x="Inlinks", y="Outlinks", data=results_df, kind = "kde");
Explanation: A joint-plot showing how Inlinks and Outlinks of the nodes are distributed (within the ego-network)
End of explanation
n_clusters = min( int( round(np.sqrt(len(index)/2)) ), 10 ) # not more than 10 clusters
print(n_clusters)
Explanation: 3. Identify implicit clusters using Clustering algos
Typically, the number of clusters is chosen from a plot of within-cluster sum-of-squared-distances vs the number of clusters (the 'elbow' plot).
Here, for simplicity, we use a simple heuristic to fix the number of clusters in advance.
End of explanation
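A sketch of the elbow-plot alternative mentioned above (not run in this notebook; it reuses the binaryMap_cfr matrix and scikit-learn's KMeans inertia):
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
inertias = []
k_values = range(2, 16)
for k in k_values:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(binaryMap_cfr)
    inertias.append(km.inertia_)   # within-cluster sum of squared distances
plt.plot(list(k_values), inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('within-cluster sum of squares (inertia)')
# pick k near the 'elbow' where the curve starts to flatten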
from sklearn.cluster import KMeans
est = KMeans(max_iter = 100000, n_clusters = n_clusters, n_init = 200, init='k-means++')
results_df['kmeans'] = est.fit_predict(binaryMap_cfr)
top_nodes_in_cluster(results_df.sort_values( by = 'PageRank', ascending =False),
'kmeans', n_clusters)
Explanation: 3a. K-Means clustering
K-Means is a point-assignment based clustering algorithm: we start with k points chosen randomly as centroids, assign the remaining points to the k centroids using a distance measure (Euclidean / Cosine / Jaccard), then compute new centroids, re-assign the points, and repeat until the centroids converge.
Here we are more interested in the clustering inherent in the who-follows-me (network-driven) graph, rather than who-do-I-follow (self-driven). So the approach is to represent the nodes as observations and whether other nodes follow them or not (1 or 0) as features (one observation per row, one feature per column). Thus the input matrix must be such that a value in a cell indicates whether the node in the column follows the node in the row.
End of explanation
from sklearn import cluster
spectral = cluster.SpectralClustering(n_clusters=n_clusters, n_init = 500,
eigen_solver='arpack',
affinity="nearest_neighbors")
spectral.fit(binaryMap_cfr)
results_df['spectral'] = spectral.labels_.astype(np.int)
top_nodes_in_cluster(results_df.sort_values( by = 'PageRank', ascending =False),
'spectral', n_clusters)
Explanation: 3b. Spectral clustering
One problem with using the K-Means algorithm to cluster this social graph is the 'Curse of Dimensionality': in higher dimensions (here ~400), metrics like 'Euclidean distance' and the notion of a 'centre' carry little meaning for a non-convex adjacency matrix.
Spectral clustering, on the other hand, attempts to partition the graph so that the number of edges connecting different components is minimized. Below is the output from spectral clustering.
End of explanation
results_df [results_df['spectral'].isin([1])].sort_values(by = 'PageRank', ascending =False).set_index('FullName').head()
Explanation: Nodes mainly from the Deep Learning community have grouped into Cluster 8, the Python Machine Learning community into Cluster 2, the Design community into Cluster 9, and the general Data Science community into Clusters 0 and 5.
One smaller cluster (1) isn't shown above.
End of explanation
from sklearn.cluster import AffinityPropagation
af = AffinityPropagation(preference=-50).fit(binaryMap_cfr)
results_df['affinity'] = af.labels_
n_clusters_affinity = len(af.cluster_centers_indices_)
print(str(n_clusters_affinity) + " affinity clusters.")
top_nodes_in_cluster(results_df.sort_values( by = 'PageRank', ascending =False),
'affinity', n_clusters_affinity)
Explanation: 3c. Affinity Propagation clustering
Unlike K-Means or Spectral clustering, affinity propagation doesn't require the number of clusters to be estimated beforehand. Here the algorithm finds 'exemplars', i.e. members of the dataset that are representative of clusters, and tries to form clusters around them.
End of explanation
pca = PCA(n_components=3)
Xproj = pca.fit_transform(binaryMap_cfr)
results_df['dim1'] = Xproj[:,0]
results_df['dim2'] = Xproj[:,1]
results_df['dim3'] = Xproj[:,2]
results_df = results_df.sort_values( by = 'PageRank', ascending =False)
results_df.to_csv(cluster_filename)
print("Explained-variance and Proportion of Explained-variance in 3 dimensions [dim1 dim2 dim3]")
print(pca.explained_variance_, pca.explained_variance_ratio_)
Explanation: 3d. Principal Component Analysis
To handle the 'curse of dimensionality' problem inherent in high-dimensional data, PCA is generally used to represent the data in fewer dimensions - this helps with visualization as well as faster computation when running a clustering algorithm.
End of explanation
# Spectral clustering | Plot the ego-network in 2 dimensions
plt.figure(num=None, figsize=(20, 10), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(results_df['dim1'], results_df['dim2'], s = 100 ,c= results_df['spectral'],
alpha=0.5, cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
Explanation: A simpler visualization of Spectral-clustering outcome as rendered in 2 dimensions.
End of explanation
discover_Friends_toFollow(ids_of_interest = index,
friend_list = index,
prop = .5 , count = 20).fillna('').set_index('FullName')
Explanation: More on Clustering Algorithms: http://scikit-learn.org/stable/modules/clustering.html
4. Recommend new friends to follow
Now that we have PageRank in place for nodes in social-graph, we can ask for recommendations on the basis of top-ranked nodes in the graph. E.g. To get 20 recommendations, after looking at friends of top PageRank scoring nodes in my network
4a. Reco: After looking at top nodes in full ego-network
End of explanation
favorite_cluster_df = results_df [results_df['spectral'].isin([0,2,5,8])]
favorite_cluster_list = list(favorite_cluster_df.index)
discover_Friends_toFollow(ids_of_interest = favorite_cluster_list,
friend_list = index,
prop = .5, count = 30).fillna('').set_index('FullName')
Explanation: Recommendations on the basis of a specific cluster outcome.
E.g. Nodes to follow on the basis of top PageRank nodes in 'Spectral Clustering::Data' clusters.
Reco: 4b. After looking at Data-Science clusters (Spectral clustering)
End of explanation
discover_Friends_toFollow(ids_of_interest = list(results_df [results_df['spectral'] == 9].index),
friend_list = index,
prop = 1, count = 20).fillna('').set_index('FullName')
Explanation: 4c.Reco: After looking at the Design cluster (Spectral clustering)
End of explanation
<END_TASK> |
15,810 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Monitor Convergence for Run 6
Applying multiple convergence checks for run 6, which adopted a floating Y and alpha. Up to now, we have monitored convergence by visually inspecting trace plots. It would be useful to know if convergence has been obtained using other metrics.
Step1: Defining convergence diagnostics
(1) trace plot, (2) acceptance fraction, (3) Gelman-Rubin diagnostic, (4) autocorrelation, (5) moving average. Others to consider
Step2: Process samples | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Monitor Convergence for Run 6
Applying multiple convergence checks for run 6, which adopted a floating Y and alpha. Up to now, we have monitored convergence by visually inspecting trace plots. It would be useful to know if convergence has been obtained using other metrics.
End of explanation
def tracePlot(chains, labels=None, truths=None):
n_dim = chains.shape[2]
fig, ax = plt.subplots(n_dim, 1, figsize=(8., 27.), sharex=True)
ax[-1].set_xlabel('Iteration', fontsize=20.)
for i in range(len(ax)):
try:
ax[i].set_ylabel(labels[i], fontsize=20.)
except IndexError:
pass
ax[i].tick_params(which='major', axis='both', length=10., labelsize=16.)
for j in range(len(chains)):
try:
ax[i].plot([0, len(chains[j,:,i])+10], [truths[i], truths[i]], '-', lw=4, dashes=(20., 10.),
c='#B22222')
except:
pass
ax[i].plot(chains[j,:,i], '-', lw=1, c='#0473B3', alpha=0.5)
fig.tight_layout()
def GelmanRubin(chains, labels=None):
n_chains = chains.shape[0]
n_iter = chains.shape[1]/2
n_params = chains.shape[2]
# take last n samples if total was 2n
sample = chains[:,-n_iter:,:]
# compute mean of intra-chain (within) variances
W = np.mean(np.var(sample, axis=1), axis=0)
# compute mean of inter-chain (between) variances
chain_means = np.mean(sample, axis=1)
mean_of_chain_means = np.mean(chain_means, axis=0)
B = np.empty(n_params)
for i in range(n_params):
B[i] = np.sum((chain_means[:, i] - mean_of_chain_means[i])**2)*n_iter/(n_chains - 1.)
# estimated variance (likely an over-estimate)
Sigma_hat_2 = ((n_iter - 1.)*W + B)/n_iter
# pooled posterior variance
Var_hat = Sigma_hat_2 + B/(n_chains*n_iter)
# correction for degree of freedom
# compute potential scale reduction factor
PSRF = np.sqrt(Var_hat/W)
return W, B, Var_hat, PSRF
Explanation: Defining convergence diagnostics
(1) trace plot, (2) acceptance fraction, (3) Gelman-Rubin diagnostic, (4) autocorrelation, (5) moving average. Others to consider: Geweke diagnostic, Raftery-Lewis diagnostic, Heidelberger-Welch diagnostic, ...
End of explanation
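For reference, the quantities returned by GelmanRubin above are, for $m$ chains of $n$ retained iterations: $W$ the mean of the within-chain variances, $B$ the between-chain variance, $\hat{\sigma}^2 = \frac{(n-1)W + B}{n}$, the pooled estimate $\hat{V} = \hat{\sigma}^2 + \frac{B}{mn}$, and the potential scale reduction factor $\hat{R} = \sqrt{\hat{V}/W}$; values of $\hat{R}$ close to 1 are consistent with convergence.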
# test with Gl 876, the largest number of iterations
flatchain = np.genfromtxt('/Users/grefe950/Software/StarBay/interbay/chains/run06/GJ876_W0300_N0600_B0000.dat')
chains = flatchain.reshape(300, -1, 9)
labels=['Mass', '[Fe/H]', 'Y', 'log(Age)', 'Distance', 'alpha', 'log(Teff)', 'log(Fbol)', 'theta']
truths = [np.nan, 0.17, np.nan, np.nan, 1./0.21328, np.nan, np.log10(3189.), np.log10(1.9156e-8), 0.746]
tracePlot(chains, labels=labels, truths=truths)
GelmanRubin(chains, labels=labels)
Explanation: Process samples
End of explanation
<END_TASK> |
15,811 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
That looks like the best way to represent the data if we want to calculate the $R^2$ distance on a per-symbol basis. I could add it to the single val function.
Step1: Now, let's implement the rolling validation.
Step2: So, I could use a training period based in an amount of market days, or in an amount of sample base periods. The first approach would be taking into consideration the temporal correlation of the data, the second would consider that the amount of samples should be large enough. Not to lose sight of the real problem at hand, I will use the market days approach, and then check that the amount of samples is big enough.
Step3: A lot of attention should be paid to the effect of filling the missing data. It may change the whole results.
Step5: That last number is an approximation of the number of train/evaluation sets that are being considered.
Step6: Let's test the whole process
Step7: It seems like the weird point, in which the model is predicting terribly may be the 2008 financial crisis. And the big unpredictability is limited to one symbol. I should implement a way to trace the symbols...
What about the mean absolute error? | <ASSISTANT_TASK:>
Python Code:
def run_single_val(x, y, ahead_days, estimator):
multiindex = x.index.nlevels > 1
x_y = pd.concat([x, y], axis=1)
x_y_sorted = x_y.sort_index()
if multiindex:
x_y_train = x_y_sorted.loc[:fe.add_market_days(x_y_sorted.index.levels[0][-1], -ahead_days)]
x_y_val = x_y_sorted.loc[x_y_sorted.index.levels[0][-1]:]
else:
x_y_train = x_y_sorted.loc[:fe.add_market_days(x_y_sorted.index[-1], -ahead_days)]
x_y_val = x_y_sorted.loc[x_y_sorted.index[-1]:]
x_train = x_y_train.iloc[:,:-1]
x_val = x_y_val.iloc[:,:-1]
y_train_true = x_y_train.iloc[:,-1]
y_val_true = x_y_val.iloc[:,-1]
estimator.fit(x_train)
y_train_pred = estimator.predict(x_train)
y_val_pred = estimator.predict(x_val)
y_train_true_df = pd.DataFrame(y_train_true)
y_train_pred_df = pd.DataFrame(y_train_pred)
y_val_true_df = pd.DataFrame(y_val_true)
y_val_pred_df = pd.DataFrame(y_val_pred)
return y_train_true, \
y_train_pred, \
y_val_true, \
y_val_pred
y_train_true, y_train_pred, y_val_true, y_val_pred = run_single_val(x, y, 1, predictor)
print(y_train_true.shape)
print(y_train_pred.shape)
print(y_val_true.shape)
print(y_val_pred.shape)
print(y_train_true.shape)
y_train_true.head()
y = y_train_true
multiindex = y.index.nlevels > 1
if multiindex:
DATE_LEVEL_NAME = 'level_0'
else:
DATE_LEVEL_NAME = 'index'
DATE_LEVEL_NAME
y.reset_index()
reshape_by_symbol(y_train_true)
Explanation: That looks like the best way to represent the data if we want to calculate the $R^2$ distance on a per-symbol basis. I could add it to the single val function.
End of explanation
train_eval_days = -1 # In market days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
train_days = 252 # market days per training period
step_eval_days = 30 # market days between training periods beginings
filled_data_df = pp.fill_missing(data_df)
tic = time()
x, y = fe.generate_train_intervals(filled_data_df,
train_eval_days,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
x_y_sorted = pd.concat([x, y], axis=1).sort_index()
x_y_sorted
start_date = x_y_sorted.index.levels[0][0]
start_date
end_date = fe.add_market_days(start_date, 252)
end_date
end_date = fe.add_index_days(start_date, 252, x_y_sorted)
end_date
Explanation: Now, let's implement the rolling validation.
End of explanation
end_date = fe.add_market_days(start_date, 252)
x_i = x_y_sorted.loc[start_date:end_date].iloc[:,:-1]
y_i = x_y_sorted.loc[start_date:end_date].iloc[:,-1]
print(x_i.shape)
print(x_i.head())
print(y_i.shape)
print(y_i.head())
ahead_days
predictor = dmp.DummyPredictor()
y_train_true, y_train_pred, y_val_true, y_val_pred = run_single_val(x_i, y_i, ahead_days, predictor)
print(y_train_true.shape)
print(y_train_pred.shape)
print(y_val_true.shape)
print(y_val_pred.shape)
y_train_pred.head()
y_train_pred.dropna(axis=1, how='all').shape
scores = r2_score(pp.fill_missing(y_train_pred), pp.fill_missing(y_train_true), multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores), 2*np.std(scores)))
scores = r2_score(y_train_pred, y_train_true, multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores), np.std(scores)))
len(scores)
y_val_true_df = pd.DataFrame()
y_val_true
y_val_true_df.append(y_val_true)
Explanation: So, I could use a training period based in an amount of market days, or in an amount of sample base periods. The first approach would be taking into consideration the temporal correlation of the data, the second would consider that the amount of samples should be large enough. Not to lose sight of the real problem at hand, I will use the market days approach, and then check that the amount of samples is big enough.
End of explanation
x.index.min()
x.index.max()
x.index.max() - x.index.min()
(x.index.max() - fe.add_market_days(x.index.min(), train_days)).days // step_days
Explanation: A lot of attention should be paid to the effect of filling in the missing data. It may change the results entirely.
End of explanation
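A common pattern for that filling step - a sketch of what a helper like pp.fill_missing typically does (this is an assumption for illustration, not the module's actual code):
import pandas as pd
def fill_missing_sketch(df: pd.DataFrame) -> pd.DataFrame:
    # Forward-fill so gaps carry the last known value, then back-fill any leading NaNs.
    return df.fillna(method='ffill').fillna(method='bfill')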
def roll_evaluate(x, y, train_days, step_eval_days, ahead_days, verbose=False):
"""Warning: The final date of the period should be no larger than the final date of the SPY_DF"""
# calculate start and end date
# sort by date
x_y_sorted = pd.concat([x, y], axis=1).sort_index()
start_date = x_y_sorted.index[0]
end_date = fe.add_market_days(start_date, train_days)
final_date = x_y_sorted.index[-1]
# loop: run_single_val(x,y, ahead_days, estimator)
r2_train_means = []
r2_train_stds = []
y_val_true_df = pd.DataFrame()
y_val_pred_df = pd.DataFrame()
num_training_sets = (252/365) * (x.index.max() - fe.add_market_days(x.index.min(), train_days)).days // step_eval_days
set_index = 0
if verbose:
print('Evaluating approximately %i training/evaluation pairs' % num_training_sets)
while end_date < final_date:
x = x_y_sorted.loc[start_date:end_date].iloc[:,:-1]
y = x_y_sorted.loc[start_date:end_date].iloc[:,-1]
y_train_true, y_train_pred, y_val_true, y_val_pred = run_single_val(x, y, ahead_days, predictor)
# Calculate R^2 for training and append
scores = r2_score(y_train_true, y_train_pred, multioutput='raw_values')
r2_train_means.append(np.mean(scores))
r2_train_stds.append(np.std(scores))
# Append validation results
y_val_true_df = y_val_true_df.append(y_val_true)
y_val_pred_df = y_val_pred_df.append(y_val_pred)
# Update the dates
start_date = fe.add_market_days(start_date, step_eval_days)
end_date = fe.add_market_days(end_date, step_eval_days)
set_index += 1
if verbose:
sys.stdout.write('\rApproximately %2.1f percent complete. ' % (100.0 * set_index / num_training_sets))
sys.stdout.flush()
return r2_train_means, r2_train_stds, y_val_true_df, y_val_pred_df
Explanation: That last number is an approximation of the number of train/evaluation sets that are being considered.
End of explanation
train_eval_days = -1 # In market days
base_days = 14 # In market days
step_days = 30 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
filled_data_df = pp.fill_missing(data_df)
tic = time()
x, y = fe.generate_train_intervals(filled_data_df,
train_eval_days,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
train_days = 252 # market days per training period
step_eval_days = 10 # market days between training periods beginings
tic = time()
r2_train_means, r2_train_stds, y_val_true_df, y_val_pred_df = roll_evaluate(x, y, train_days, step_eval_days, ahead_days, verbose=True)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
y_val_true_df.head()
pd.DataFrame(r2_train_means).describe()
scores = r2_score(y_val_true_df.T, y_val_pred_df.T, multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores), np.std(scores)))
pd.DataFrame(scores).describe()
plt.plot(y_val_true_df.index, r2_train_means, label='r2_train_means')
plt.plot(y_val_true_df.index, scores, label='r2 validation scores')
plt.legend(loc='lower left')
scores_val = r2_score(y_val_true_df, y_val_pred_df, multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores_val), np.std(scores_val)))
plt.plot(scores_val, label='r2 validation scores')
sorted_means = x.sort_index().mean(axis=1)
sorted_means.head()
sorted_means.plot()
sub_period = sorted_means['2009-03-01':]
plt.scatter(sub_period.index, sub_period)
Explanation: Let's test the whole process
End of explanation
from sklearn.metrics import mean_absolute_error
scores = mean_absolute_error(y_val_true_df.T, y_val_pred_df.T, multioutput='raw_values')
print('MAE score = %f +/- %f' % (np.mean(scores), np.std(scores)))
plt.plot(y_val_true_df.index, scores, label='MAE validation scores')
plt.legend(loc='lower left')
pd.DataFrame(scores).describe()
scores = mean_absolute_error(y_val_true_df, y_val_pred_df, multioutput='raw_values')
print('MAE score = %f +/- %f' % (np.mean(scores), np.std(scores)))
plt.plot(scores, label='MAE validation scores')
plt.legend(loc='lower left')
pd.DataFrame(scores).describe()
Explanation: It seems like the weird point, in which the model is predicting terribly may be the 2008 financial crisis. And the big unpredictability is limited to one symbol. I should implement a way to trace the symbols...
What about the mean absolute error?
End of explanation
<END_TASK> |
15,812 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load the Iris dataset from scikit-learn. The third column represents the petal length, and the fourth column the petal width of the flower samples. The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
Train different classifiers using off-the-shelf algorithms in scikit-learn.
Step1: 1. Training a perceptron via scikit-learn
Step2: 2. Training a logistic regression model with scikit-learn
Step3: 3. Training a support vector machine (SVM) with scikit-learn
Step4: Training a kernel SVM with scikit-learn
Instead of calculating the dot product between two points explicitly, we define a kernel function, which computes the distance/similarity between two points in higher dimensional space. One of the most widely used kernels is the Radial Basis Function kernel (RBF kernel) or Gaussian kernel. The gamma parameter in rbf can be understood as a cut-off parameter for the Gaussian sphere: increasing the value of gamma shrinks the influence or reach of each training sample, which leads to a tighter, more wiggly decision boundary, while a small gamma gives a softer one.
Step5: Since we chose a relatively small value for gamma, the resulting decision boundary of the RBF kernel SVM model will be relatively soft. When we increase gamma from 0.2 to 100, we observe the effect on the decision boundary below. Although the model fits the training dataset very well, such a classifier is likely to overfit and have a high generalization error on unseen data.
Step6: 4. Training a decision tree classifier with scikit-learn
Step7: 5. Training a random forest with scikit-learn
Step8: 6. Training a KNN with scikit-learn | <ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
print('Class labels:', np.unique(y))
# Splitting data into 70% training and 30% test data:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
# Standardizing the features:
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
Explanation: Load the Iris dataset from scikit-learn. The third column represents the petal length, and the fourth column the petal width of the flower samples. The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
Train different classifiers using off-the-shelf algorithms in scikit-learn.
End of explanation
from sklearn.linear_model import Perceptron
ppn = Perceptron(n_iter=40, eta0=0.1, random_state=0)
ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print('Misclassified sample: %d' % (y_test != y_pred).sum())
from sklearn.metrics import accuracy_score
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
# plot all samples
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
alpha=1.0,
linewidths=1,
marker='o',
s=55, label='test set')
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined_std, y_combined,
classifier=ppn, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
Explanation: 1. Training a perceptron via scikit-learn
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1000, random_state=0)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined, classifier=lr, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
lr.predict_proba(X_test_std[0, :].reshape(1,-1))
Explanation: 2. Training a logistic regression model with scikit-learn
End of explanation
from sklearn.svm import SVC
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
Explanation: 3. Training a support vector machine (SVM) with scikit-learn
End of explanation
svm = SVC(kernel='rbf', random_state=0, gamma=0.2, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
Explanation: Training a kernel SVM with scikit-learn
Instead of calculating the dot product between two points explicitly, we define a kernel function, which computes the distance/similarity between two points in higher dimensional space. One of the most widely used kernels is the Radial Basis Function kernel (RBF kernel) or Gaussian kernel. The gamma parameter in rbf can be understood as a cut-off parameter for the Gaussian sphere. If we increase the value for gamma, we shrink the reach or influence of the individual training samples, which leads to a tighter and bumpier decision boundary.
End of explanation
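# A minimal numeric sketch (added for illustration, not part of the original notebook):
# the RBF kernel value k(x, x') = exp(-gamma * ||x - x'||^2) decays with distance, and a
# larger gamma makes it decay faster, so each training sample only influences a small
# neighbourhood. The two points below are arbitrary placeholders.
import numpy as np
xa_demo = np.array([1.0, 2.0])
xb_demo = np.array([2.0, 3.0])
sq_dist_demo = np.sum((xa_demo - xb_demo) ** 2)
for gamma_demo in (0.2, 1.0, 100.0):
    print('gamma=%6.1f  k(xa, xb)=%.3e' % (gamma_demo, np.exp(-gamma_demo * sq_dist_demo)))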
svm = SVC(kernel='rbf', random_state=0, gamma=100.0, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
Explanation: Since we chose a relatively small value for gamma, the resulting decision boundary of the RBF kernel SVM model will be relatively soft. When we increase gamma from 0.2 to 100, we observe the effect on the decision boundary below. Although the model fits the training dataset very well, such a classifier is likely to overfit and have a high generalization error on unseen data.
End of explanation
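# A quick check to make the overfitting claim concrete (added for illustration, not part
# of the original notebook): compare training and test accuracy for a small and a large
# gamma; the gap should widen noticeably for gamma=100.
for gamma_demo in (0.2, 100.0):
    svm_demo = SVC(kernel='rbf', random_state=0, gamma=gamma_demo, C=1.0)
    svm_demo.fit(X_train_std, y_train)
    print('gamma=%5.1f  train acc=%.3f  test acc=%.3f'
          % (gamma_demo,
             svm_demo.score(X_train_std, y_train),
             svm_demo.score(X_test_std, y_test)))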
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined,
classifier=tree, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.show()
Explanation: 4. Training a decision tree classifier with scikit-learn
End of explanation
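# Optional sketch (added for illustration, not part of the original notebook): the fitted
# tree can be exported for visual inspection with sklearn's export_graphviz; 'tree.dot'
# is an arbitrary output file name.
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file='tree.dot',
                feature_names=['petal length', 'petal width'])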
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='entropy',
n_estimators=10,
random_state=1,
n_jobs=2)
forest.fit(X_train, y_train)
plot_decision_regions(X_combined, y_combined,
classifier=forest, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.show()
Explanation: 5. Training a random forest with scikit-learn
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
Explanation: 6. Training a KNN with scikit-learn
End of explanation
<END_TASK> |
15,813 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Displaying text on a PmodOLED
This demonstration shows how to display text on a PmodOLED using the Pynq-Z1 board.
The Digilent Pmod OLED is required. In this example it should be connected to PMODA.
Step1: You should now see the text output on the OLED, so let's try another message
Step2: Finally, capture some text from IPython shell calls and print out to OLED | <ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
from pynq.iop import Pmod_OLED
from pynq.iop import PMODA
ol = Overlay("base.bit")
ol.download()
pmod_oled = Pmod_OLED(PMODA)
pmod_oled.clear()
pmod_oled.write('Welcome to the\nPynq-Z1 board!')
Explanation: Displaying text on a PmodOLED
This demonstration shows how to display text on a PmodOLED using the Pynq-Z1 board.
The Digilent Pmod OLED is required. In this example it should be connected to PMODA.
End of explanation
pmod_oled.clear()
pmod_oled.write('Python and Zynq\nproductivity & performance')
Explanation: You should now see the text output on the OLED, so let's try another message
End of explanation
def get_ip_address():
ipaddr_slist = !hostname -I
ipaddr = (ipaddr_slist.s).split(" ")[0]
return str(ipaddr)
pmod_oled.clear()
pmod_oled.write(get_ip_address())
Explanation: Finally, capture some text from IPython shell calls and print out to OLED
End of explanation
<END_TASK> |
15,814 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steady-state simulation of organic light emitting cell
This is an example of a steady-state simulation of a light-emitting electrochemical cell.
It attempts to reproduce the results of the reference. Exact agreement is not achieved, probably because of missing details regarding the electrode model. As in the reference, a temperature of 2500 K is assumed.
Step1: Model and parameters
Step2: Results
Distribution of cations and anions
Step3: Distribution of the electric field
Step4: Distribution of electrons and holes
Step5: Distribution of holes near to contact
Step6: Comparison between drift and diffusion parts of the current | <ASSISTANT_TASK:>
Python Code:
from oedes.fvm import mesh1d
from oedes import progressbar, testing, init_notebook, models, context
init_notebook()
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
Explanation: Steady-state simulation of organic light emitting cell
This is an example of a steady-state simulation of a light-emitting electrochemical cell.
It attempts to reproduce the results of the reference. Exact agreement is not achieved, probably because of missing details regarding the electrode model. As in the reference, a temperature of 2500 K is assumed.
End of explanation
params = {'T': 2500.,
'electron.mu': 1e-6,
'electron.energy': 0.,
'electron.N0': 5e26,
'hole.mu': 1e-6,
'hole.energy': -5.,
'hole.N0': 5e26,
'electrode0.workfunction': 2.5,
'electrode0.voltage': 2.,
'electrode1.workfunction': 2.5,
'electrode1.voltage': 0.,
'cation.mu': 1e-6,
'anion.mu': 1e-6,
'npi': 2e43,
'epsilon_r': 3.
}
L = 350e-9
mesh = mesh1d(L=L, epsilon_r=3.4)
cinit = 1.25e25
model = models.BaseModel()
models.std.electronic_device(model, mesh, 'pn')
cation, anion, ic = models.std.add_ions(model, mesh, zc=1, za=-1)
model.setUp()
xinit=ic(cinit=1e24)
c=context(model,x=xinit)
c.transient(params,1,1e-10, reltol=1, abstol=1e15, relfail=20.)
o = c.output()
m = mesh
Explanation: Model and parameters
End of explanation
plt.plot(m.cells['center'] / L - 0.5, o['cation.c'], '.-', label='cations')
plt.plot(m.cells['center'] / L - 0.5, o['anion.c'], '.-', label='anions')
testing.store(o['cation.c'], rtol=1e-6)
testing.store(o['anion.c'], rtol=1e-6)
plt.yscale('log')
plt.legend(loc=0, frameon=False)
plt.ylabel('carrier density [$m^{-3}$]')
plt.xlabel('distance [reduced units]')
plt.xlim([-0.5, 0.5])
plt.ylim([1e23, 1e27])
Explanation: Results
Distribution of cations and anions
End of explanation
testing.store(o['E'], rtol=1e-6)
plt.plot(m.faces['center'] / L - 0.5, o['E'], '.-')
plt.yscale('log')
plt.ylim([1e4, 1e10])
plt.xlim([-0.5, 0.5])
plt.xlabel('distance [reduced units]')
plt.ylabel('electric field [$Vm^{-1}$]')
Explanation: Distribution of the electric field
End of explanation
plt.plot(m.cells['center'] / L - 0.5, o['hole.c'], '.-', label='holes')
plt.plot(m.cells['center'] / L - 0.5, o['electron.c'], '.-', label='electrons')
plt.plot(
m.cells['center'] /
L -
0.5,
o['R'] *
0.5e-7,
'.-',
label='recombination zone')
testing.store(o['hole.c'], rtol=1e-6)
testing.store(o['electron.c'], rtol=1e-6)
testing.store(o['R'], rtol=1e-6)
plt.xlabel('distance [reduced units]')
plt.ylabel('carrier density [$m^{-3}$]')
plt.xlim([-0.5, 0.5])
plt.legend(loc=0, frameon=False)
Explanation: Distribution of electrons and holes
End of explanation
plt.plot(m.cells['center'] / L - 0.5, o['hole.c'], '.-', label='holes')
plt.xlabel('distance [reduced units]')
plt.ylabel('carrier density [$m^{-3}$]')
plt.xlim([-0.505, -0.4])
plt.legend(loc=0, frameon=False)
Explanation: Distribution of holes near to contact
End of explanation
testing.store(o['hole.jdrift'], rtol=1e-6)
testing.store(o['hole.jdiff'], rtol=1e-6)
testing.store(o['electron.jdrift'], rtol=1e-6)
testing.store(o['electron.jdiff'], rtol=1e-6)
plt.plot(
m.faces['center'] /
L -
0.5,
o['hole.jdrift'] /
np.amax(
o['hole.jdrift']),
'.-',
label='$j^p_{drift}$')
plt.plot(
m.faces['center'] /
L -
0.5,
o['hole.jdiff'] /
np.amax(
o['hole.jdrift']),
'.-',
label='$j^p_{diff}$')
plt.xlim([-0.6, 0.1])
plt.legend(loc=0, frameon=False)
plt.xlabel('distance [reduced units]')
plt.ylabel('normalized current density')
Explanation: Comparison between drift and diffusion parts of the current
End of explanation
<END_TASK> |
15,815 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
One Dimensional Lattice Models with QuTiP
Step1: Declaring a tight binding chain with a single site unit cell
As a default the instance of Lattice1d class is initialized as an atomic chain with a unit
cell with one atom only. The user need only define the number of cells and the boundary
condition.
Step2: The user can call Periodic_Atom_Chain to print all its information.
Step3: To define a lattice with more than one site per unit cell and one or more degrees of freedom per site, the cell_num_site and cell_site_dof arguments must be used. In a case like this, specifying the intra and inter cell interactions would also be necessary (through the arguments cell_Hamiltonian and inter_hop) in most cases. However, Lattice1d() will initialize the instance with default cell_Hamiltonian and inter_hop
if the user does not provide them.
The use of display_unit_cell() and display_lattice()
The functions display_unit_cell() and display_lattice() can be used at any stage to produce visual symbolizations and elementwise information.
Step4: The user can review the attribute values of H and T from the returned Qobjs.
Step5: Multiple site per unitcell and multiple degrees of freedom per site
Specifying cell_num_site enables choosing multiple sites for a unit cell, and any combination of degrees of freedom can be chosen for each site with cell_site_dof.
Step6: The use of cell_structures()
There is an aid function, cell_structures(), that helps the user form the cell_Hamiltonian and inter_hop
arguments.
Step7: The cell_structures() function returns two lists of lists, cell_H_form and inter_cell_T_form,
whose string elements guide the user in entering the nonzero elements of cell_H
and inter_cell_T, which are np.zeros arrays of the appropriate size. The procedure
is to check a certain element in cell_H_form and insert the corresponding value into cell_H, and so on.
Step8: Similarly, we set more elements to non-zero values.
Step9: The user would have to enter all the nonzero elements in cell_H and inter_cell_T
and then convert them into Qobjs and use them in declaring the instance of Lattice1d.
Step10: cell_site_dof can take care of composite degrees of freedom such as orbitals, spins and/or excitations. For example, if each site has 4 orbitals and 2 spins, setting cell_site_dof = [4,2] defines that lattice. With the aid of the Lattice1d.basis() operator we can access particles localized at a specific cell, site, orbital and spin.
Valid inputs for cell_site_dof are a single int (e.g. 4) or a list of ints (e.g. [4,2]). A single dof can be entered either as an int or a list with that int. So cell_site_dof = [4] and cell_site_dof = 4 are the same.
Step11: The labels of the diagrams can be read off from the returned H and T Qobjs. For example, $H_{12}$ can be read off as follows.
Step12: Basis function
Step13: Position Operator
Calling the position operator, x() returns an operator in matrix form that gives the
cell number for all the dofs present on the diagonal elements. The length of the unit
cell is always considered 1.
Step14: The crystal momentum operator
The crystal momentum operator can be produced with the k() method.
Step15: Distribute A Operator
The operator_at_cells() function places a user-input operator on the cells specified
in a list and the identity operator on the rest.
The distribute_operator() function distributes it over all the cells indiscriminately.
Step16: Hamiltonian
Step17: Dispersion Relation
Step18: Unitary evolution of a Gaussian Wavepacket with mesolve
With no initial momentum
Step19: The wavepacket disperses over time keeping the periodic nature in space, since we picked a periodic boundary condition for space.
With initial momentum
We initialize a Gaussian state with initial crystal momentum $\pi/3$ and plot the momentum expectation value with the unitary Schrodinger evolution.
Step20: The crystal momentum operator commutes with the Hamiltonian, so it is conserved in a Hamiltonian evolution, as expected.
Step21: Due to the initial momentum, the wave-packet moves to the right while keeping its momentum, and it also disperses.
Simulating a hardwall
We can simulate a hard wall by putting a large potential on the first and last site of the lattice as well as changing the boundary condition to "aperiodic".
Step22: We confirm that the final momentum is indeed exactly the opposite of the initial momentum.
Dissipation induced translational motion
We can initialize a Gaussian state with no initial momentum and devise a dissipation-induced translational-motion scheme by forming a collapse operator that translates the wavepacket by one lattice point periodically.
Step23: The wave-packet disperses and translates to the right, but the momentum expectation remains zero, since the translation is induced by the dissipation.
Example
Step24: The three dispersion relationships for the three values of $\eta$ can be compared with the published results in Ref [2].
Unitary dynamics example
No initial momentum
Step25: With Initial momentum
Step26: translation by dissipation
Step27: References
[1] J. R. Johansson, P. D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012). http | <ASSISTANT_TASK:>
Python Code:
from qutip import *
import matplotlib.pyplot as plt
import numpy as np
Explanation: One Dimensional Lattice Models with QuTiP: Introduction
Saumya Biswas (saumyab@uoregon.edu)
For more information about QuTiP see http://qutip.org
We introduce the basic functionality of QuTiP's Lattice1d class of the lattice module.
About
The qutip.lattice module enables defining tight binding/lattice models for bosons and fermions on lattices and calculating their fundamental properties, especially features arising from the translational symmetry of the lattice. The lattice classes defined are compatible with the rest of the functionalities of QuTiP and can make use of them quite conveniently.
The module can be used to perform the widely used quantum optics calculations involving unitary/nonunitary evolutions, steady states, and correlation functions for bosons and fermions on a lattice. It also facilitates traditional condensed matter calculations of topology and ground state properties.
The Lattice1d class is well-suited for studying composite systems with multiple orbitals/excitations/spins per site of a crystal (with a unit cell possibly composed of multiple sites), with tensor structures of indices (cell number, site number, degrees of freedom) aligned with the spirit of QuTiP's tensor functionalities.
Single-particle and Multiparticle physics
All the functionalities of the Lattice1d class are for single particle physics. The multi-particle physics calculations can be performed with a subclass of the Lattice1d class which is initiated with an instance of the Lattice1d class and inherits all the information about the lattice and basis.
Unitcell structure in the Second Quantized notation
Defining an instance of the Lattice1d class requires formatting the second-quantized Hamiltonian in a unitcell based structure with nearest neighbor coupling only. However, the functionality is limited to single particle physics only in Lattice1d class methods.
\begin{eqnarray}
H = \sum_i \psi_i^{\dagger} D \psi_i + \sum_{i} \left( \psi_i^{\dagger} T \psi_{i+1} + \psi_{i+1}^{\dagger} T^{\dagger} \psi_i \right) \label{eq:TB_block}
\end{eqnarray}
where $\psi_i$ is the annihilation operator for a unit cell at coordinate $i$, $D$ is the cell Hamiltonian of the unit cell, and $T$ is the inter cell hopping. Any 1d lattice can be put in the form of the equation above by resolving it into unit cells with coupling limited to the nearest neighbors only.
The Lattice1d class is based on this unit cell and nearest neighbor hopping format. A unit cell can be comprised of one or more sites with one or more orbitals, spins, excitations or any other degrees of freedom. A 1d lattice with next nearest neighbor coupling can be equivalently represented as a 1d lattice with unit cells of larger size, limiting the hopping terms to nearest neighbors only.
How to Define a One Dimensional Lattice
End of explanation
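# A minimal sketch of the block structure above (added for illustration, not part of the
# original notebook): the tight-binding Hamiltonian H = sum_i psi_i^dag D psi_i +
# (psi_i^dag T psi_{i+1} + h.c.) for a periodic chain of N unit cells can be assembled
# directly with NumPy. The D and T values below are arbitrary placeholders chosen only to
# show the assembly, not the Lattice1d defaults.
import numpy as np
N_demo = 4
D_blk = np.array([[0.0]])      # cell Hamiltonian (1 site, 1 dof)
T_blk = np.array([[-1.0]])     # inter-cell hopping
hop_mat = np.diag(np.ones(N_demo - 1), 1)
hop_mat[N_demo - 1, 0] = 1.0   # periodic boundary term
H_assembled = (np.kron(np.eye(N_demo), D_blk)
               + np.kron(hop_mat, T_blk)
               + np.kron(hop_mat, T_blk).conj().T)
print(H_assembled)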
boundary_condition = "periodic"
cells = 3
Periodic_Atom_Chain = Lattice1d(num_cell=cells, boundary = boundary_condition)
Explanation: Declaring a tight binding chain with a single site unit cell
As a default the instance of Lattice1d class is initialized as an atomic chain with a unit
cell with one atom only. The user need only define the number of cells and the boundary
condition.
End of explanation
Periodic_Atom_Chain
Explanation: The user can call Periodic_Atom_Chain to print all its information.
End of explanation
H = Periodic_Atom_Chain.display_unit_cell(label_on = True)
T = Periodic_Atom_Chain.display_lattice()
Explanation: To define a lattice with more than one site per unit cell and one or more degrees of freedom per site, the cell_num_site and cell_site_dof arguments must be used. In a case like this, specifying the intra and inter cell interactions would also be necessary (through the arguments cell_Hamiltonian and inter_hop) in most cases. However, Lattice1d() will initialize the instance with default cell_Hamiltonian and inter_hop
if the user does not provide them.
The use of display_unit_cell() and display_lattice()
The functions display_unit_cell() and display_lattice() can be used at any stage to produce visual symbolizations and elementwise information.
End of explanation
print(H[0][0])
print(T)
Explanation: The user can review the attribute values of H and T from the returned Qobjs.
End of explanation
boundary_condition = "periodic"
cells = 3
cell_num_site = 2
cell_site_dof = [2,3] # It could be 2 orbitals and 3 spins per sites or
# any other combination of such degrees of freedom
lattice_3223 = Lattice1d(num_cell=cells, boundary = boundary_condition,
cell_num_site = cell_num_site, cell_site_dof = cell_site_dof)
Explanation: Multiple site per unitcell and multiple degrees of freedom per site
Specifying cell_num_site enables choosing multiple sites for a unit cell, and any combination of degrees of freedom can be chosen for each site with cell_site_dof.
End of explanation
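# A quick sanity check (added for illustration, not part of the original notebook): the
# single-particle Hilbert space of this lattice should have dimension
# num_cell * cell_num_site * prod(cell_site_dof), i.e. 3 * 2 * 2 * 3 = 36, which we can
# read off from the shape of its Hamiltonian.
print(lattice_3223.Hamiltonian().shape)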
val_s = ['site0', 'site1', 'site2']
val_t = [' orb0', ' orb1']
(cell_H_form,inter_cell_T_form,cell_H,inter_cell_T) = cell_structures( val_s, val_t)
Explanation: The use of cell_structures()
There is an aid function, cell_structures(), that helps the user form the cell_Hamiltonian and inter_hop
arguments.
End of explanation
cell_H_form[0][5]
cell_H[0][5] = -1-0.5j # Calculated value from hand calculation
cell_H[5][0] = -1+0.5j # keeping it Hermitian
Explanation: The cell_structures() function returns two lists of lists, cell_H_form and inter_cell_T_form,
whose string elements guide the user in entering the nonzero elements of cell_H
and inter_cell_T, which are np.zeros arrays of the appropriate size. The procedure
is to check a certain element in cell_H_form and insert the corresponding value into cell_H, and so on.
End of explanation
cell_H_form[2][5]
cell_H[2][5] = -1+0.25j # Calculated value from hand calculation
cell_H[5][2] = -1-0.25j # keeping it Hermitian
Explanation: Similarly, we set more elements to non-zero values.
End of explanation
inter_cell_T_form[5][0]
inter_cell_T[5][0] = -0.5
inter_cell_T[0][5] = -0.5
cell_H = Qobj(cell_H)
inter_cell_T = Qobj(inter_cell_T)
lattice_324 = Lattice1d(num_cell=3, boundary = "periodic", cell_num_site = 3, cell_site_dof = [2], Hamiltonian_of_cell = cell_H, inter_hop = inter_cell_T )
Explanation: The user would have to enter all the nonzero elements in cell_H and inter_cell_T
and then convert them into Qobjs and use them in declaring the instance of Lattice1d.
End of explanation
H = lattice_324.display_unit_cell(label_on = True)
T = lattice_324.display_lattice()
Explanation: cell_site_dof can take care of composite degrees of freedom such as orbitals, spins and/or excitations. For example, if each site has 4 orbitals and 2 spins, setting cell_site_dof = [4,2] defines that lattice. With the aid of the Lattice1d.basis() operator we can access particles localized at a specific cell, site, orbital and spin.
Valid inputs for cell_site_dof are a single int (e.g. 4) or a list of ints (e.g. [4,2]). A single dof can be entered either as an int or a list with that int. So cell_site_dof = [4] and cell_site_dof = 4 are the same.
End of explanation
H[1][2]
Explanation: The labels of the diagrams can be read off from the returned H and T Qobjs. For example, $H_{12}$ can be read off as follows.
End of explanation
lattice_3224 = Lattice1d(num_cell=3, boundary = "periodic", \
cell_num_site = 2, cell_site_dof = [4,2])
psi0 = lattice_3224.basis(1,0,[2,1])
print( psi0.dag() ) # Because plotting the dag() takes up less space
Explanation: Basis function: ket vector initialized at specific cell, site, dof:
The basis() function enables the user to initialize a ket vector at a specific cell,
site and dof.
End of explanation
lattice_412 = Lattice1d(num_cell=4, boundary = "periodic", cell_num_site = 1, cell_site_dof = [2])
lattice_412.x()
Explanation: Position Operator
Calling the position operator, x() returns an operator in matrix form that gives the
cell number for all the dofs present on the diagonal elements. The length of the unit
cell is always considered 1.
End of explanation
lattice_411 = Lattice1d(num_cell=4, boundary = "periodic", cell_num_site = 1, cell_site_dof = [1])
k = lattice_411.k()
print(k)
Explanation: The crystal momentum operator
The crystal momentum operator can be produced with the k() method.
End of explanation
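# A hedged check (added for illustration, not part of the original notebook): for a
# periodic lattice of N unit cells, the eigenvalues of k() should form a discrete set of
# allowed crystal momenta spaced by 2*pi/N; the exact branch convention (e.g. [0, 2*pi)
# versus (-pi, pi]) may differ between implementations.
print(np.round(np.sort(k.eigenenergies()), 4))
print('expected spacing 2*pi/N =', np.round(2 * np.pi / 4, 4))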
lattice_412 = Lattice1d(num_cell=4, boundary = "periodic", cell_num_site = 1, cell_site_dof = [2])
op = Qobj(np.array([[0,1],[1,0]]) )
op_sp = lattice_412.operator_at_cells(op, cells = [1,2])
op_all = lattice_412.distribute_operator(op)
print(op_sp)
print(op_all)
Explanation: Distribute A Operator
The operator_at_cells() function places a user-input operator on the cells specified
in a list and the identity operator on the rest.
The distribute_operator() function distributes it over all the cells indiscriminately.
End of explanation
boundary_condition = "periodic"
cells = 8
Periodic_Atom_Chain = Lattice1d(num_cell=cells, boundary = boundary_condition)
Hamt = Periodic_Atom_Chain.Hamiltonian()
print(Hamt)
Explanation: Hamiltonian:
The Hamiltonian() function returns the Hamiltonian for the lattice.
End of explanation
Periodic_Atom_Chain.plot_dispersion()
[knxA,val_kns] = Periodic_Atom_Chain.get_dispersion()
print(knxA)
print(val_kns)
Explanation: Dispersion Relation:
plot_dispersion() plots the valid (same as the number of unit cells) points in k-space
over the dispersion relation of an infinite crystal.
get_dispersion() returns the tuple of two np.ndarrays (knxA,val_kns). knxA has the valid k-values in it and val_kns has the band energies at those k-values. The length of
the unit cell is always set to 1.
End of explanation
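# A hedged cross-check (added for illustration, not part of the original notebook): for a
# single-band chain with a real nearest-neighbour hopping t (read here from an off-diagonal
# element of the Hamiltonian built above), the infinite-crystal dispersion is
# E(k) = 2*t*cos(k), so val_kns should lie on that curve at the allowed k points.
t_hop = Hamt.full()[0, 1]
print(np.round(np.asarray(val_kns).flatten()
               - 2 * np.real(t_hop) * np.cos(np.asarray(knxA).flatten()), 6))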
num_cellN = 51
discrete_space_periodic = Lattice1d(num_cell=num_cellN, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [1])
H0 = discrete_space_periodic.Hamiltonian()
xs = np.linspace(0, num_cellN-1, num_cellN)
sig = 3 # A standard deviation of 3
xm = num_cellN //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
tlist = np.linspace(0,24,801)
options = Options(atol=1e-12)
options.store_states = True
states_Gauss_0 = mesolve(H0, psi0, tlist, [], [], options=options)
t0 = 0
t1 = 180
t2 = 360
t3 = 540
t4 = 720
x_t0 = states_Gauss_0.states[t0]
x_t1 = states_Gauss_0.states[t1]
x_t2 = states_Gauss_0.states[t2]
x_t3 = states_Gauss_0.states[t3]
x_t4 = states_Gauss_0.states[t4]
plt.plot(xs, np.abs(x_t0))
plt.plot(xs, np.abs(x_t1))
plt.plot(xs, np.abs(x_t2))
plt.plot(xs, np.abs(x_t3))
plt.plot(xs, np.abs(x_t4))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4'])
plt.show()
plt.close()
Explanation: Unitary evolution of a Gaussian Wavepacket with mesolve
With no initial momentum
End of explanation
sig = 3
xm = num_cellN //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0) * np.exp(np.pi*1j*xs/3) )
k = discrete_space_periodic.k()
tlist = np.linspace(0,24,801)
options = Options(atol=1e-12)
options.store_states = True
states_Gauss_k = mesolve(H0, psi0, tlist, [], [k], options=options)
plt.plot(tlist, states_Gauss_k.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([np.pi/3.01, np.pi/2.99])
plt.show()
plt.close()
np.pi/3
Explanation: The wavepacket disperses over time keeping the periodic nature in space, since we picked a periodic boundary condition for space.
With initial momentum
We initialize a Gaussian state with initial crystal momentum $\pi/3$ and plot the momentum expectation value with the unitary Schrodinger evolution.
End of explanation
t0 = 0
t1 = 40
t2 = 80
t3 = 120
t4 = 160
x_t0 = states_Gauss_k.states[t0]
x_t1 = states_Gauss_k.states[t1]
x_t2 = states_Gauss_k.states[t2]
x_t3 = states_Gauss_k.states[t3]
x_t4 = states_Gauss_k.states[t4]
plt.plot(xs, np.abs(x_t0))
plt.plot(xs, np.abs(x_t1))
plt.plot(xs, np.abs(x_t2))
plt.plot(xs, np.abs(x_t3))
plt.plot(xs, np.abs(x_t4))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4'])
plt.show()
plt.close()
Explanation: The crystal momentum operator commutes with the Hamiltonian, so it is conserved in a Hamiltonian evolution, as expected.
End of explanation
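# A numerical check of the statement above (added for illustration, not part of the
# original notebook): for this translationally invariant periodic lattice the commutator
# of the Hamiltonian with the crystal momentum operator should vanish up to numerical
# precision.
print(commutator(H0, k).norm())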
sig = 3
xm = num_cellN //2 + 5
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0) * np.exp(np.pi*1j*xs/3) )
discrete_space_aperiodic = Lattice1d(num_cell=num_cellN, boundary = "aperiodic",
cell_num_site = 1, cell_site_dof = [1])
psiL = discrete_space_aperiodic.basis(0,0,[0])
psiR = discrete_space_aperiodic.basis(num_cellN-1,0,[0])
Ha = discrete_space_aperiodic.Hamiltonian()
H_p = 1e4*(psiL * psiL.dag() + psiR * psiR.dag() )
tlist = np.linspace(0,30,5001)
options = Options(atol=1e-12)
options.store_states = True
states_Gauss_k_HW = mesolve(Ha+H_p, psi0, tlist, [], [k], options=options)
# Warning: This calculation takes upto a minute
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_Gauss_k_HW.states[t0]
x_t1 = states_Gauss_k_HW.states[t1]
x_t2 = states_Gauss_k_HW.states[t2]
x_t3 = states_Gauss_k_HW.states[t3]
x_t4 = states_Gauss_k_HW.states[t4]
x_t5 = states_Gauss_k_HW.states[t5]
plt.plot(xs, np.abs(x_t0))
plt.plot(xs, np.abs(x_t1))
plt.plot(xs, np.abs(x_t2))
plt.plot(xs, np.abs(x_t3))
plt.plot(xs, np.abs(x_t4))
plt.plot(xs, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
plt.plot(tlist, states_Gauss_k_HW.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.show()
plt.close()
kd = discrete_space_aperiodic.k()
psi_f = states_Gauss_k_HW.states[3200]
kex0 = psi0.dag() * kd * psi0
kexf = psi_f.dag() * kd * psi_f
print('Initital momentum: ', kex0)
print('Final momentum: ', kexf)
Explanation: Due to the initial momentum, the wave-packet moves to the right while keeping its momentum, and it also disperses.
Simulating a hardwall
We can simulate a hard wall by putting a large potential on the first and last site of the lattice as well as changing the boundary condition to "aperiodic".
End of explanation
num_cellN = 51
discrete_space_periodic = Lattice1d(num_cell=num_cellN, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [1])
H0 = discrete_space_periodic.Hamiltonian()
xp = discrete_space_periodic.x()
kp = discrete_space_periodic.k()
xs = np.linspace(0, num_cellN-1, num_cellN)
sig = 3 # A standard deviation of 3
xm = num_cellN //2
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
lat_trR = np.diag(np.zeros(num_cellN-1)+1, -1)
lat_trR[0, num_cellN-1] = 1 # translate right
lat_trL = np.diag(np.zeros(num_cellN-1)+1, 1)
lat_trL[num_cellN-1, 0] = 1 # translate left
trR = Qobj(lat_trR)
trL = Qobj(lat_trL)
gamma = 2
col_op = [np.sqrt(gamma) * trR ]
tlistC = np.linspace(0,24,801)
options = Options(atol=1e-12)
options.store_states = True
rho0 = psi0 * psi0.dag()
states_Gauss_0 = mesolve(H0, rho0, tlistC, col_op, [kp], options=options)
plt.plot(tlistC, states_Gauss_0.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([-1e-8, 1e-8])
plt.show()
plt.close()
t0 = 0
t1 = 140
t2 = 280
t3 = 420
t4 = 560
diag_x0 = np.diag(states_Gauss_0.states[t0])
diag_x1 = np.diag(states_Gauss_0.states[t1])
diag_x2 = np.diag(states_Gauss_0.states[t2])
diag_x3 = np.diag(states_Gauss_0.states[t3])
diag_x4 = np.diag(states_Gauss_0.states[t4])
plt.plot(xs, np.abs(diag_x0))
plt.plot(xs, np.abs(diag_x1))
plt.plot(xs, np.abs(diag_x2))
plt.plot(xs, np.abs(diag_x3))
plt.plot(xs, np.abs(diag_x4))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.title('Nonunitary evolution')
plt.show()
plt.close()
Explanation: We confirm that the final momentum is indeed exactly the opposite of the initial momentum.
Dissipation induced translational motion
We can initialize a Gaussian state with no initial momentum and devise a dissipation-induced translational-motion scheme by forming a collapse operator that translates the wavepacket by one lattice point periodically.
End of explanation
cells = 4
cell_num_site = 1
cell_site_dof = [2]
J = 2
### For eta = 0
eta = 0
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [2], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T )
CROW_lattice.plot_dispersion()
### For eta = pi/4
eta = np.pi/4
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [2], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T )
CROW_lattice.plot_dispersion()
### For eta = pi/2
eta = np.pi/2
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [2], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T )
CROW_lattice.plot_dispersion()
Explanation: The wave-packet disperses and translates to the right, but the momentum expectation remains zero, since the translation is induced by the dissipation.
Example: A Coupled Resonator Optical Waveguide
We now demonstrate the basic functionality of QuTiP's Lattice1d class of the lattice module with the example of a Coupled Resonator Optical Waveguide (CROW) (ref. [2]).
\begin{eqnarray}
H_0 = \sum\limits_{n} \left(H_a + H_b + H_{ab} + H^{\dagger}_{ab} \right) \\
H_a = \frac{J}{2} a_n^{\dagger} \left( e^{-i\eta} a_{n-1} + e^{i\eta} a_{n+1} \right) \\
H_b = \frac{J}{2} b_n^{\dagger} \left( e^{i\eta} b_{n-1} + e^{-i\eta} b_{n+1} \right) \\
H_{ab} = J a_n^{\dagger} \left( \sin (\eta) b_n + \frac{1}{2} \left(b_{n-1} + b_{n+1} \right) \right)
\end{eqnarray}
For implementation with Lattice1d class, we resolve the Hamiltonian into unitcells.
\begin{equation}
H = \sum\limits_{n} H_n
\end{equation}
\begin{eqnarray}
H_{n}= \begin{bmatrix}
a_{n}^{\dagger} & b_{n}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & J \sin(\eta) \\
J \sin(\eta) & 0
\end{bmatrix}
\begin{bmatrix}
a_{n} \\
b_{n}
\end{bmatrix}
\ \ \ \ \ \ \ \ \ \ \ \\
+ \left( \begin{bmatrix}
a_{n}^{\dagger} & b_{n}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
e^{i\eta} & 1 \\
1 & e^{-i\eta}
\end{bmatrix}
\begin{bmatrix}
a_{n+1} \\
b_{n+1}
\end{bmatrix} + H.C. \right)
\end{eqnarray}
In the present case, we have 1 site in every unit cell and 2 dofs per site. And
\begin{equation}
\text{cell\_Hamiltonian} = \begin{bmatrix}
0 & J \sin(\eta) \\
J \sin(\eta) & 0
\end{bmatrix}
\end{equation}
\begin{equation}
\text{inter\_hop} = \begin{bmatrix}
e^{i\eta} & 1 \\
1 & e^{-i\eta}
\end{bmatrix}
\end{equation}
End of explanation
num_cell = 100
J = 2
eta = np.pi/2
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=num_cell, boundary = "periodic", cell_num_site = 2,
cell_site_dof = [1], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T)
HCROW = CROW_lattice.Hamiltonian()
kC = CROW_lattice.k()
nx = 1
ne = 2
positions = np.kron(range(nx), [1/nx for i in range(ne)])
S = np.kron(np.ones(num_cell), positions)
R = np.kron(range(0, num_cell), np.ones(nx*ne))
xA = R+S
sig = 3 # A standard deviation of 3
xm = num_cell //2
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xA-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
tlistW = np.linspace(0,30,5001)
options = Options(atol=1e-12)
options.store_states = True
states_CROW_u = mesolve(HCROW, psi0, tlistW, [], [kC], options=options)
plt.plot(tlistW, states_CROW_u.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([-1e-8, 1e-8])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_u.states[t0]
x_t1 = states_CROW_u.states[t1]
x_t2 = states_CROW_u.states[t2]
x_t3 = states_CROW_u.states[t3]
x_t4 = states_CROW_u.states[t4]
x_t5 = states_CROW_u.states[t5]
plt.plot(xA, np.abs(x_t0))
plt.plot(xA, np.abs(x_t1))
plt.plot(xA, np.abs(x_t2))
plt.plot(xA, np.abs(x_t3))
plt.plot(xA, np.abs(x_t4))
plt.plot(xA, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
Explanation: The three dispersion relationships for the three values of $\eta$ can be compared with the published results in Ref [2].
Unitary dynamics example
No initial momentum
End of explanation
sig = 3
xm = num_cell //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xA-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0) * np.exp(1*np.pi*1j*xA/3) )
tlistCk = np.linspace(0,30,5001)
options = Options(atol=1e-12)
options.store_states = True
states_CROW_uk = mesolve(HCROW, psi0, tlistCk, [], [kC], options=options)
plt.plot(tlistCk, states_CROW_uk.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([1.046, 1.048])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_u.states[t0]
x_t1 = states_CROW_u.states[t1]
x_t2 = states_CROW_u.states[t2]
x_t3 = states_CROW_u.states[t3]
x_t4 = states_CROW_u.states[t4]
x_t5 = states_CROW_u.states[t5]
plt.plot(xA, np.abs(x_t0))
plt.plot(xA, np.abs(x_t1))
plt.plot(xA, np.abs(x_t2))
plt.plot(xA, np.abs(x_t3))
plt.plot(xA, np.abs(x_t4))
plt.plot(xA, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_u.states[t0]
x_t1 = states_CROW_u.states[t1]
x_t2 = states_CROW_u.states[t2]
x_t3 = states_CROW_u.states[t3]
x_t4 = states_CROW_u.states[t4]
x_t5 = states_CROW_u.states[t5]
plt.plot(xA[range(0,200,2)], np.abs(x_t0.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t1.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t2.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t3.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t4.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t5.full()[range(0,200,2)]))
plt.xlabel('space(left sublattice)', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_u.states[t0]
x_t1 = states_CROW_u.states[t1]
x_t2 = states_CROW_u.states[t2]
x_t3 = states_CROW_u.states[t3]
x_t4 = states_CROW_u.states[t4]
x_t5 = states_CROW_u.states[t5]
plt.plot(xA[range(1,200,2)], np.abs(x_t0.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t1.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t2.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t3.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t4.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t5.full()[range(1,200,2)]))
plt.xlabel('space(right sublattice)', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
Explanation: With Initial momentum
End of explanation
cells = 100
nx = 2
ne = 1
positions = np.kron(range(nx), [1/nx for i in range(ne)])
S = np.kron(np.ones(cells), positions)
R = np.kron(range(0, cells), np.ones(nx*ne))
xA = R+S
eta = np.pi/2
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 2,
cell_site_dof = [1], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T)
HCROW = CROW_lattice.Hamiltonian()
kC = CROW_lattice.k()
lat_trR = np.diag(np.zeros(cells-1)+1, -1)
lat_trR[0, cells-1] = 1 # translate to the right
lat_trL = np.diag(np.zeros(cells-1)+1, 1)
lat_trL[cells-1, 0] = 1 # translate to the left
trR = Qobj(lat_trR)
trL = Qobj(lat_trL)
gamma = 0.5
col_op = [np.sqrt(gamma) * tensor(trL, qeye(2)) ] # We could have used trR for translation to the right
sig = 3
xm = cells //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xA-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
tlistCN = np.linspace(0,30,601)
options = Options(atol=1e-12)
options.store_states = True
states_CROW_nu = mesolve(HCROW, psi0, tlistCN, col_op, [kC], options=options)
plt.plot(tlistCN, states_CROW_nu.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([-1e-8, 1e-8])
plt.show()
plt.close()
t0 = 0
t1 = 100
t2 = 200
t3 = 300
t4 = 400
t5 = 500
x_t0 = np.diag(states_CROW_nu.states[t0])
x_t1 = np.diag(states_CROW_nu.states[t1])
x_t2 = np.diag(states_CROW_nu.states[t2])
x_t3 = np.diag(states_CROW_nu.states[t3])
x_t4 = np.diag(states_CROW_nu.states[t4])
x_t5 = np.diag(states_CROW_nu.states[t5])
plt.plot(xA, np.abs(x_t0))
plt.plot(xA, np.abs(x_t1))
plt.plot(xA, np.abs(x_t2))
plt.plot(xA, np.abs(x_t3))
plt.plot(xA, np.abs(x_t4))
plt.plot(xA, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
Explanation: translation by dissipation
End of explanation
qutip.about()
qutip.cite()
Explanation: References
[1] J. R. Johansson, P. D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012). http://qutip.org
[2] Han, JungYun, Clemens Gneiting, and Daniel Leykam. "Helical transport in coupled resonator waveguides." Physical Review B 99.22 (2019): 224201.
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.224201#
End of explanation
<END_TASK> |
15,816 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Linear regression
Instead of
Step1: The matrix $A$ for linear regression is
Step2: The coefficients will be exactly
Step3: Optimization with an iterative method
We do not assume that we have a linear regression problem.
We use
Step4: TensorFlow - gradient descent | <ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 1000
display_step = 50
train_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]
Explanation: Linear regression
Instead of
$$ A x = b$$
we solve
$$ A^T A x = A^T b.$$
In other words, we require the residual
$$ r = b - A x$$
to lie in the left null space of the operator A:
$$A^T r = A^T( b - A x) = A^T b - A^T A x = 0,$$
$$ A^T A x = A^T b.$$
One can also require that the gradient of the squared deviation vanishes:
$$ \frac{\partial}{\partial x_k} (A_{ij} x_j - b_i) (A_{il} x_l - b_i) = 0$$
$$ A_{ij} \delta_{jk} (A_{il} x_l - b_i) + A_{il} \delta_{lk} (A_{ij} x_j - b_i) =0 $$
$$ A^TAx - A^T b + A^TAx - A^T b =0 $$
$$ A^TAx - A^T b =0 $$
End of explanation
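# A quick numerical cross-check (added for illustration, not part of the original
# notebook): the least-squares solution returned by np.linalg.lstsq should coincide with
# the solution of the normal equations A^T A x = A^T b derived above.
A_demo = np.vstack([np.ones_like(train_X), train_X]).T
x_normal = np.linalg.solve(A_demo.T @ A_demo, A_demo.T @ train_Y)
x_lstsq, *_ = np.linalg.lstsq(A_demo, train_Y, rcond=None)
print(x_normal, x_lstsq)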
import numpy as np
M = np.vstack([np.ones_like(train_X),train_X]).T
M
print (np.dot(M.T,M))
print(np.dot(M.T,train_Y))
Explanation: The matrix $A$ for linear regression is:
End of explanation
c = np.linalg.solve(np.dot(M.T,M),np.dot(M.T,train_Y))
c
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.plot(train_X, c[1] * train_X + c[0], label='Fitted line')
plt.legend()
plt.close()
Explanation: The coefficients will be exactly:
End of explanation
from scipy.optimize import minimize
def cost(c,x=train_X,y=train_Y):
return sum( (c[0]+x_*c[1]-y_)**2 for (x_,y_) in zip(x,y) )
cost([1,2])
res = minimize(cost, [1,1], method='nelder-mead', options={'xtol': 1e-8, 'disp': True})
res.x
x = np.linspace(-2,2,77)
y = np.linspace(-2,2,77)
X,Y = np.meshgrid(x,y)
cost([X,Y]).shape
plt.contourf( X,Y,np.log(cost([X,Y])),cmap='gray')
plt.plot(res.x[0],res.x[1],'o')
np.min(cost([X,Y]))
px=[]
py=[]
for i in range(20):
res = minimize(cost, [1,1], options={ 'maxiter':i})
px.append(res.x[0])
py.append(res.x[1])
print(res.x)
plt.plot(px,py,'ro-')
import sympy
from sympy.abc import x,y
sympy.init_printing(use_latex='mathjax')
f_symb = cost([x,y]).expand()
f_symb.diff(x)
F = sympy.lambdify((x,y),f_symb,np)
Fx = sympy.lambdify((x,y),f_symb.diff(x),np)
Fy = sympy.lambdify((x,y),f_symb.diff(y),np)
F(1,1),cost([1,1])
x0,y0 = -1,1
h = 0.01/(2*17)
for i in range(500):
plt.plot(x0,y0,'go')
#print(i,x0,y0)
    gx, gy = Fx(x0, y0), Fy(x0, y0)  # evaluate both partial derivatives before updating either coordinate
    x0 += -h * gx
    y0 += -h * gy
Explanation: Optimization with an iterative method
We do not assume that we have a linear regression problem.
We use: https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/optimize.html
End of explanation
# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(1.0, name="weight")
b = tf.Variable(1.0, name="bias")
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# TEST
with tf.Session() as sess:
sess.run(init)
sess.run(tf.assign(W,1.0))
sess.run(tf.assign(b,2.0))
print(sess.run(b),sess.run(cost, feed_dict={X: train_X, Y: train_Y}))
# Launch the graph
x_tf_lst = []
y_tf_lst = []
with tf.Session() as sess:
sess.run(init)
# Fit all training data
for epoch in range(training_epochs):
for (x, y) in zip(train_X, train_Y):
sess.run(optimizer, feed_dict={X: x, Y: y})
#Display logs per epoch step
if (epoch+1) % display_step == 0:
c = sess.run(cost, feed_dict={X: train_X, Y:train_Y})
print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c), \
"W=", sess.run(W), "b=", sess.run(b))
x_tf_lst.append(sess.run(b))
y_tf_lst.append(sess.run(W))
training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
print ("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')
plt.plot(x_tf_lst,y_tf_lst,'yo')
Explanation: TensorFlow - gradient descent
End of explanation
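# A final sanity check (added for illustration, not part of the original notebook): the
# gradient-descent estimates "W" and "b" printed above should approach the exact
# normal-equation coefficients computed earlier, where c[1] is the slope and c[0] the
# intercept.
print('exact slope = %.4f, exact intercept = %.4f' % (c[1], c[0]))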
<END_TASK> |
15,817 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
SciPy
SciPy is a package that includes a collection of mathematical algorithms and functions built on top of the NumPy package.
Aimed at the same kind of users as applications such as MATLAB, GNU Octave, and Scilab, SciPy contains several modules, among them
Step1: File input/output
Step2: That easy: we have just "exported" the values of the variable 'a' to a file in *.mat format.
Let's see if we can locate it
Step3: Now imagine the opposite situation. How do we import MATLAB data files into Python
Step4: We can list the imported variables
Step5: There are other ways to load data
Step6: Well, it seems that for these samples the histograms do not give us clear information either. Assuming the samples are representative, we can run Student's t-test to see whether there are significant differences between them.
Step7: We can see that $\frac{p}{2} > 0.05 $, so we cannot reject the null hypothesis (there are no significant differences between them).
Exercise
Step8: Now that we have run our simulations, we can compute our $p$ value, which is simply the proportion of simulations that resulted in a difference greater than or equal to 6.6 (the original difference)
Step9: $$ p = \frac{N_{>6.6}}{N_{total}} = \frac{1512}{10000} = 0.15 $$
Step10: This scipy.stats tutorial shows more examples we can work through. For the moment, let's keep exploring SciPy. We will come back to more statistical work when we get to pandas,
Interpolation
Step11: To interpolate we will use SciPy's interpolate package
Step12: To create an interpolating function we will use the InterpolatedUnivariateSpline object from the interpolate package. We only have to pass it the interpolation points and the degree, and it will generate a spline.
Step13: How do I get the points from here? The result we obtained is a function, and it takes $x$ as an argument.
Step14: Let's plot this function together with the interpolation points. Note that, now that we have an interpolating function, we can evaluate it over a whole domain
Step15: Now go back and check what happens if you change the degree of the spline. Take a look at all the options SciPy offers for interpolating data.
Fitting
Fitting works in a completely different way from interpolation
Step16: We will generate some data to see how it would work, of the form
Step17: Let's now use the polynomial.polyfit function, which takes the interpolation points and the degree of the polynomial. The result will be its coefficients, in order of increasing powers.
Step18: Very similar to what we expected! To evaluate a polynomial with these coefficients, we either build the function ourselves or use the polynomial.polyval function
Step19: If the function we want to fit is more complex, we will need to fit the data to a curve using an optimization algorithm.
Step20: Let's generate the data once more, adding a bit of noise. Can you already read functions written with NumPy?
Step21: Since in this synthetic example we know the exact values, we can see that there is some deviation from the original values due to the added noise.
It is usually better to help the solver by defining some bounds
Step22: Let's look at the results in a plot
Step23: Another way to fit a function to experimental data is by minimizing the least-squares error. To make this example more interesting, let's also add some outliers. This example is taken from the SciPy Cookbook, robust regression
Step24: Once the model is created, we are ready to generate the data.
Step25: The function that computes the residuals can be defined as
Step26: We now have everything we need to perform the least-squares fit
Step27: Let's look at the results
Step28: The optimize package includes a multitude of methods for optimization, curve fitting and root finding. The help for this package is quite long (you can also check it at http
Step29: The simplest thing in these cases is to apply a moving-window filter
Step30: As we can see, if the sampling frequency is not very high or the size of our window is not adequate, the result may not be satisfactory.
The Savitzky-Golay filter is usually interesting in these cases.
Step31: If we are interested in obtaining a sinusoidal signal, and given that the noise occurs at a higher frequency, another option is to build a low-pass filter.
Step32: Finally, if the signal has a drift we can easily correct it with | <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: SciPy
SciPy is a package that includes a collection of mathematical algorithms and functions built on top of the NumPy package.
Aimed at the same kind of users as applications such as MATLAB, GNU Octave, and Scilab, SciPy contains several modules, among them:
Special functions (scipy.special)
Integration (scipy.integrate)
Optimization (scipy.optimize)
Interpolation (scipy.interpolate)
Fourier transforms (scipy.fftpack)
Signal processing (scipy.signal)
Linear algebra (scipy.linalg)
Eigenvalue problems with sparse matrices (ARPACK)
Graphs (scipy.sparse.csgraph)
Spatial analysis (scipy.spatial)
Statistics (scipy.stats)
Multidimensional image processing (scipy.ndimage)
File input and output (scipy.io)
In this lesson we will briefly review some of the functionality most relevant for data analysis. But first, let's load the NumPy and matplotlib libraries that we already know.
End of explanation
from scipy import io as spio
a = np.ones((3, 3)) # create a 3x3 matrix
spio.savemat('archivo.mat', # file name
             {'a': a}) # assign and reference the name with a dictionary
Explanation: File input/output: scipy.io
This SciPy module allows reading and writing data files in a wide variety of formats (see the documentation).
For example, to work with MATLAB data files (*.mat):
End of explanation
%ls *.mat
Explanation: That easy: we have just "exported" the values of the variable 'a' to a file in *.mat format.
Let's see if we can locate it:
End of explanation
data = spio.loadmat('archivo.mat')
data['a']
Explanation: Now imagine the opposite situation. How do we import MATLAB data files into Python
End of explanation
spio.whosmat('archivo.mat')
Explanation: We can list the imported variables:
End of explanation
a = np.array([84, 72, 57, 46, 63, 76, 99, 91])
b = np.array([81, 69, 74, 61, 56, 87, 69, 65, 66, 44, 62, 69])
plt.hist(b, bins=5, alpha=0.5)
plt.hist(a, bins=5, alpha=0.5)
plt.plot(a,np.zeros(len(a)),'^')
plt.plot(b,np.zeros(len(b)),'^')
plt.title("Histogram")
plt.show()
print("La media de 'a' es {0:.1f}, y su desviaciรณn estรกndar es {1:.1f}".format(a.mean(), a.std()))
print("La media de 'b' es {0:.1f}, y su desviaciรณn estรกndar es {0:.1f}".format(b.mean(), b.std()))
print("La diferencia entre las medias es de {0:.1f}".format(a.mean()- b.mean()))
Explanation: There are other ways to load data:
* Data from text files: numpy.loadtxt() | numpy.savetxt()
* Data from incomplete text files: numpy.genfromtxt() | numpy.recfromcsv()
* Data in NumPy binary format (efficient): numpy.save() | numpy.load
* pandas.read_csv: which we will cover in detail later
Statistics: scipy.stats
This module contains a large number of probability distributions, both continuous and discrete, as well as a growing number of statistical functions.
Let's look at a simple example taken from Jake VanderPlas and his talk (Statistics for Hackers). We want to know whether two sets of samples are different (a and b).
End of explanation
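# A minimal sketch of the plain-text route mentioned above (added for illustration, not
# part of the original notebook): np.savetxt / np.loadtxt give a quick round trip for
# simple arrays; 'datos.txt' is an arbitrary file name.
txt_demo = np.random.rand(4, 3)
np.savetxt('datos.txt', txt_demo, delimiter=',', header='col1,col2,col3')
recovered_demo = np.loadtxt('datos.txt', delimiter=',')
print(np.allclose(txt_demo, recovered_demo))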
from scipy import stats
stats.ttest_ind(a,b)
Explanation: Well, it seems that for these samples the histograms do not give us clear information either. Assuming the samples are representative, we can run Student's t-test to see whether there are significant differences between them.
End of explanation
samples = np.concatenate([a,b])
num_simulations = 10000
differences = np.zeros(num_simulations)
for i in range(num_simulations):
np.random.shuffle(samples)
a_new = samples[0:len(a)]
b_new = samples[len(a):len(samples)]
a_mean = a_new.mean()
b_mean = b_new.mean()
differences[i]= (a_mean-b_mean)
Explanation: We can see that $\frac{p}{2} > 0.05 $, so we cannot reject the null hypothesis (there are no significant differences between them).
Exercise:
If the samples are not different, it should not matter if we shuffle the values at random, right?
Let's check it!
End of explanation
p = np.sum(differences>(a.mean()-b.mean()))/num_simulations
p
Explanation: Now that we have run our simulations, we can compute our $p$ value, which is simply the proportion of simulations that resulted in a difference greater than or equal to 6.6 (the original difference)
End of explanation
plt.hist(differences, bins=50)
plt.axvline((a.mean()-b.mean()),color='r')
plt.xlabel('mean difference')
plt.ylabel('number')
Explanation: $$ p = \frac{N_{>6.6}}{N_{total}} = \frac{1512}{10000} = 0.15 $$
End of explanation
x_i = [0.0, 0.9, 1.8, 2.7, 3.6, 4.4, 5.3, 6.2, 7.1, 8.0]
y_i = [0.0, 0.8, 1.0, 0.5, -0.4, -1.0, -0.8, -0.1, 0.7, 1.0]
plt.plot(x_i, y_i, 'x', mew=2)
Explanation: This scipy.stats tutorial shows more examples we can work through. For the moment, let's keep exploring SciPy. We will come back to more statistical work when we get to pandas,
Interpolation: scipy.interpolate
One of the fundamental tasks we face daily in data analysis is interpolation.
Suppose we have a series of points representing the data from some experiment. For simplicity, we will generate some sample points of a $\sin{x}$ function to explain how it works.
End of explanation
from scipy import interpolate
Explanation: To interpolate we will use SciPy's interpolate package:
End of explanation
f_interp = interpolate.InterpolatedUnivariateSpline(x_i, y_i, k=1)
f_interp
Explanation: To create an interpolating function we will use the InterpolatedUnivariateSpline object from the interpolate package. We only have to pass it the interpolation points and the degree, and it will generate a spline.
End of explanation
f_interp(np.pi / 2)
Explanation: How do I get the points from here? The result we obtained is a function, and it takes $x$ as an argument.
End of explanation
x = np.linspace(0, 8)
y_interp = f_interp(x)
plt.plot(x_i, y_i, 'x', mew=2)
plt.plot(x, y_interp)
Explanation: Let's plot this function together with the interpolation points. Note that, now that we have an interpolating function, we can evaluate it over a whole domain:
End of explanation
from numpy.polynomial import polynomial
Explanation: Now go back and check what happens if you change the degree of the spline. Take a look at all the options SciPy offers for interpolating data.
Fitting
Fitting works in a completely different way from interpolation: we will obtain a curve that does not pass through any of the original points, but in exchange it will have a simple analytical expression that we know a priori.
Let's see a simple example: to perform polynomial fits we will use the np.polynomial.polynomial package (yes, it appears twice).
End of explanation
x_i = np.linspace(-2, 3, num=10)
y_i = x_i ** 2 - x_i + 1 + 0.5 * np.random.randn(10)
plt.plot(x_i, y_i, 'x', mew=2)
Explanation: We will generate some data to see how this works, of the form:
$$y(x) = x^2 - x + 1 + \text{noise}$$
End of explanation
a, b, c = polynomial.polyfit(x_i, y_i, deg=2)
a, b, c
Explanation: Now let's use the polynomial.polyfit function, which takes the data points and the degree of the polynomial. The result is the polynomial's coefficients, in order of increasing powers.
End of explanation
x = np.linspace(-2, 3)
#y_fit = a + b * x + c * x ** 2
y_fit = polynomial.polyval(x, (a, b, c))
l, = plt.plot(x, y_fit)
plt.plot(x_i, y_i, 'x', mew=2, c=l.get_color())
Explanation: Very close to what we expected! To evaluate a polynomial with these coefficients, we can either build the function ourselves or use the polynomial.polyval function:
End of explanation
from scipy.optimize import curve_fit
Explanation: If the function we want to fit is more complex, we will need to fit the data to a curve using an optimization algorithm.
End of explanation
def func(x, a, b, c):
return a * np.exp(-b * x) + c
a, b, c = 2.5, 1.3, 0.5
xdata = np.linspace(0, 4, 50)
y = func(xdata, a, b, c)
y_noise = 1.5 * np.random.normal(size=xdata.size)
ydata = y + y_noise
plt.plot(xdata, ydata, 'x',mew=2, label='exp. data')
plt.plot(xdata, func(xdata, a, b, c), '-', label='true function')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
popt, pcov = curve_fit(func, xdata, ydata)
popt
Explanation: Let's generate the data once more, adding a bit of noise. Can you already read functions written with NumPy?
End of explanation
popt_bounds, pcov_bounds = curve_fit(func, xdata, ydata,
bounds=([1, 1, 0], [3., 2., 1.]))
popt_bounds
Explanation: Since in this synthetic example we know the exact values, we can see that the fitted parameters deviate from the original values because of the added noise.
It is usually better to help the solver by defining some bounds:
End of explanation
plt.plot(xdata, ydata, 'x',mew=2, label='exp. data')
plt.plot(xdata, func(xdata, a, b, c), '-', label='true function')
plt.plot(xdata, func(xdata, *popt), 'r-', label='fit')
plt.plot(xdata, func(xdata, *popt_bounds), 'g--', label='fit-with-bounds')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
Explanation: Let's see the results in a plot:
End of explanation
def generate_data(t, A, sigma, omega, noise=0, n_outliers=0, random_state=0):
y = A * np.exp(-sigma*t) * np.sin(omega*t)
rnd = np.random.RandomState(random_state)
error = noise * rnd.randn(t.size)
outliers = rnd.randint(0, t.size, n_outliers)
error[outliers] = error[outliers] * 35
return y + error
# Model parameters
A = 2
sigma = 0.1
omega = 0.1 * 2 * np.pi
x_true = np.array([A, sigma, omega])
noise = 0.1
t_min = 0
t_max = 30
Explanation: Another way to fit a function to experimental data is by minimizing the error with least squares. To make this example more interesting, let's also add some outliers. This example is taken from the SciPy Cookbook, robust regression
End of explanation
t= np.linspace(t_min, t_max, 30)
y_exp = generate_data(t, A, sigma, omega, noise=noise, n_outliers=4)
y_true = generate_data(t, A, sigma, omega) # why don't I need to pass anything else here?
plt.plot(t, y_exp, 'o', label='exp data')
plt.plot(t, y_true, label='true')
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: Once the model is created, we are ready to generate the data.
End of explanation
def fun_res(x, t, y):
A, sigma, omega = x # parameters
return (A * np.exp(-sigma * t) * np.sin(omega * t)) - y
x0 = np.ones(3) # initial values of A, sigma and omega
Explanation: The function that computes the residuals can be defined as:
End of explanation
from scipy.optimize import least_squares
res_lsq = least_squares(fun_res, x0, args=(t, y_exp))
res_lsq
res_robust = least_squares(fun_res, x0,
loss='soft_l1', # L1-like norm (more robust to outliers)
f_scale=0.1, # restricts the errors
args=(t, y_exp))
res_robust
Explanation: We now have everything we need to perform the least-squares fit
End of explanation
y_lsq = generate_data(t, *res_lsq.x)
y_robust = generate_data(t, *res_robust.x)
plt.plot(t, y_exp, 'o', label='exp data')
plt.plot(t, y_true, label='true')
plt.plot(t, y_lsq, label='lsq')
plt.plot(t, y_robust, label='robust lsq')
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: Let's look at the results:
End of explanation
N = 100 # number of samples
T = 1./N # sample spacing
t = np.linspace(-1, N*T, N)
y = (np.sin(
2*np.pi*0.75*t*(1-t) + 2.1) +
0.1*np.sin(2*np.pi*1.25*t + 1) +
0.18*np.cos(2*np.pi*3.85*t)
)
t_exp = (t + 1)
y_exp = y + np.random.randn(len(t)) * 0.30 # noise
plt.plot(t_exp, y_exp, label='exp data', alpha=0.75)
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: The optimize package includes a multitude of methods for optimization, curve fitting and root finding. The help for this package is quite long (you can also check it at http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html).
Signal processing (scipy.signal)
Let's face it: real data almost always comes with some noise attached, and we often don't have an a priori model to fit it to. In this situation, different techniques are usually used to filter the signal.
First we will generate a noisy signal:
End of explanation
from scipy.signal import medfilt
n_elements = 11 # number of elements over which the filter is applied
y_exp_filt = medfilt(y_exp, n_elements)
plt.plot(t_exp, y_exp, label='exp data', alpha=0.55)
plt.plot(t_exp, y_exp_filt, label='filt. (median)')
plt.plot(t_exp, y, '-k', label='true', )
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: The simplest approach in these cases is to apply a moving-window filter:
End of explanation
from scipy.signal import savgol_filter
n_elements = 11 # number of elements over which the filter is applied
polyorder = 3
y_exp_filt = savgol_filter(y_exp, n_elements, polyorder)
plt.plot(t_exp, y_exp, label='exp data', alpha=0.55)
plt.plot(t_exp, y_exp_filt, label='filt. (Savitzky-Golay)')
plt.plot(t_exp, y, '-k', label='true', )
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: As we can see, if the sampling frequency is not very high or our window size is not appropriate, the result may be unsatisfactory.
The Savitzky-Golay filter is usually a good option in these cases.
End of explanation
from scipy import signal
filt_order = 3 # Filter order
Wn = 0.2 # cutoff frequency [rad·s^-1]
# Create the filter
b, a = signal.butter(filt_order, Wn, btype='low')
w, h = signal.freqs(b, a)
# Second, apply the filter
y_exp_lowpass = signal.filtfilt(b,a, y_exp)
plt.plot(t_exp, y_exp, label='exp data', alpha=0.55)
plt.plot(t_exp, y_exp_lowpass, label='filt. (low pass)')
plt.plot(t_exp, y, '-k', label='true', )
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: If we are interested in recovering a sinusoidal signal, and given that the noise occurs at a higher frequency, another option is to build a low-pass filter.
End of explanation
from scipy import signal
N = 100
t0 = 0
tf = 100
m = 1
b = 0
t = np.linspace(t0, tf, N)
y = m*t+b
y_exp = y + 100* np.random.rand(N)
plt.plot(t, y_exp, linewidth=2, label='exp data')
plt.plot(t, signal.detrend(y_exp), linewidth=2, label='drift correction')
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
Explanation: Finally, if the signal has a drift, we can easily correct it with:
End of explanation
<END_TASK> |
15,818 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Oregon Curriculum Network <br />
Discovering Math with Python
Crystal Ball Sequence
The face-centered cubic (FCC) lattice is not always presented in this simplest form, ditto the cubic close packing (CCP), which amounts to the same thing. A nuclear ball is surrounded by a layer of twelve, all touching it, and adjacent neighbors. The shape so formed is not a cube, but a cuboctahedron, with eight triangular faces and six square ones.
As the cuboctahedral packing continues to expand outward, layer by layer, the cumulative number of balls or points forms the Crystal Ball Sequence.
cubocta(), a generator, yields the number of balls in each successive layer of the cuboctahedron, according to a simple formula derived by R. Buckminster Fuller, a prolific inventor and philosopher [1]. cummulative( ) delegates to cubocta( ) while accumulating the number in each layer to provide a running total.
Step3: Octet Truss
When adjacent CCP ball centers interconnect, what do you get? Why the octet truss of course, a well known space frame, used a lot in architecture. Alexander Graham Bell was fascinated by this construction.[2]
<a data-flickr-embed="true" href="https
Step4: Each number in Pascal's Triangle may be understood as the number of unique pathways to that position, were falling balls introduced through the top and allowed to fall left or right to the next row down. This apparatus is sometimes called a Galton Board.
For example, a ball could reach the 6 in the middle of the 5th row going 1,1,2,3,6 in four ways (counting left and right mirrors), or 1,1,1,3,6 in two ways. The likely pattern when many balls fall through this maze will be a bell curve, as shown in the simulation below. | <ASSISTANT_TASK:>
Python Code:
from itertools import accumulate, islice
def cubocta():
Classic Generator: Cuboctahedral / Icosahedral #s
https://oeis.org/A005901
yield 1 # nuclear ball
f = 1
while True:
elem = 10 * f * f + 2 # f for frequency
yield elem # <--- pause / resume here
f += 1
def cummulative(n):
https://oeis.org/A005902 (crystal ball sequence)
yield from islice(accumulate(cubocta()),0,n)
print("{:=^30}".format(" Crystal Ball Sequence "))
print("{:^10} {:^10}".format("Layers", "Points"))
for f, out in enumerate(cummulative(30),start=1):
print("{:>10} {:>10}".format(f, out))
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
Crystal Ball Sequence
The face-centered cubic (FCC) lattice is not always presented in this simplest form, ditto the cubic close packing (CCP), which amounts to the same thing. A nuclear ball is surrounded by a layer of twelve, all touching it, and adjacent neighbors. The shape so formed is not a cube, but a cuboctahedron, with eight triangular faces and six square ones.
As the cuboctahedral packing continues to expand outward, layer by layer, the cumulative number of balls or points forms the Crystal Ball Sequence.
cubocta(), a generator, yields the number of balls in each successive layer of the cuboctahedron, according to a simple formula derived by R. Buckminster Fuller, a prolific inventor and philosopher [1]. cummulative( ) delegates to cubocta( ) while accumulating the number in each layer to provide a running total.
End of explanation
from itertools import islice
def pascal():
row = [1]
while True:
yield row
row = [i+j for i,j in zip([0]+row, row+[0])]
print("{0:=^60}".format(" Pascal's Triangle "))
print()
for r in islice(pascal(),0,11):
print("{:^60}".format("".join(map(lambda n: "{:>5}".format(n), r))))
Explanation: Octet Truss
When adjacent CCP ball centers interconnect, what do you get? Why the octet truss of course, a well known space frame, used a lot in architecture. Alexander Graham Bell was fascinated by this construction.[2]
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/23636692173/in/album-72157664250599655/" title="Business Accelerator Building"><img src="https://farm2.staticflickr.com/1584/23636692173_103b411737.jpg" width="500" height="375" alt="Business Accelerator Building"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
[1] Siobahn Roberts. King of Infinite Space. New York: Walker & Company (2006). pp 179-180.
"Coxeter sent back a letter saying that one equation would be 'a remarkable discovery, justifying Bucky's evident pride,' if only it weren't too good to be true. The next day, Coxeter called: 'On further reflection, I see that it is true'. Coxeter told Fuller how impressed he was with his formula -- on the cubic close-packing of balls."
[2] http://worldgame.blogspot.com/2006/02/octet-truss.html (additional info on the octet truss)
Pascal's Triangle
Pascal's Triangle connects to the Binomial Theorem (originally proved by Sir Isaac Newton) and to numerous topics in probability theory. The triangular and tetrahedral number sequences may be discovered lurking in its columns.
pascal(), a generator, yields successive rows of Pascal's Triangle. By prepending and appending a zero element and adding vertically, a next row is obtained. For example, from [1] we get [0, 1] + [1, 0] = [1, 1]. From [1, 1] we get [0, 1, 1] + [1, 1, 0] = [1, 2, 1] and so on.
Notice the triangular numbers (1, 3, 6, 10...) and tetrahedral number sequences (1, 4, 10, 20...) appear in the slanted columns. [3]
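As a quick, illustrative check (a sketch added here, not part of the original lesson; the helper name and the zero-based column convention are assumptions), those slanted columns can be read straight off the pascal() generator and islice used in this section:
def pascal_column(col, n_rows=12):
    # entry [col] of every row that is long enough to have one
    return [row[col] for row in islice(pascal(), col, n_rows)]
print(pascal_column(2))  # triangular numbers: 1, 3, 6, 10, ...
print(pascal_column(3))  # tetrahedral numbers: 1, 4, 10, 20, ...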
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("9xUBhhM4vbM")
Explanation: Each number in Pascal's Triangle may be understood as the number of unique pathways to that position, were falling balls introduced through the top and allowed to fall left or right to the next row down. This apparatus is sometimes called a Galton Board.
For example, a ball could reach the 6 in the middle of the 5th row going 1,1,2,3,6 in four ways (counting left and right mirrors), or 1,1,1,3,6 in two ways. The likely pattern when many balls fall through this maze will be a bell curve, as shown in the simulation below.
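A minimal sketch of that simulation (added for illustration only; it assumes numpy is acceptable here) treats each ball's path as a run of left/right coin flips and tallies the landing columns:
import numpy as np
n_balls, n_rows = 10000, 10
# the landing column equals the number of "right" choices, which is binomially distributed
positions = np.random.binomial(n_rows, 0.5, size=n_balls)
print(np.bincount(positions, minlength=n_rows + 1))  # roughly bell-shaped counts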
End of explanation
<END_TASK> |
15,819 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to
Step1: Problem Statement
Step3: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal
Step4: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
Step5: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
Step7: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from
Step9: Expected Output
Step10: Expected Output
Step11: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
Step13: Observations
Step15: Expected Output
Step16: Expected Output
Step17: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary. | <ASSISTANT_TASK:>
Python Code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to: Use regularization in your deep learning models.
Let's first import the packages you are going to use.
End of explanation
train_X, train_Y, test_X, test_Y = load_2D_dataset()
Explanation: Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
End of explanation
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
- in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement:
- L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
- Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
End of explanation
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
End of explanation
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
End of explanation
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
### END CODER HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
Explanation: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{L}\right) + (1-y^{(i)})\log\left(1- a^{L}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
End of explanation
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
End of explanation
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The model() function will call:
- compute_cost_with_regularization instead of compute_cost
- backward_propagation_with_regularization instead of backward_propagation
End of explanation
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
End of explanation
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1<keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2<keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
Explanation: Observations:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
What you should remember -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep_prob$ or keep it with probability $keep_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
Instructions:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: X = (X < 0.5). Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
End of explanation
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (approx. 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (approx. 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
3.2 - Backward propagation with dropout
Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
Instruction:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
End of explanation
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (keep_prob = 0.86). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function model() will now call:
- forward_propagation_with_dropout instead of forward_propagation.
- backward_propagation_with_dropout instead of backward_propagation.
End of explanation
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
End of explanation
<END_TASK> |
15,820 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
CSAL4243
Step1: Plot data
Step2: Find a line that best fit the data
Step3: Lets assume $\theta_0 = 0$ and $\theta_1=0$
Model h(x) = $\theta_0$ + $\theta_1$x = 0
Cost function J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\frac{1}{2m}\sum_{i=1}^{m} (0 - y^i)^2$
Step4: plot it
Step5: Plot $\theta1$ vs Cost
Step6: Lets assume $\theta_0 = 0$ and $\theta_1=1$
Model h(x) = $\theta_0$ + $\theta_1$x = x
Cost function J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\frac{1}{2m}\sum_{i=1}^{m} (x^i - y^i)^2$
Step7: plot it
Step8: Plot $\theta1$ vs Cost again
Step9: Lets assume $\theta_0 = 0$ and $\theta_1=2$
Model h(x) = $\theta_0$ + $\theta_1$x = 2x
Cost function J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\frac{1}{2m}\sum_{i=1}^{m} (2x^i - y^i)^2$
Step10: Run it for a while
Step11: plot $\theta_1$ vs Cost
Step12: <br>
<br>
Lets do it with Gradient Descent now
Step13: Plot Convergence
Step14: Predict output using trained model | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
# read data in pandas frame
dataframe = pd.read_csv('datasets/example1.csv', encoding='utf-8')
# assign x and y
X = np.array(dataframe[['x']])
y = np.array(dataframe[['y']])
m = y.size # number of training examples
# check data by printing first few rows
dataframe.head()
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 4: Linear Regression and Gradient Descent Example
Overview
Machine Learning pipeline
Linear Regression with one variable
Model Representation
Cost Function
Gradient Descent
Linear Regression Example
Read data
Plot data
Find a line that best fit the data
Lets assume $\theta_0 = 0$ and $\theta_1=0$
Plot it
$\theta_1$ vs Cost
Lets do it with Gradient Descent now
Plot Convergence
Predict output using trained model
Plot Results
Resources
Credits
<br>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
Need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximizes the performance of model.
<br>
<br>
<br>
Cost Function
Let $\hat{y}$ = h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
Error in single sample (x,y) = $\hat{y}$ - y = h(x) - y
Cummulative error of all m samples = $\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Finally mean squared error or cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
<img style="float: left;" src="images/03_01.png", width=300> <img style="float: right;" src="images/03_02.png", width=300>
<br>
<br>
Gradient Descent
Gradient descent equation:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
Linear regression Cost function:
J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
<br>
Replacing J($\theta$) in gradient descent equation:
\begin{align} \text{repeat until convergence: } \lbrace & \newline \theta_0 := & \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x_{i}) - y_{i}) \newline \theta_1 := & \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}\left((h_\theta(x_{i}) - y_{i}) x_{i}\right) \newline \rbrace& \end{align}
<br>
<img style="float: left;" src="images/03_04.gif">
<br>
<br>
Linear Regression Example
| x | y |
| ------------- |:-------------:|
| 1 | 0.8 |
| 2 | 1.6 |
| 3 | 2.4 |
| 4 | 3.2 |
Read data
End of explanation
#visualize results
plt.scatter(X, y)
plt.title("Dataset")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: Plot data
End of explanation
#best fit line
tmpx = np.array([0, 1, 2, 3, 4])
y1 = 0.2*tmpx
y2 = 0.7*tmpx
y3 = 1.5*tmpx
plt.scatter(X, y)
plt.plot(tmpx,y1)
plt.plot(tmpx,y2)
plt.plot(tmpx,y3)
plt.title("Best fit line")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: Find a line that best fit the data
End of explanation
theta0 = 0
theta1 = 0
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
Explanation: Lets assume $\theta_0 = 0$ and $\theta_1=0$
Model h(x) = $\theta_0$ + $\theta_1$x = 0
Cost function J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\frac{1}{2m}\sum_{i=1}^{m} (0 - y^i)^2$
End of explanation
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 0")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: plot it
End of explanation
# save theta1 and cost in a vector
cost_log = []
theta1_log = []
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("Theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
Explanation: Plot $\theta1$ vs Cost
End of explanation
theta0 = 0
theta1 = 1
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
Explanation: Lets assume $\theta_0 = 0$ and $\theta_1=1$
Model h(x) = $\theta_0$ + $\theta_1$x = x
Cost function J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\frac{1}{2m}\sum_{i=1}^{m} (x^i - y^i)^2$
End of explanation
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 1")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: plot it
End of explanation
# save theta1 and cost in a vector
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("Theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
Explanation: Plot $\theta1$ vs Cost again
End of explanation
theta0 = 0
theta1 = 2
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 2")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# save theta1 and cost in a vector
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
Explanation: Lets assume $\theta_0 = 0$ and $\theta_1=2$
Model h(x) = $\theta_0$ + $\theta_1$x = 2x
Cost function J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ = $\frac{1}{2m}\sum_{i=1}^{m} (2x^i - y^i)^2$
End of explanation
theta0 = 0
theta1 = -3.1
cost_log = []
theta1_log = [];
inc = 0.1
for j in range(61):
theta1 = theta1 + inc;
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
cost_log.append(cost)
theta1_log.append(theta1)
Explanation: Run it for a while
End of explanation
plt.scatter(theta1_log, cost_log)
plt.title("theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
Explanation: plot $\theta_1$ vs Cost
End of explanation
theta0 = 0
theta1 = -3
alpha = 0.1
iterations = 100
cost_log = []
iter_log = []
inc = 0.1
for j in range(iterations):
cost = 0
grad = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
grad += ((hx - y[i,0]))*X[i,0]
cost = cost/(2*m)
grad = grad/m  # average gradient over the m samples, matching the update rule above
theta1 = theta1 - alpha*grad
cost_log.append(cost)
theta1
Explanation: <br>
<br>
Lets do it with Gradient Descent now
End of explanation
plt.plot(cost_log)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
Explanation: Plot Convergence
End of explanation
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for Theta1 from Gradient Descent")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: Predict output using trained model
End of explanation
<END_TASK> |
15,821 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described.
<END_TASK>
<USER_TASK:>
Description:
Count numbers with difference between number and its digit sum greater than specific value
Utility method to get sum of digits of K ; loop until K is not zero ; method returns count of numbers up to N satisfying the difference condition ; binary search while loop ; if difference between number and its digit sum is smaller than the given difference then the smallest such number lies on the right side ; if difference between number and its digit sum is greater than or equal to the given difference then the smallest such number lies on the left side ; return the difference between 'smallest number found' and 'N' as result ; Driver code to test above methods
| <ASSISTANT_TASK:>
Python Code:
def sumOfDigit(K):
    # sum of the decimal digits of K
    sod = 0
    while K:
        sod += K % 10
        K //= 10
    return sod

def totalNumbersWithSpecificDifference(N, diff):
    # binary search for the largest number whose (number - digit sum) is still below diff
    low, high = 1, N
    while low <= high:
        mid = (low + high) // 2
        if mid - sumOfDigit(mid) < diff:
            low = mid + 1
        else:
            high = mid - 1
    # every number in (high, N] satisfies the condition
    return N - high

N = 13
diff = 2
print(totalNumbersWithSpecificDifference(N, diff))
<END_TASK>
|
15,822 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Scientific Programming I
Reference Document
<OL>
<LI> <A HREF="http
Step1: Input / Output (scipy.io)
scipy has built-in functions to read and write to a wide variety of data formats, including Matlab, IDL, and netcdf. Plain text and binary capabilites are available from numpy. hdf5 has its own python module.
Step2: Integration (scipy.integrate)
Step3: Linear Algebra Operations (scipy.linalg)
scipy.linalg contains all the functions in numpy.linalg plus some other more advanced ones not contained in numpy.linalg.
Another advantage of using scipy.linalg over numpy.linalg is that it is always compiled with BLAS/LAPACK support, while for numpy this is optional. Therefore, the scipy version might be faster depending on how numpy was installed.
Therefore, unless you donโt want to add scipy as a dependency to your numpy program, use scipy.linalg instead of numpy.linalg
The most commonly used methods are to calculate the determinant of a matrix (linalg.det) and to invert a matrix (linalg.inv)
Step4: If a matrix has a determinant of zero LinAlg will raise an error. This is the definition of a Singular matrix (one for which an inverse does not exist).
Step5: Fast Fourier Transforms (scipy.fftpack)
Step6: The scipy.fftpack module allows to compute fast Fourier transforms. As an illustration, a (noisy) input signal may look like
Step7: The observer doesnโt know the signal frequency, only the sampling time step of the signal sig. The signal is supposed to come from a real function so the Fourier transform will be symmetric. The scipy.fftpack.fftfreq() function will generate the sampling frequencies and scipy.fftpack.fft() will compute the fast Fourier transform
Step8: Because the resulting power is symmetric, only the positive part of the spectrum needs to be used for finding the frequency
Step9: The signal frequency can be found by
Step10: Now the high-frequency noise will be removed from the Fourier transformed signal
Step11: The resulting filtered signal can be computed by the scipy.fftpack.ifft() function
Step12: The result can be viewed with
Step13: Optimization and Fitting (scipy.optimize)
The scipy.optimize module provides useful algorithms for function minimization (scalar or multi-dimensional), curve fitting and root finding.
Step14: Finding the minimum of a scalar function
Step15: This function has a global minimum around -1.3 and a local minimum around 3.8.
The general and efficient way to find a minimum for this function is to conduct a gradient descent starting from a given initial point. The BFGS algorithm is a good way of doing this
Step16: A possible issue with this approach is that, if the function has local minima the algorithm may find these local minima instead of the global minimum depending on the initial point
Step17: If we donโt know the neighborhood of the global minimum to choose the initial point, we need to resort to costlier global optimization. To find the global minimum, the simplest algorithm is the brute force algorithm, in which the function is evaluated on each point of a given grid
Step18: For larger grid sizes, scipy.optimize.brute() becomes quite slow. scipy.optimize.anneal() provides an alternative, using simulated annealing. More efficient algorithms for different classes of global optimization problems exist, but this is out of the scope of scipy. Some useful packages for global optimization are OpenOpt, IPOPT, PyGMO and PyEvolve.
To find the local minimum, letโs constraint the variable to the interval (0, 10) using scipy.optimize.fminbound()
Step19: Finding the roots of a scalar function
To find a root, i.e. a point where f(x) = 0, of the function f above we can use for example scipy.optimize.fsolve()
Step20: Note that only one root is found. Inspecting the plot of f reveals that there is a second root around -2.5. We find the exact value of it by adjusting our initial guess
Step21: Curve Fitting
Suppose we have data sampled from f with some noise
Step22: Now if we know the functional form of the function from which the samples were drawn (x^2 + sin(x) in this case) but not the amplitudes of the terms, we can find those by least squares curve fitting. First we have to define the function to fit
Step23: Then we can use scipy.optimize.curve_fit() to find a and b
Step24: Note
Step25: Define a function that can describe min and max temperatures. Hint
Step26: Fit this function to the data with scipy.optimize.curve_fit().
Step27: Plot the result. Is the fit reasonable? If not, why?
Is the time offset for min and max temperatures the same within the fit accuracy?
Step28: Interpolation (scipy.interpolate)
The scipy.interpolate is useful for fitting a function from experimental data and thus evaluating points where no measure exists. The module is based on the FITPACK Fortran subroutines from the netlib project.
By imagining experimental data close to a sine function
Step29: The scipy.interpolate.interp1d class can build a linear interpolation function
Step30: Then the scipy.interpolate.linear_interp instance needs to be evaluated at the time of interest
Step31: A cubic interpolation can also be selected by providing the kind optional keyword argument
Step32: Statistics and Random Numbers
The module scipy.stats contains statistical tools and probabilistic descriptions of random processes. Random number generators for various random process can be found in numpy.random.
Histogram and probability density function
Given observations of a random process, their histogram is an estimator of the random processโs PDF (probability density function)
Step33: If we know that the random process belongs to a given family of random processes, such as normal processes, we can do a maximum-likelihood fit of the observations to estimate the parameters of the underlying distribution. Here we fit a normal process to the observed data
Step34: Percentiles
The median is the value with half of the observations below, and half above
Step35: It is also called the percentile 50, because 50% of the observation are below it
Step36: Similarly, we can calculate the percentile 90
Step37: The percentile is an estimator of the CDF
Step38: The resulting output is composed of
Step39: scipy.signal.detrend()
Step40: scipy.signal.resample()
Step41: scipy.signal has many window functions
Step42: Image processing routines may be sorted according to the category of processing they perform.
Geometrical transformations on images
Changing orientation, resolution, ...
Step43: Measurements on images
Step44: Now we look for various information about the objects in the image | <ASSISTANT_TASK:>
Python Code:
%pylab inline
# Increase default figure resolution
matplotlib.rcParams['savefig.dpi'] = 2 * matplotlib.rcParams['savefig.dpi']
Explanation: Scientific Programming I
Reference Document
<OL>
<LI> <A HREF="http://scipy-lectures.github.io">Python Scientific Lecture Notes</A>
<LI> <A HREF="http://scipy-central.org">SciPy Central (code snippets, modules, etc.) </A>
<LI> <A HREF="http://docs.scipy.org/doc/scipy/reference">SciPy Reference Guide</A>
</OL>
What is a SciPy?
The scipy package contains various toolboxes dedicated to common issues in scientific computing. Its different submodules correspond to different applications, such as interpolation, integration, optimization, image processing, statistics, special functions, etc.
scipy can be compared to other standard scientific-computing libraries, such as the GSL (GNU Scientific Library for C and C++), or Matlab's toolboxes. scipy is the core package for scientific routines in Python; it is meant to operate efficiently on numpy arrays, so that numpy and scipy work hand in hand.
Before implementing a routine, it is worth checking if the desired data processing is not already implemented in Scipy. As non-professional programmers, scientists often tend to re-invent the wheel, which leads to buggy, non-optimal, difficult-to-share and unmaintainable code. By contrast, Scipy's routines are optimized and tested, and should therefore be used when possible.
Available packages
The following packages are available in SciPy:
<table>
<tr> <td> <b> Subpackage </b></td> <td><b> Description </b></td></tr>
<tr> <td>cluster</td> <td> Clustering algorithms</td> </tr>
<tr> <td>constants</td> <td> Physical and mathematical constants</td> </tr>
<tr> <td>fftpack</td> <td> Fast Fourier Transform routines</td> </tr>
<tr> <td>integrate</td> <td> Integration and ordinary differential equation solvers</td> </tr>
<tr> <td>interpolate</td> <td> Interpolation and smoothing splines</td> </tr>
<tr> <td>io</td> <td> Input and Output</td> </tr>
<tr> <td>linalg</td> <td> Linear algebra</td> </tr>
<tr> <td>ndimage</td> <td> N-dimensional image processing</td> </tr>
<tr> <td>odr</td> <td> Orthogonal distance regression</td> </tr>
<tr> <td>optimize</td> <td> Optimization and root-finding routines</td> </tr>
<tr> <td>signal</td> <td> Signal processing</td> </tr>
<tr> <td>sparse</td> <td> Sparse matrices and associated routines</td> </tr>
<tr> <td>spatial</td> <td> Spatial data structures and algorithms</td> </tr>
<tr> <td>special</td> <td> Special functions</td> </tr>
<tr> <td>stats</td> <td> Statistical distributions and functions</td> </tr>
<tr> <td>weave</td> <td> C/C++ integration</td> </tr>
</table>
Getting Started
End of explanation
from scipy import io
a = np.ones((3, 3))
io.savemat('file.mat', {'a': a}); # savemat expects a dictionary
data = io.loadmat('file.mat', struct_as_record=True)
data['a']
Explanation: Input / Output (scipy.io)
scipy has built-in functions to read and write a wide variety of data formats, including Matlab, IDL, and netcdf. Plain text and binary capabilities are available from numpy. hdf5 has its own python module.
End of explanation
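# Extra example (not from the original lecture): the plain-text and binary I/O
# mentioned above lives in numpy itself.
np.savetxt('array.txt', a)   # human-readable text file
np.loadtxt('array.txt')
np.save('array.npy', a)      # numpy's compact binary format
np.load('array.npy')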
from scipy import integrate
%%latex
We want to compute the integral:
$$ I = \int_{x=\pi}^{2\pi}\int_{y=0}^{\pi}{(y\sin x + x\cos y)dydx} $$
def integrand(y, x):
'y must be the first argument, and x the second.'
return y * np.sin(x) + x * np.cos(y)
ans, err = integrate.dblquad(integrand,
np.pi, 2*np.pi, # x limits
lambda x: 0, # y limits
lambda x: np.pi)
print ans
Explanation: Integration (scipy.integrate)
End of explanation
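# Extra example (not from the original lecture): scipy.integrate also offers
# 1-D quadrature and ordinary differential equation solvers.
ans1d, err1d = integrate.quad(np.sin, 0, np.pi)    # integral of sin from 0 to pi, ~2.0
ans1d
# a simple ODE, dy/dt = -y with y(0) = 1
t_ode = np.linspace(0, 4, 20)
y_ode = integrate.odeint(lambda y, t: -y, 1.0, t_ode)
y_ode[-1]                                          # ~ exp(-4)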
from scipy import linalg
arr1 = np.array([[1, 2], [3, 4]]); arr2 = np.array([[3, 2], [6, 4]]); arr3 = np.ones((3, 3))
print "arr 1 = \n",arr1
print "arr 2 = \n",arr2
print "arr 3 = \n",arr3
linalg.det(arr1), linalg.det(arr2)
linalg.det(arr3)
linalg.inv(arr1)
# np.allclose returns True if two arrays are element-wise equal within a tolerance
np.allclose(np.dot(arr1, linalg.inv(arr1)), np.eye(2))
print arr3
linalg.inv(arr3)
Explanation: Linear Algebra Operations (scipy.linalg)
scipy.linalg contains all the functions in numpy.linalg plus some other more advanced ones not contained in numpy.linalg.
Another advantage of using scipy.linalg over numpy.linalg is that it is always compiled with BLAS/LAPACK support, while for numpy this is optional. Therefore, the scipy version might be faster depending on how numpy was installed.
So unless you don't want to add scipy as a dependency to your numpy program, use scipy.linalg instead of numpy.linalg.
The most commonly used methods are to calculate the determinant of a matrix (linalg.det) and to invert a matrix (linalg.inv)
End of explanation
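# Extra example (not from the original lecture): solving the linear system
# arr1 * x = rhs with linalg.solve is cheaper and more accurate than forming inv().
rhs = np.array([1., 2.])
x_sol = linalg.solve(arr1, rhs)
x_sol
np.allclose(np.dot(arr1, x_sol), rhs)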
print arr1
linalg.inv(arr1)
Explanation: If a matrix has a determinant of zero LinAlg will raise an error. This is the definition of a Singular matrix (one for which an inverse does not exist).
End of explanation
from scipy import fftpack
Explanation: Fast Fourier Transforms (scipy.fftpack)
End of explanation
time_step = 0.02
period = 5.
time_vec = np.arange(0, 20, time_step)
sig = np.sin(2 * np.pi / period * time_vec) + 0.5 * np.random.randn(time_vec.size)
plt.plot(time_vec, sig)
Explanation: The scipy.fftpack module allows to compute fast Fourier transforms. As an illustration, a (noisy) input signal may look like:
End of explanation
sample_freq = fftpack.fftfreq(sig.size, d=time_step)
sig_fft = fftpack.fft(sig)
Explanation: The observer doesn't know the signal frequency, only the sampling time step of the signal sig. The signal is supposed to come from a real function so the Fourier transform will be symmetric. The scipy.fftpack.fftfreq() function will generate the sampling frequencies and scipy.fftpack.fft() will compute the fast Fourier transform:
End of explanation
pidxs = np.where(sample_freq > 0)
freqs = sample_freq[pidxs]
power = np.abs(sig_fft)[pidxs]
plt.figure()
plt.plot(freqs, power)
plt.xlabel('Frequency [Hz]')
plt.ylabel('power')
axes = plt.axes([0.3, 0.3, 0.5, 0.5])
plt.title('Peak frequency')
plt.plot(freqs[:8], power[:8])
plt.setp(axes, yticks=[])
Explanation: Because the resulting power is symmetric, only the positive part of the spectrum needs to be used for finding the frequency:
End of explanation
freq = freqs[power.argmax()]
freq, np.allclose(freq, 1./period)
Explanation: The signal frequency can be found by:
End of explanation
sig_fft[np.abs(sample_freq) > freq] = 0
Explanation: Now the high-frequency noise will be removed from the Fourier transformed signal:
End of explanation
main_sig = fftpack.ifft(sig_fft)
Explanation: The resulting filtered signal can be computed by the scipy.fftpack.ifft() function:
End of explanation
plt.figure()
plt.plot(time_vec, sig)
plt.plot(time_vec, main_sig, linewidth=3)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
Explanation: The result can be viewed with:
End of explanation
from scipy import optimize
Explanation: Optimization and Fitting (scipy.optimize)
The scipy.optimize module provides useful algorithms for function minimization (scalar or multi-dimensional), curve fitting and root finding.
End of explanation
f = lambda x: x**2 + 10*np.sin(x)
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
Explanation: Finding the minimum of a scalar function
End of explanation
optimize.fmin_bfgs(f, 0)
Explanation: This function has a global minimum around -1.3 and a local minimum around 3.8.
The general and efficient way to find a minimum for this function is to conduct a gradient descent starting from a given initial point. The BFGS algorithm is a good way of doing this:
End of explanation
optimize.fmin_bfgs(f, 3)
Explanation: A possible issue with this approach is that, if the function has local minima the algorithm may find these local minima instead of the global minimum depending on the initial point:
End of explanation
grid = (-10, 10, 0.1)
xmin_global = optimize.brute(f, (grid,))
xmin_global
Explanation: If we don't know the neighborhood of the global minimum to choose the initial point, we need to resort to costlier global optimization. To find the global minimum, the simplest algorithm is the brute force algorithm, in which the function is evaluated on each point of a given grid:
End of explanation
xmin_local = optimize.fminbound(f, 0, 10)
xmin_local
Explanation: For larger grid sizes, scipy.optimize.brute() becomes quite slow. scipy.optimize.anneal() provides an alternative, using simulated annealing. More efficient algorithms for different classes of global optimization problems exist, but this is out of the scope of scipy. Some useful packages for global optimization are OpenOpt, IPOPT, PyGMO and PyEvolve.
To find the local minimum, let's constrain the variable to the interval (0, 10) using scipy.optimize.fminbound():
End of explanation
root = optimize.fsolve(f, 1) # our initial guess is 1
root
Explanation: Finding the roots of a scalar function
To find a root, i.e. a point where f(x) = 0, of the function f above we can use for example scipy.optimize.fsolve():
End of explanation
root2 = optimize.fsolve(f, -2.5)
root2
Explanation: Note that only one root is found. Inspecting the plot of f reveals that there is a second root around -2.5. We find the exact value of it by adjusting our initial guess:
End of explanation
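# Extra example (not from the original lecture): when a bracketing interval where f
# changes sign is known, scipy.optimize.brentq() finds the root robustly.
optimize.brentq(f, -10, -1)   # f(-10) > 0 and f(-1) < 0, so the root near -2.5 is returned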
xdata = np.linspace(-10, 10, num=20)
ydata = f(xdata) + np.random.randn(xdata.size)
plt.plot(xdata,ydata)
Explanation: Curve Fitting
Suppose we have data sampled from f with some noise:
End of explanation
f2 = lambda x, a, b: a*x**2 + b*np.sin(x)
Explanation: Now if we know the functional form of the function from which the samples were drawn (x^2 + sin(x) in this case) but not the amplitudes of the terms, we can find those by least squares curve fitting. First we have to define the function to fit:
End of explanation
guess = [2, 2]
params, params_covariance = optimize.curve_fit(f2, xdata, ydata, guess)
params
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, f(x), 'b-', label="f(x)")
ax.plot(x, f2(x, *params), 'r--', label="Curve fit result")
xmins = np.array([xmin_global[0], xmin_local])
ax.plot(xmins, f(xmins), 'go', label="Minima")
roots = np.array([root, root2])
ax.plot(roots, f(roots), 'kv', label="Roots")
ax.set_xlabel('x')
ax.set_ylabel('f(x)')
Explanation: Then we can use scipy.optimize.curve_fit() to find a and b:
End of explanation
max = [17, 19, 21, 28, 33, 38, 37, 37, 31, 23, 19, 18]
min = [-62, -59, -56, -46, -32, -18, -9, -13, -25, -46, -52, -58]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(range(12),max,'r-', label='Max')
plt.plot(range(12),min,'b-', label='Min')
ax.legend()
ax.set_xlabel('Month')
ax.set_ylabel('Temperature ($^o C$)')
Explanation: Note: In Scipy >= 0.11 unified interfaces to all minimization and root finding algorithms are available: scipy.optimize.minimize(), scipy.optimize.minimize_scalar() and scipy.optimize.root(). They allow comparing various algorithms easily through the method keyword.
Exercise: Curve fitting of temperature data
The temperature extremes in Alaska for each month, starting in January, are given by (in degrees Celsius):
| Temp | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
| ------------- |:-------------:| -----:|
| max: | 17 | 19 | 21 | 28 | 33 | 38 | 37 | 37 | 31 | 23 | 19 | 18 |
| min: | -62 | -59 | -56 | -46 | -32 | -18 | -9 | -13 | -25 | -46 | -52 | -58 |
Plot these temperature extremes.
End of explanation
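# Extra example (not from the original lecture), illustrating the unified
# interfaces mentioned in the note above (available in scipy >= 0.11).
res = optimize.minimize(f, x0=0, method='BFGS')
res.x                                                          # local minimum near -1.3
res_scalar = optimize.minimize_scalar(f, bounds=(0, 10), method='bounded')
res_scalar.x                                                   # local minimum near 3.8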
# t is in months, so dividing by 12 gives the one-year period asked for in the hint;
# np.pi is used for consistency with the rest of the notebook
temp = lambda t, a, b, c: b*np.sin(2.*np.pi*t/12.+a)+c
Explanation: Define a function that can describe min and max temperatures. Hint: this function has to have a period of 1 year. Hint: include a time offset.
End of explanation
guess = [6, 1, 1]
print "min temp"
params, params_covariance = optimize.curve_fit(temp, range(12), min, guess)
print params
print "max temp"
params2, params_covariance2 = optimize.curve_fit(temp, range(12), max, guess)
print params2
Explanation: Fit this function to the data with scipy.optimize.curve_fit().
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
x=np.linspace(-1,12,1000)
ax.plot(range(12), min, 'b.', label="min")
ax.plot(range(12), max, 'r.', label="max")
ax.plot(x, temp(x, *params), 'y--', label="min fit")
ax.plot(x, temp(x, *params2), 'g--', label="max fit")
ax.set_xlabel('Month')
ax.set_ylabel('Temperature ($^o C$)')
ax.set_xlim([-1., 11.])
ax.set_ylim([-80, 60])
Explanation: Plot the result. Is the fit reasonable? If not, why?
Is the time offset for min and max temperatures the same within the fit accuracy?
End of explanation
measured_time = np.linspace(0, 1, 10)
noise = (np.random.random(10)*2 - 1) * 1e-1
measures = np.sin(2 * np.pi * measured_time) + noise
plt.plot(measured_time,measures)
Explanation: Interpolation (scipy.interpolate)
The scipy.interpolate module is useful for fitting a function to experimental data and thus evaluating points where no measure exists. The module is based on the FITPACK Fortran subroutines from the netlib project.
By imagining experimental data close to a sine function:
End of explanation
from scipy.interpolate import interp1d
linear_interp = interp1d(measured_time, measures)
Explanation: The scipy.interpolate.interp1d class can build a linear interpolation function:
End of explanation
computed_time = np.linspace(0, 1, 50)
linear_results = linear_interp(computed_time)
Explanation: Then the linear_interp instance returned by interp1d needs to be evaluated at the times of interest:
End of explanation
cubic_interp = interp1d(measured_time, measures, kind='cubic')
cubic_results = cubic_interp(computed_time)
plt.plot(measured_time, measures, 'o', ms=6, label='measures')
plt.plot(computed_time, linear_results, label='linear interp')
plt.plot(computed_time, cubic_results, label='cubic interp')
plt.legend()
Explanation: A cubic interpolation can also be selected by providing the kind optional keyword argument:
End of explanation
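# Extra example (not from the original lecture): the FITPACK smoothing splines are
# also available, e.g. scipy.interpolate.UnivariateSpline.
from scipy.interpolate import UnivariateSpline
spline = UnivariateSpline(measured_time, measures, s=0.05)   # s controls the smoothing
plt.plot(computed_time, spline(computed_time), label='smoothing spline')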
a = np.random.normal(size=1000)
bins = np.arange(-4, 5)
bins
histogram = np.histogram(a, bins=bins, normed=True)[0]  # on newer numpy use density=True instead of normed=True
bins = 0.5*(bins[1:] + bins[:-1])
bins
from scipy import stats
b = stats.norm.pdf(bins) # norm is a distribution
plt.plot(bins, histogram)
plt.plot(bins, b)
Explanation: Statistics and Random Numbers
The module scipy.stats contains statistical tools and probabilistic descriptions of random processes. Random number generators for various random processes can be found in numpy.random.
Histogram and probability density function
Given observations of a random process, their histogram is an estimator of the random process's PDF (probability density function):
End of explanation
loc, std = stats.norm.fit(a)
loc, std
Explanation: If we know that the random process belongs to a given family of random processes, such as normal processes, we can do a maximum-likelihood fit of the observations to estimate the parameters of the underlying distribution. Here we fit a normal process to the observed data:
End of explanation
np.median(a)
Explanation: Percentiles
The median is the value with half of the observations below, and half above:
End of explanation
stats.scoreatpercentile(a, 50)
Explanation: It is also called the percentile 50, because 50% of the observations are below it:
End of explanation
stats.scoreatpercentile(a, 90)
Explanation: Similarly, we can calculate the percentile 90:
End of explanation
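# Extra note (not from the original lecture): numpy provides the same estimator.
np.percentile(a, 50), np.percentile(a, 90)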
a = np.random.normal(0, 1, size=100)
b = np.random.normal(1, 1, size=10)
stats.ttest_ind(a, b)
Explanation: The percentile is an estimator of the CDF: cumulative distribution function.
Statistical tests
A statistical test is a decision indicator. For instance, if we have two sets of observations, that we assume are generated from Gaussian processes, we can use a T-test to decide whether the two sets of observations are significantly different:
End of explanation
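# Extra example (not from the original lecture): for two samples drawn from the
# same distribution the p value is typically large, i.e. no significant difference.
c = np.random.normal(0, 1, size=100)
d = np.random.normal(0, 1, size=100)
stats.ttest_ind(c, d)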
from scipy import signal
Explanation: The resulting output is composed of:
1) The T statistic value: its sign indicates which of the two sample means is larger, and its magnitude is related to the significance of this difference.
2) The p value: the probability that both processes are identical. If it is close to 1, the two processes are almost certainly identical. The closer it is to zero, the more likely it is that the processes have different means.
Signal Processing (scipy.signal)
End of explanation
t = np.linspace(0, 5, 100)
x = t + np.random.normal(size=100)
plt.plot(t, x, linewidth=3)
plt.plot(t, signal.detrend(x), linewidth=3)
Explanation: scipy.signal.detrend(): remove linear trend from signal:
End of explanation
t = np.linspace(0, 5, 100)
x = np.sin(t)
plt.plot(t, x, linewidth=3)
plt.plot(t[::2], signal.resample(x, 50), 'ko')
Explanation: scipy.signal.resample(): resample a signal to n points using FFT.
End of explanation
from scipy import ndimage
Explanation: scipy.signal has many window functions: scipy.signal.hamming(), scipy.signal.bartlett(), scipy.signal.blackman()...
scipy.signal has filtering (median filter scipy.signal.medfilt(), Wiener scipy.signal.wiener()), but we will discuss this in the image section.
Image Processing (scipy.ndimage)
End of explanation
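# Extra example (not from the original lecture): a window function and the median
# filter mentioned above.
win = signal.get_window('hamming', 51)
plt.plot(win)
noisy = np.sin(np.linspace(0, 5, 100)) + 0.3*np.random.randn(100)
plt.plot(signal.medfilt(noisy, kernel_size=5))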
from scipy import misc
ascent = misc.ascent()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(ascent, cmap=cm.gray)
ax.axis('off')
shifted_ascent = ndimage.shift(ascent, (50, 50))
shifted_ascent2 = ndimage.shift(ascent, (50, 50), mode='nearest')
rotated_ascent = ndimage.rotate(ascent, 30)
cropped_ascent = ascent[50:-50, 50:-50]
zoomed_ascent = ndimage.zoom(ascent, 2)
fig = plt.figure()
ax = fig.add_subplot(151)
ax.imshow(shifted_ascent, cmap=cm.gray)
ax.axis('off')
ax2 = fig.add_subplot(152)
ax2.imshow(shifted_ascent2, cmap=cm.gray)
ax2.axis('off')
ax3 = fig.add_subplot(153)
ax3.imshow(rotated_ascent, cmap=cm.gray)
ax3.axis('off')
ax4 = fig.add_subplot(154)
ax4.imshow(cropped_ascent, cmap=cm.gray)
ax4.axis('off')
ax5 = fig.add_subplot(155)
ax5.imshow(zoomed_ascent, cmap=cm.gray)
ax5.axis('off')
noisy_ascent = np.copy(ascent).astype(np.float)
noisy_ascent += ascent.std()*0.5*np.random.standard_normal(ascent.shape)
blurred_ascent = ndimage.gaussian_filter(noisy_ascent, sigma=3)
median_ascent = ndimage.median_filter(blurred_ascent, size=5)
wiener_ascent = signal.wiener(blurred_ascent, (5,5))
fig = plt.figure()
ax = fig.add_subplot(141)
ax.imshow(noisy_ascent, cmap=cm.gray)
ax.axis('off')
ax.set_title("noisy ascent")
ax2 = fig.add_subplot(142)
ax2.imshow(blurred_ascent, cmap=cm.gray)
ax2.axis('off')
ax2.set_title("Gaussian filter")
ax3 = fig.add_subplot(143)
ax3.imshow(median_ascent, cmap=cm.gray)
ax3.axis('off')
ax3.set_title("median filter")
ax4 = fig.add_subplot(144)
ax4.imshow(wiener_ascent, cmap=cm.gray)
ax4.axis('off')
ax4.set_title("Weiner filter")
Explanation: Image processing routines may be sorted according to the category of processing they perform.
Geometrical transformations on images
Changing orientation, resolution, ...
End of explanation
x, y = np.indices((100, 100))
sig = np.sin(2*np.pi*x/50.)*np.sin(2*np.pi*y/50.)*(1+x*y/50.**2)**2
mask = sig > 1
Explanation: Measurements on images
End of explanation
labels, nb = ndimage.label(mask)
nb
areas = ndimage.sum(mask, labels, xrange(1, labels.max()+1))
areas
maxima = ndimage.maximum(sig, labels, xrange(1, labels.max()+1))
maxima
ndimage.find_objects(labels==4)
sl = ndimage.find_objects(labels==4)
fig = plt.figure()
ax = fig.add_subplot(131)
ax.imshow(sig)
ax.axis('off')
ax.set_title("sig")
ax2 = fig.add_subplot(132)
ax2.imshow(mask)
ax2.axis('off')
ax2.set_title("mask")
ax3 = fig.add_subplot(133)
ax3.imshow(labels)
ax3.axis('off')
ax3.set_title("labels")
Explanation: Now we look for various information about the objects in the image:
End of explanation
<END_TASK> |
15,823 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
1 - Introductory Example
Step1: Load bifacial_radiance
Step2: <a id='step2'></a>
2. Create a Radiance Object
Step3: This will create all the folder structure of the bifacial_radiance Scene in the designated testfolder in your computer, and it should look like this
Step4: To see more options of ground materials available (located on ground.rad), run this function without any input.
4. Download and Load Weather Files
There are various options provided in bifacial_radiance to load weather files. getEPW is useful because you just set the latitude and longitude of the location and it downloads the meteorological data for any location.
Step5: The downloaded EPW will be in the EPWs folder.
To load the data, use readWeatherFile. This reads EPWs, TMY meteorological data, or even your own data as long as it follows the TMY data format (with any time resolution).
Step6: <a id='step5'></a>
5. Generate the Sky.
Sky definitions can either be for a single time point with gendaylit function,
or using gencumulativesky to generate a cumulativesky for the entire year.
Step7: The method gencumSky calculates the hourly radiance of the sky hemisphere by dividing it into 145 patches. Then it adds those hourly values to generate one single <b> cumulative sky</b>. Here is a visualization of this patched hemisphere for Richmond, VA, US. Can you deduce from the radiance values of each patch which way is North?
<img src="../images_wiki/Journal1Pics/cumulativesky.png">
<img src="../images_wiki/Journal1Pics/cumulativesky.png">
Answer
Step8: In case you want to use a pre-defined module or a module you've created previously, they are stored in a JSON format in data/module.json, and the options available can be called with printModules
Step9: <a id='step7'></a>
7. Make the Scene
The sceneDictionary specifies the information of the scene, such as number of rows, number of modules per row, azimuth, tilt, clearance_height (distance between the ground and lowest point of the module), pitch or gcr, and any other parameter.
<img src="../images_wiki/Webinar/scenegoal.png">
Reminder
Step10: To make the scene we have to create a Scene Object through the method makeScene. This method will create a .rad file in the objects folder, with the parameters specified in sceneDict and the module created above.
Step11: <a id='step8'></a>
8. COMBINE the Ground, Sky, and the Scene Objects
Radiance requires an "Oct" file that combines the ground, sky and the scene object into it.
The method makeOct does this for us.
Step12: This is how the octfile looks like (** broke the first line so it would fit in the view, it's usually super long)
<img src="../images_wiki/Webinar/octfileexample.png">
<a id='step9'></a>
9. ANALYZE and get Results
Once the octfile tying the scene, ground and sky has been created, we create an Analysis Object. We first have to create an Analysis object, and then we have to specify where the sensors will be located with moduleAnalysis.
<img src="../images_wiki/Webinar/analysisgoal.png">
Let's query the center module (default)
First let's create the Analysis Object
Step13: Then let's specify the sensor location. If no parameters are passed to moduleAnalysis, it will scan the center module of the center row
Step14: The frontscan and backscan include a linescan along a chord of the module, both on the front and back.
<img src="../images_wiki/Journal1Pics/frontscan_backscan.png">
Analysis saves the measured irradiances in the front and in the back on the results folder.
Step15: The results are also automatically saved in the results folder. Some of our input/output functions can be used to read the results and work with them, for example
Step16: As can be seen in the results for the Wm2Front and WM2Back, the irradiance values are quite high. This is because a cumulative sky simulation was performed on <b> step 5 </b>, so this is the total irradiance over all the hours of the year that the module at each sampling point will receive. Dividing the back irradiance average by the front irradiance average will give us the bifacial gain for the year
Step17: ANALYZE and get Results for another module
You can select what module you want to sample.
<img src="../images_wiki/Webinar/analysisgoal2.png">
Step18: <a id='step10'></a>
10. View / Render the Scene
If you used gencumsky or gendaylit, you can view the <b> Scene </b> by navigating on a command line to the folder and typing | <ASSISTANT_TASK:>
Python Code:
import os
testfolder = os.path.abspath(r'..\..\bifacial_radiance\TEMP\Demo1')
print ("Your simulation will be stored in %s" % testfolder)
Explanation: 1 - Introductory Example: Fixed-Tilt simple setup
This jupyter journal will walk us through the creation of the most basic fixed-tilt simulation possible with bifacial_radiance.
We will simulate a 1-up landscape system over a white rooftop.
Steps include:
<ol>
<li> <a href='#step1'> Create a folder for your simulation, and Load bifacial_radiance </a></li>
<li> <a href='#step2'> Create a Radiance Object </a></li>
<li> <a href='#step3'> Set the Albedo </a></li>
<li> <a href='#step4'> Download Weather Files </a></li>
<li> <a href='#step5'> Generate the Sky </a></li>
<li> <a href='#step6'> Define a Module type </a></li>
<li> <a href='#step7'> Create the scene </a></li>
<li> <a href='#step8'> Combine Ground, Sky and Scene Objects </a></li>
<li> <a href='#step9'> Analyze and get results </a></li>
<li> <a href='#step10'> Visualize scene options </a></li>
</ol>
<a id='step1'></a>
1. Create a folder for your simulation, and load bifacial_radiance
End of explanation
from bifacial_radiance import *
import numpy as np
Explanation: Load bifacial_radiance
End of explanation
demo = RadianceObj('bifacial_example',testfolder)
Explanation: <a id='step2'></a>
2. Create a Radiance Object
End of explanation
albedo = 0.62
demo.setGround(albedo)
Explanation: This will create all the folder structure of the bifacial_radiance Scene in the designated testfolder in your computer, and it should look like this:
<img src="..\images_wiki\Journal1Pics\folderStructure.png">
<a id='step3'></a>
3. Set the Albedo
If a number between 0 and 1 is passed, it assumes it's an albedo value. For this example, we want a high-reflectivity rooftop albedo surface, so we will set the albedo to 0.62
End of explanation
epwfile = demo.getEPW(lat = 37.5, lon = -77.6)
Explanation: To see more options of ground materials available (located on ground.rad), run this function without any input.
4. Download and Load Weather Files
There are various options provided in bifacial_radiance to load weather files. getEPW is useful because you just set the latitude and longitude of the location and it downloads the meteorological data for any location.
End of explanation
# Read in the weather data pulled in above.
metdata = demo.readWeatherFile(epwfile)
Explanation: The downloaded EPW will be in the EPWs folder.
To load the data, use readWeatherFile. This reads EPWs, TMY meteorological data, or even your own data as long as it follows the TMY data format (with any time resolution).
End of explanation
fullYear = True
if fullYear:
demo.genCumSky(demo.epwfile) # entire year.
else:
demo.gendaylit(metdata,4020) # Noon, June 17th (timepoint # 4020)
Explanation: <a id='step5'></a>
5. Generate the Sky.
Sky definitions can either be for a single time point with gendaylit function,
or using gencumulativesky to generate a cumulativesky for the entire year.
End of explanation
module_type = 'Prism Solar Bi60 landscape'
demo.makeModule(name=module_type,x=1.695, y=0.984)
Explanation: The method gencumSky calculates the hourly radiance of the sky hemisphere by dividing it into 145 patches. Then it adds those hourly values to generate one single <b> cumulative sky</b>. Here is a visualization of this patched hemisphere for Richmond, VA, US. Can you deduce from the radiance values of each patch which way is North?
<img src="../images_wiki/Journal1Pics/cumulativesky.png">
<img src="../images_wiki/Journal1Pics/cumulativesky.png">
Answer: Since Richmond is in the Northern Hemisphere, the modules face the south, which is where most of the radiation from the sun is coming. The north in this picture is the darker blue areas.
<a id='step6'></a>
6. DEFINE a Module type
You can create a custom PV module type. In this case we are defining a module named "Prism Solar Bi60", in landscape. The x value defines the size of the module along the row, so for landscape modules x > y. This module measures y = 0.984 x = 1.695.
<div class="alert alert-success">
You can specify a lot more variables in makeModule like cell-level modules, multiple modules along the Collector Width (CW), torque tubes, spacing between modules, etc.
Refer to the <a href="https://bifacial-radiance.readthedocs.io/en/latest/generated/bifacial_radiance.RadianceObj.makeModule.html#bifacial_radiance.RadianceObj.makeModule"> Module Documentation </a> and read the following jupyter journals to explore all your options.
</div>
End of explanation
availableModules = demo.printModules()
Explanation: In case you want to use a pre-defined module or a module you've created previously, they are stored in a JSON format in data/module.json, and the options available can be called with printModules:
End of explanation
sceneDict = {'tilt':10,'pitch':3,'clearance_height':0.2,'azimuth':180, 'nMods': 3, 'nRows': 3}
Explanation: <a id='step7'></a>
7. Make the Scene
The sceneDictionary specifies the information of the scene, such as number of rows, number of modules per row, azimuth, tilt, clearance_height (distance between the ground and lowest point of the module), pitch or gcr, and any other parameter.
<img src="../images_wiki/Webinar/scenegoal.png">
Reminder: Azimuth gets measured from N = 0, so for South facing modules azimuth should equal 180 degrees
End of explanation
scene = demo.makeScene(module_type,sceneDict)
Explanation: To make the scene we have to create a Scene Object through the method makeScene. This method will create a .rad file in the objects folder, with the parameters specified in sceneDict and the module created above.
End of explanation
octfile = demo.makeOct(demo.getfilelist())
demo.getfilelist()
Explanation: <a id='step8'></a>
8. COMBINE the Ground, Sky, and the Scene Objects
Radiance requires an "Oct" file that combines the ground, sky and the scene object into it.
The method makeOct does this for us.
End of explanation
analysis = AnalysisObj(octfile, demo.basename)
Explanation: This is how the octfile looks like (** broke the first line so it would fit in the view, it's usually super long)
<img src="../images_wiki/Webinar/octfileexample.png">
<a id='step9'></a>
9. ANALYZE and get Results
Once the octfile tying the scene, ground and sky has been created, we create an Analysis Object. We first have to create an Analysis object, and then we have to specify where the sensors will be located with moduleAnalysis.
<img src="../images_wiki/Webinar/analysisgoal.png">
Let's query the center module (default)
First let's create the Analysis Object
End of explanation
frontscan, backscan = analysis.moduleAnalysis(scene)
Explanation: Then let's specify the sensor location. If no parameters are passed to moduleAnalysis, it will scan the center module of the center row:
End of explanation
results = analysis.analysis(octfile, demo.basename, frontscan, backscan)
Explanation: The frontscan and backscan include a linescan along a chord of the module, both on the front and back.
<img src="../images_wiki/Journal1Pics/frontscan_backscan.png">
Analysis saves the measured irradiances in the front and in the back on the results folder.
End of explanation
load.read1Result('results\irr_bifacial_example.csv')
Explanation: The results are also automatically saved in the results folder. Some of our input/output functions can be used to read the results and work with them, for example:
End of explanation
bifacialityfactor = 0.9
print('Annual bifacial ratio: %0.2f ' %( np.mean(analysis.Wm2Back) * bifacialityfactor / np.mean(analysis.Wm2Front)) )
Explanation: As can be seen in the results for the Wm2Front and WM2Back, the irradiance values are quite high. This is because a cumulative sky simulation was performed on <b> step 5 </b>, so this is the total irradiance over all the hours of the year that the module at each sampling point will receive. Dividing the back irradiance average by the front irradiance average will give us the bifacial gain for the year:
<img src="../images_wiki/Journal1Pics/BGG_Formula.png">
Assuming that our module from Prism Solar has a bifaciality factor (rear to front performance) of 90%, our <u> bifacial gain </u> is of:
End of explanation
modWanted=1
rowWanted=1
sensorsy=4
resultsfilename = demo.basename+"_Mod1Row1"
frontscan, backscan = analysis.moduleAnalysis(scene, modWanted = modWanted, rowWanted=rowWanted, sensorsy=sensorsy)
results = analysis.analysis(octfile, resultsfilename, frontscan, backscan)
load.read1Result('results\irr_bifacial_example_Mod1Row1.csv')
Explanation: ANALYZE and get Results for another module
You can select what module you want to sample.
<img src="../images_wiki/Webinar/analysisgoal2.png">
End of explanation
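# Extra sketch (not part of the original journal): the same two calls shown above can
# be looped over every module and row defined in sceneDict to sample the whole array.
for mod in range(1, sceneDict['nMods'] + 1):
    for row in range(1, sceneDict['nRows'] + 1):
        name = demo.basename + "_Mod%dRow%d" % (mod, row)
        fscan, bscan = analysis.moduleAnalysis(scene, modWanted=mod, rowWanted=row, sensorsy=4)
        analysis.analysis(octfile, name, fscan, bscan)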
# Make a color render and falsecolor image of the scene.
analysis.makeImage('side.vp')
analysis.makeFalseColor('side.vp')
Explanation: <a id='step10'></a>
10. View / Render the Scene
If you used gencumsky or gendaylit, you can view the <b> Scene </b> by navigating on a command line to the folder and typing:
objview materials\ground.rad objects\Prism_Solar_Bi60_landscape_0.2_3_10_20x7_origin0,0.rad
This <b> objview </b> has 3 different light sources of its own, so the shading is not representative.
ONLY If you used <b> gendaylit </b>, you can view the scene correctly illuminated with the sky you generated after generating the oct file, with
rvu -vf views\front.vp -e .01 bifacial_example.oct
The <b> rvu </b> manual page can be found here: http://radsite.lbl.gov/radiance/rvu.1.html
Or you can also use the code below from bifacial_radiance to generate an .HDR rendered image of the scene. You can choose from front view or side view in the views folder:
End of explanation
<END_TASK> |
15,824 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
Compute the Fermi distribution at energy, mu and kT.
F = 1/(np.exp((energy-mu)/kT)+1)
return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
$\large F(\epsilon) = {\Large \frac{1}{e^{(\epsilon-\mu)/kT}+1}}$
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
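# Extra check (not part of the original solution): because fermidist only uses numpy
# ufuncs it broadcasts, e.g. a column of energies against several chemical potentials.
fermidist(np.linspace(0.0, 10.0, 5)[:, np.newaxis], np.array([1.0, 4.0]), 1.0)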
def plot_fermidist(mu, kT):
energy = np.linspace(0,10.0,21)
plt.plot(energy, fermidist(energy, mu, kT))
plt.tick_params(direction='out')
plt.xlabel('$Energy$')
plt.ylabel('$F(Energy)$')
plt.title('Fermi Distribution')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu=(0.0,5.0), kT=(0.1,10.0))
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
<END_TASK> |
15,825 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to Weighted Ensemble Data Analysis in wepy
By
Step2: Running the simulation
Here we run some simulations so that we can showcase combining them for analysis. Since this tutorial isn't about running simulations we won't discuss this part much.
This first block is just to set up all the parameters and components for the simulation. We will be running multiple simulations with these and just change the reporters we are using.
Step3: First Simulation
Nothing fancy here, we just make a reporter that will generate the WepyHDF5 file. We need to do deepcopy on all the components because we will be reusing this initial state for the other simulations and passing them by reference here will mutate them. The Manager won't do copies automatically for you as this may be expensive in some situations.
Step4: Second Simulation
Same as the first one but we just want a separate dataset so we can demonstrate how to aggregate independently generated datasets later.
Step5: Third Simulation
For this simulation we will continue the second simulation using the outputs. While this isn't something you would likely do in practice all in the same script, this is something that must be dealt with when running very long simulations on machines which have restrictions on how long you can run them. To support this in a more robust way, there is the orchestration module which we encourage you to check out.
Step6: Analysis Workflow
Now that we have generated all of our simulation data we can start analyzing it.
We will start with a workflow something like this, where on one branch (green) we want to do some dimensionality reduction and cluster the states we collected into "macrostates".
On the other branch (blue, red, purple) we will do the necessary bookkeeping to get the walker family tree taking into account both the cloning & merging as well as any walkers that have warped through boundary conditions.
In the end we can combine these states with the walker family tree to calculate the transition probabilities between those macrostates. This end result is something similar to a Markov State Model (MSM) which we call a Conformation State Model (CSN). The distinction merely refers to some rather rigorous requirements a true Markov State Model must have that we aren't directly attempting to verify here.
Step7: Calculating Observables
In order to do the macrostate clustering we need to calculate some interesting observables over the state data. You could use absolute positions of the particles or the energies or something that is calculated on-the-fly by the simulations but we want to showcase how to do this yourself. For a real molecular system this might be the solvent accessible surface area or the radius of gyration.
Because our example is very simple (just two particles) there isn't much we can do. So we'll just calculate the Euclidean distance between the two particles. Not coincidentally, this is also the distance metric we used. We are actually using this distance for a similar purpose (clustering) except while the resampler (WExplore) was doing this "on-the-fly" we have all of the simulation data at once and so will likely get better boundaries between clusters than our resampler.
To do this we have to define a function pair_dist_obs of a particular form that we can then pass to the WepyHDF5.compute_observable function which maps it over the data. Since this is a small simulation with small molecules we can do it in memory in a single process. Real simulation data can be much larger, for this you are free to implement your own computational strategies using a lower level API or you can see if the methods in the wepy.analysis.distributed module are suitable, which uses the distributed computing framework dask.
Step8: When we run compute_observable it both returns the values and saves them into the HDF5 file. Depending on your workflow you may favor one over the other. In the end having the observables saved in the same HDF5 file makes indexing them easier and allows for some fancy queries when you rearrange the data into networks and trees.
We can see what the shape of this data is and that it matches the number of walkers (4) and cycles (100) we ran the simulation with.
We can also see that the computed feature is a 1-D feature vector. We retain the rank of a vector rather than a scalar because in most machine learning pipelines a vector is expected. Even if in this case its only length one.
Step9: Clustering Simulation Samples
For a larger dataset clustering can be significantly more involved as it will likely need to be done either using special clustering algorithms (like the KCenters algorithm implemented in MSMBuilder) and you will probably want to partition your data into training and test sets to validate your model.
For our purposes here we will just use a simple fast clustering method from sklearn, i.e. the MiniBatchKMeans algorithm just so you can appreciate the mechanism and so we have some macrostate labels to use for the later analyses.
First we have to get our data as a uniform array of feature vectors which is simple as concatenating the features for each walker trajectory together
Step10: Then we can choose our classifier hyperparameters and train it
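A minimal sketch of this step (illustrative only — the variable names here are stand-ins, not the notebook's own; the sklearn calls are standard):
import numpy as np
from sklearn.cluster import MiniBatchKMeans
traj_features = [np.random.random((100, 1)) for _ in range(4)]   # stand-in for the per-walker feature arrays
features = np.concatenate(traj_features)                         # one (n_frames, n_features) array
clf = MiniBatchKMeans(n_clusters=4, random_state=0)
cluster_labels = clf.fit_predict(features)                       # one macrostate label per frame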
Step11: Once we have the trained classifier and we don't have a test set, we can just use the already determined labels for the feature vectors and then assign this to the simulation dataset as it's own observable. To do this we just need to destructure the feature set back into trajectory shapes and use the add_observable method on our WepyHDF5 dataset
Step12: That's it for clustering!
For now we will leave it as is and do more things with the macrostate networks once we calculate the transition probabilities.
Just remember that even though we are dealing with data as nice big chunks that we are even calling "trajs" or "trajectories" they are not! The name "trajectory" is simply a convenient name for a collection of molecular dynamics frames, i.e. a slice of data containing positions, box vectors, and potentially velocities and energies, that is familiar to the community.
A single trajectory may be linear for portions where cloning and merging did not occur, but in others the cloned walkers will be swapped in and out along the "trajectory".
However, if we ran a simulation in which no cloning and merging occurs (say with the NoResampler) then the trajectories indeed would be valid continuous trajectories. Strictly speaking nothing in wepy enforces this and in reality resamplers are free to mangle the contiguity of each trajectory. That is, never rely on the well-orderdness of trajectories in WepyHDF5 runs!
The following sections show the correct approach and how to extract contiguous trajectories from walker trees.
Computing Macrostate Transition Counts & Probabilities
Now that we have computed the observables and clustered them we can compute the transitions between those macrostates. The first step is a purely mechanical process and is just book-keeping about where and when walkers were cloned and merged. You can think of it as just making the family lineage. During the simulation the individual clones and merges are recorded as distinct events, like individual facts. Now we have to go back through this and assemble it into a tree-like data structure.
In the second revision of this workflow we will show a higher-level way to do this, but which also makes use of multiple branching simulations. For now we do it the semi-manual way of transforming the "resampling records" to a "resampling panel" (the run_resampling_panel method) and then condensing this to a "net parent table" (the parent_panel function). Finally we add a mask to this table to account for any walkers that warped through boundary conditions (the parent_table_discontinuities). This is necessary for getting transition probabilities correctly since movement through a boundary acts like a probability sink and doesn't represent a physical transition.
The resampler.DECISION and UnbindingBC classes are simply the ones we used in the simulations above. These classes contain methods that allow for proper accounting to be done. The resampler.DECISION is just MultiCloneMergeDecision.
Step13: Side Notes on Lineages and Continuous Trajectories
Now that we have the entire table (which represents the walker tree) we can find contiguous trajectories from it. One way to do this is via the ancestors function which works on the parent table. We just need to choose a walker at a certain point of time and it will retrieve the entire history of this walker up until that point.
In this example I picked one of the walkers at the end of the simulation and showed the first few entries from the resulting "trace". Traces in wepy are essentially collections of complex indices on the runs, trajectories, and frames (and also contigs which will be discussed later) in the data. Because these traces are untyped it can sometimes be confusing as to what the indices are. Be sure to pay close attention to the documentation which should tell you. There are also some disambiguations in the glossary as to the different types of traces.
Also pay attention to the fact that we are using the parent_table, not parent_table_discs. A discontinuous warp breaks a lineage from the ancestors function.
Step14: In this trace the tuples are complex indices where the values are
Step15: Notice that we are excluding the last entry in the trace. Is this an error? It is not, but sometimes the indexes can be confusing. When trying to debug these kinds of issues, it can be very useful to have these two diagrams handy as a reference to the order of events taking place in the simulation.
The first is a schematic that focuses on the generation and inheritance of walker states at an abstract level. In this diagram the colors indicate unique states and the boxes they sit in represent the walker index, which is the trajectory index in a WepyHDF5 run.
This figure is particularly helpful in understanding things like the parent table, especially when combined with boundary conditions. The dot and slash in the boundary condition lines indicate that boundary warping can be continuous or discontinuous. As you can see the slash is a discontinuous even because the purple state is replaced with the original white state. Indicating a source-sink non-equilibrium cycle. The dotted line in the resampling portion indicates that walker 1 was merged into walker 2 (AKA "squashed") and donated its weight to it but ending the lineage of it's state.
Step16: In this diagram we show the actual flow of data in the wepy simulation manager and components.
Step17: To explain the wrinkle in the trace indexing above we must consider that the parent table above was generated from all of the resampling records available. Resampling occurs after dynamics is finished, and as the name implies generates no novel samples. And so rather than recording frames twice in the HDF5 before and after sampling we simply record the action of what happens.
When we ran the simulation above and returned the walkers from run_simulation these walkers were the resampled walkers, but those in the HDF5 are those directly produced by the runner. This way we never lose any sampled data. Even if that walker is subsequently warped or merged away.
Because wepy is focused on molecular dynamics simulations we even provide a method for generating mdtraj trajectories from traces. This is hugely useful for quickly converting to a more common format that can be then read by molecular structure viewers like VMD, PyMOL, or Chimera. This relies on the topology of the molecule also being stored in the HDF5 file (currently this is achieved using a JSON format file). But just know that you never need to rely on this method to get the data you need for this kind of conversion i.e. positions and box vectors.
Step18: Also note that these trajectories do not include the actual initial structures that started the simulations. These too can be accessed if you need them via the initial_walker_fields and initial_walkers_to_mdtraj methods.
Back to calculating Transition Probabilities
Now that we have a better idea of the continuity of walker trajectories in our simulation data we can turn our attention back to generating transition probabilities between our labelled macrostates.
The simplest way to do this is simply to go through the continuous trajectories and take each pair of states A -> B, look at the labels and then tally the macrostate transition counts.
Naively it seems our job is done. However, things get complicated when we want to consider lag times over the trajectories. That is recording the macrostate transition between not just A -> B but rather the A -> C transition given A -> B -> C. This is desirable for a number of reasons when computing probabilities using stochastic methods like Markov State Models where we want memoryless transitions.
Doing this over trajectory trees is not as trivial. First consider what this lag time of 3 would look like for linear MD.
Step19: Now consider a branchpoint in a tree of trajectories. We must be very careful not to double count or undercount transitions.
Step20: The algorithm to achieve this has been implemented in the sliding_window function which generates traces corresponding to the red segments in the above figures.
Step21: These window traces can then be combined with cluster labels (as an observable) to produce a transition probability matrix directly. See the wepy.analysis.transitions module for other similar functions if you want to control how you are aggregating forward and backward transitions.
Step22: Contig Trees
The final step before we can construct a network of macrostates is to combine everything into a ContigTree abstract data type. In early prototypes we supported construction of networks from both ContigTree and parent-table/transition-matrix. This was cumbersome and confusing and so we consolidated this interface to only use ContigTree as it is more general, powerful, and easier to use as a user.
If you have fastidiously followed up until this point you may be disappointed to learn that most of these steps are also not exactly needed when using the ContigTree. However, these data structures are still used in certain contexts and are useful in helping to understand all the layers of structure.
So lets update our original workflow to see where the ContigTree fits in.
Step23: As you can see most of those complex intermediary formats are folded into the ContigTree representation. The ContigTre is a combination of a network data structure (networkx.DiGraph) and the WepyHDF5 data.
In this tree each node is a single cycle of a simulation. Thus it can express not only ensembles of walkers cloning and merging but whole simulations of ensembles which are branching. Its like there is a tree inside of a tree. Technically, they are forests as well since there is often multiple roots. This whole tree inside a tree is too much complexity for what we are doing at the moment so we will defer our discussion of it until we need it when analyzing multiple simulations.
What we want is to get back to the part where we had a transition probability matrix and combine that with our macrostate labels to make a CSN.
So we simply construct the ContigTree with our dataset and the classes that describe the cloning & merging and boundary conditions like we did before.
Step24: What have we done? Well the ContigTree has an extensive API for a lot of stuff so lets try to focus in on a few interesting ones before we get to the transition probabilities.
One of the main features is a richer set of functions for generating commonly useful continuous trajectories.
Here we can get the indices (as unorderd traces) for
Step25: And then we can take these events and generate all of the full lineages for them in one line.
Step26: Since we only have a single run our contig tree is really just a single contig which we can get with this little bit of magic
Step27: This Contig object is a subclass of the ContigTree but isn't a tree. This restriction opens a few more doors to us that we will use, but for now it makes the abstraction simpler.
Step28: Multi-Run Data Analysis
So far we have assumed nice contiguous data. This is not so with MD data in general which may be split up into different runs.
If you've checked out thinking about meta-trees, don't worry. You don't really have to think about it. The benefit is however that if the need ever arises and you have multiple simulations run from a single intermediary checkpoint, you won't have to throw out one or the other continuations
Step29: We want to be able to get transitions between the original run and all of its continuations without double counting.
Contig Tree
The contig tree is a tree (technically forest) of continuous simulations. For us the branchings happen when a run is continued multiple times.
Step30: The Spanning Contigs are the contigs drawn before
Step31: Run 1 continues run 0, and run 2 continues run 0.
This only works within a single file (but we can do interfile linking).
API for interacting with continuations
Step32: A spanning contig is a contig which goes from a root run (a run that does not continue another run) and a leaf run (a run which is not continued). These can be enumerated
Step33: The Contig Tree Class
Since we have given it everything it needs to make the parent table from the previous example it can automate it all with the appropriate sliding windows algorithms for multiple runs!
The Macro-State Network Class
In order to make a Conformation State Network (CSN) we need to have the transitions (end-points of sliding windows) and the labels of the micro-states (frames of simulations) which can be from clustering or whatever else.
Because a ContigTree can, in the most degenerate form, be a single run, it is the most appropriate input for a general macro-state network.
Step34: Because this object directly wraps the WepyHDF5 file, we have access to all the data including weights, positions, box sizes etc. and so we can perform all kinds of fancy operations on a macrostate basis.
Weights/Free Energy of the Macrostates
Since we are principally interested in the free energy of macrostates the weights of the macrostates are a canonical example.
Step35: Transition Probabilitiy Matrix
To make an actual network we would need the transition probabilities as well, which were calculated with the lag time given when we created the network.
Step36: Updated User Workflow
Synergism with CSNAnalysis
This is the asymmetric probability matrix. You can calculate interesting things with it related to paths etc. with the CSNAnalysis package from here on.
The purpose of this class is to calculate transition probabilities and create a direct interface to microstates in the HDF5 from a macrostate perspective.
Step37: Committor Probabilities
Calculate the distance between the two particles
Step38: Determine the source and sink basins and compute the committor probabilities
Step39: Microstates in the Macrostate
The macrostate keeps track of the microstates by a 'trace' which in wepy parlance is just a list of indices into the main wepy HDF5 data structure to a specific microstate, i.e. list of (run index, walker index, cycle index) tuples.
Step40: Visualizing Microstates
This is used to make it very easy to visualize the microstates by allowing export to other trajectory software better supported by viewers like mdtraj.
Step41: Linking Runs Together
When doing many runs over different jobs and with continuations we found that it would be useful to be able to link them all into a single file and thus not have to worry about different files when doing analysis.
Luckily, the HDF5 standard provides a way to do this and is now incorporated into wepy.hdf5.
The main methods we can use to accomplish this are
Step42: Code | <ASSISTANT_TASK:>
Python Code:
import os
import os.path as osp
import random as rand
from pathlib import Path
import shutil as sh
import warnings
from PIL import Image
warnings.filterwarnings("ignore")
rand.seed(33)
## Inputs and Outputs
input_dir = Path("input")
outputs_dir = Path("_output")
media_dir = input_dir / 'media'
sh.rmtree(outputs_dir)
os.makedirs(outputs_dir, exist_ok=True)
def rescale(image, factor):
    """Helper function to show the images at the right scale."""
new_shape = [int(dim * factor) for dim in image.size]
new_image = image.resize(new_shape, Image.ANTIALIAS)
return new_image
# load all the images
CSN_workflow_longway_im = Image.open(media_dir / "CSN_workflow_longway.png")
walker_history_im = Image.open(media_dir / "walker_history_schematic.png")
sim_manager_dataflow_im = Image.open(media_dir / "sim_manager_dataflow.png")
trad_md_windows_im = Image.open(media_dir / "trad_MD_windows.png")
we_md_windows_im = Image.open(media_dir / "we_MD_windows.png")
continuation_runs_im = Image.open(media_dir / "continuation_runs.png")
contig_tree_im = Image.open(media_dir / "contig_tree.png")
CSN_workflow_shortway_im = Image.open(media_dir / "CSN_workflow_shortway.png")
linking_files_im = Image.open(media_dir / "linking_files.png")
Explanation: Introduction to Weighted Ensemble Data Analysis in wepy
By: Samuel Lotz
Before we move on to the interesting stuff we first just set up some boilerplate stuff for running the tutorial.
End of explanation
import sys
from copy import copy, deepcopy
import os
import os.path as osp
import pickle
import numpy as np
import scipy.spatial.distance as scidist
import simtk.openmm.app as omma
import simtk.openmm as omm
import simtk.unit as unit
from openmm_systems.test_systems import LennardJonesPair
import mdtraj as mdj
from wepy.util.mdtraj import mdtraj_to_json_topology
from wepy.sim_manager import Manager
from wepy.resampling.distances.distance import Distance
from wepy.resampling.resamplers.wexplore import WExploreResampler
from wepy.walker import Walker
from wepy.runners.openmm import OpenMMRunner, OpenMMState
from wepy.runners.openmm import UNIT_NAMES, GET_STATE_KWARG_DEFAULTS
from wepy.work_mapper.mapper import Mapper
from wepy.boundary_conditions.unbinding import UnbindingBC
from wepy.reporter.hdf5 import WepyHDF5Reporter
from wepy.hdf5 import WepyHDF5
## PARAMETERS
# we use the Reference platform because this is just a test
PLATFORM = 'Reference'
# Langevin Integrator
TEMPERATURE= 300.0*unit.kelvin
FRICTION_COEFFICIENT = 1/unit.picosecond
# step size of time integrations
STEP_SIZE = 0.002*unit.picoseconds
# Resampler parameters
# the maximum weight allowed for a walker
PMAX = 0.5
# the minimum weight allowed for a walker
PMIN = 1e-12
# the maximum number of regions allowed under each parent region
MAX_N_REGIONS = (10, 10, 10, 10)
# the maximum size of regions, new regions will be created if a walker
# is beyond this distance from each voronoi image unless there is an
# already maximal number of regions
MAX_REGION_SIZES = (1, 0.5, .35, .25) # nanometers
# boundary condition parameters
# maximum distance between between any atom of the ligand and any
# other atom of the protein, if the shortest such atom-atom distance
# is larger than this the ligand will be considered unbound and
# restarted in the initial state
CUTOFF_DISTANCE = 1.0 # nm
# reporting parameters
# these are the properties of the states (i.e. from OpenMM) which will
# be saved into the HDF5
SAVE_FIELDS = ('positions', 'box_vectors', 'velocities')
# make a dictionary of units for adding to the HDF5
units = dict(UNIT_NAMES)
## System
# make the test system
test_sys = LennardJonesPair()
## Molecular Topology
mdtraj_topology = mdj.Topology.from_openmm(test_sys.topology)
json_str_top = mdtraj_to_json_topology(mdtraj_topology)
## Runner
# make the integrator
integrator = omm.LangevinIntegrator(TEMPERATURE,
FRICTION_COEFFICIENT,
STEP_SIZE)
# make a context and set the positions
context = omm.Context(test_sys.system,
copy(integrator))
context.setPositions(test_sys.positions)
# get the data from this context so we have a state to start the
# simulation with
get_state_kwargs = dict(GET_STATE_KWARG_DEFAULTS)
init_sim_state = context.getState(**get_state_kwargs)
init_state = OpenMMState(init_sim_state)
# initialize the runner
runner = OpenMMRunner(test_sys.system,
test_sys.topology,
integrator,
platform=PLATFORM)
## Distance Metric
# we define a simple distance metric for this system, assuming the
# positions are in a 'positions' field
class PairDistance(Distance):
def __init__(self, metric=scidist.euclidean):
self.metric = metric
def image(self, state):
return state['positions']
def image_distance(self, image_a, image_b):
dist_a = self.metric(image_a[0], image_a[1])
dist_b = self.metric(image_b[0], image_b[1])
return np.abs(dist_a - dist_b)
# make a distance object which can be used to compute the distance
# between two walkers, for our scorer class
distance = PairDistance()
## Resampler
resampler = WExploreResampler(distance=distance,
init_state=init_state,
max_region_sizes=MAX_REGION_SIZES,
max_n_regions=MAX_N_REGIONS,
pmin=PMIN, pmax=PMAX)
## Boundary Conditions
# initialize the unbinding boundary conditions
ubc = UnbindingBC(cutoff_distance=CUTOFF_DISTANCE,
initial_state=init_state,
topology=json_str_top,
ligand_idxs=np.array(test_sys.ligand_indices),
receptor_idxs=np.array(test_sys.receptor_indices))
## Work Mapper
# a simple work mapper
mapper = Mapper()
## initial walkers
n_walkers = 4
init_weight = 1.0 / n_walkers
init_walkers = [Walker(OpenMMState(init_sim_state), init_weight) for i in range(n_walkers)]
## run parameters
n_cycles = 100
n_steps = 1000
# steps for each cycle
steps = [n_steps for i in range(n_cycles)]
Explanation: Running the simulation
Here we run some simulations so that we can showcase combining them for analysis. Since this tutorial isn't about running simulations we won't discuss this part much.
This first block is just to set up all the parameters and components for the simulation. We will be running multiple simulations with these and just change the reporters we are using.
End of explanation
run1_hdf5_reporter = WepyHDF5Reporter(
file_path=str(outputs_dir / "results_run1.wepy.h5"),
mode='w',
save_fields=SAVE_FIELDS,
resampler=resampler,
boundary_conditions=ubc,
topology=json_str_top,
units=units)
sim_manager_1 = Manager(deepcopy(init_walkers),
runner=deepcopy(runner),
resampler=deepcopy(resampler),
boundary_conditions=deepcopy(ubc),
work_mapper=deepcopy(mapper),
reporters=[run1_hdf5_reporter]
)
(run1_walkers,
(run1_runner, run1_bc, run1_resampler)) = \
sim_manager_1.run_simulation(n_cycles, steps)
Explanation: First Simulation
Nothing fancy here, we just make a reporter that will generate the WepyHDF5 file. We need to do deepcopy on all the components because we will be reusing this initial state for the other simulations and passing them by reference here will mutate them. The Manager won't do copies automatically for you as this may be expensive in some situations.
End of explanation
run2_hdf5_reporter = WepyHDF5Reporter(
file_path=str(outputs_dir / "results_run2.wepy.h5"),
mode='w',
save_fields=SAVE_FIELDS,
resampler=resampler,
boundary_conditions=ubc,
topology=json_str_top,
units=units)
# run two simulations from the initial conditions
sim_manager_2 = Manager(deepcopy(init_walkers),
runner=deepcopy(runner),
resampler=deepcopy(resampler),
boundary_conditions=deepcopy(ubc),
work_mapper=deepcopy(mapper),
reporters=[run2_hdf5_reporter]
)
(run2_walkers,
(run2_runner, run2_bc, run2_resampler)) = \
sim_manager_2.run_simulation(n_cycles, steps)
Explanation: Second Simulation
Same as the first one but we just want a separate dataset so we can demonstrate how to aggregate independently generated datasets later.
End of explanation
run3_hdf5_reporter = WepyHDF5Reporter(
file_path=str(outputs_dir / "results_run3.wepy.h5"),
mode='w',
save_fields=SAVE_FIELDS,
resampler=run2_resampler,
boundary_conditions=run2_bc,
topology=json_str_top,
units=units)
# run two simulations from the initial conditions
sim_manager_3 = Manager(deepcopy(init_walkers),
runner=deepcopy(run2_runner),
resampler=deepcopy(run2_resampler),
boundary_conditions=deepcopy(run2_bc),
work_mapper=deepcopy(mapper),
reporters=[run3_hdf5_reporter]
)
(run3_walkers,
(run3_runner, run3_bc, run3_resampler)) = \
sim_manager_3.run_simulation(n_cycles, steps)
Explanation: Third Simulation
For this simulation we will continue the second simulation using the outputs. While this isn't something you would likely do in practice all in the same script, this is something that must be dealt with when running very long simulations on machines which have restrictions on how long you can run them. To support this in a more robust way, there is the orchestration module which we encourage you to check out.
End of explanation
rescale(CSN_workflow_longway_im, 1.0)
Explanation: Analysis Workflow
Now that we have generated all of our simulation data we can start analyzing it.
We will start with a workflow something like this, where on one branch (green) we want to do some dimensionality reduction and cluster the states we collected into "macrostates".
On the other branch (blue, red, purple) we will do the necessary bookkeeping to get the walker family tree taking into account both the cloning & merging as well as any walkers that have warped through boundary conditions.
In the end we can combine these states with the walker family tree to calculate the transition probabilities between those macrostates. This end result is something similar to a Markov State Model (MSM) which we call a Conformation State Model (CSN). The distinction merely refers to some rather rigorous requirements a true Markov State Model must have that we aren't directly attempting to verify here.
End of explanation
def pair_dist_obs(fields_d, *args, **kwargs):
atomA_coords = fields_d['positions'][:,0]
atomB_coords = fields_d['positions'][:,1]
dists = np.array([np.array([scidist.euclidean(atomA_coords[i],
atomB_coords[i])])
for i in range(atomA_coords.shape[0])
])
return dists
wepy1 = WepyHDF5(outputs_dir / 'results_run1.wepy.h5', mode='r+')
with wepy1:
# compute the observable with the function
# and automatically saving it as an extra trajectory field
obs = wepy1.compute_observable(pair_dist_obs,
['positions'],
(),
save_to_hdf5='pair_dist',
return_results=True
)
Explanation: Calculating Observables
In order to do the macrostate clustering we need to calculate some interesting observables over the state data. You could use absolute positions of the particles or the energies or something that is calculated on-the-fly by the simulations but we want to showcase how to do this yourself. For a real molecular system this might be the solvent accessible surface area or the radius of gyration.
Because our example is very simple (just two particles) there isn't much we can do. So we'll just calculate the Euclidean distance between the two particles. Not coincidentally, this is also the distance metric we used. We are actually using this distance for a similar purpose (clustering) except while the resampler (WExplore) was doing this "on-the-fly" we have all of the simulation data at once and so will likely get better boundaries between clusters than our resampler.
To do this we have to define a function pair_dist_obs of a particular form that we can then pass to the WepyHDF5.compute_observable function which maps it over the data. Since this is a small simulation with small molecules we can do it in memory in a single process. Real simulation data can be much larger, for this you are free to implement your own computational strategies using a lower level API or you can see if the methods in the wepy.analysis.distributed module are suitable, which uses the distributed computing framework dask.
End of explanation
print("number of walkers:", len(obs))
print("number of cycles:", obs[0].shape[0])
print("feature vector shape:", obs[0].shape[1:])
Explanation: When we run compute_observable it both returns the values and saves them into the HDF5 file. Depending on your workflow you may favor one over the other. In the end having the observables saved in the same HDF5 file makes indexing them easier and allows for some fancy queries when you rearrange the data into networks and trees.
We can see what the shape of this data is and that it matches the number of walkers (4) and cycles (100) we ran the simulation with.
We can also see that the computed feature is a 1-D feature vector. We retain the rank of a vector rather than a scalar because in most machine learning pipelines a vector is expected. Even if in this case its only length one.
End of explanation
with wepy1:
features = np.concatenate([wepy1.get_traj_field(0, i, 'observables/pair_dist')
for i in range(wepy1.num_run_trajs(0))])
print(features.shape)
Explanation: Clustering Simulation Samples
For a larger dataset, clustering can be significantly more involved: it will likely need to be done using special clustering algorithms (like the KCenters algorithm implemented in MSMBuilder), and you will probably want to partition your data into training and test sets to validate your model.
For our purposes here we will just use a simple fast clustering method from sklearn, i.e. the MiniBatchKMeans algorithm just so you can appreciate the mechanism and so we have some macrostate labels to use for the later analyses.
First we have to get our data as a uniform array of feature vectors, which is as simple as concatenating the features for each walker trajectory together:
End of explanation
from sklearn.cluster import MiniBatchKMeans
clf = MiniBatchKMeans(n_clusters=4,
batch_size=10,
random_state=1)
clf.fit(features)
print(clf.labels_.shape)
print(clf.labels_[0:10])
Explanation: Then we can choose our classifier hyperparameters and train it:
End of explanation
with wepy1:
# destructure the features
obs_trajs = []
start_idx = 0
for traj_idx in range(wepy1.num_run_trajs(0)):
num_traj_frames = wepy1.num_traj_frames(0, traj_idx)
obs_trajs.append(clf.labels_[start_idx : start_idx + num_traj_frames])
start_idx += num_traj_frames
print("observables shape:", len(obs_trajs), len(obs_trajs[0]))
# add it as an observable
wepy1.add_observable('minibatch-kmeans_pair-dist_4_10_1',
[obs_trajs])
Explanation: Once we have the trained classifier and we don't have a test set, we can just use the already determined labels for the feature vectors and then assign this to the simulation dataset as it's own observable. To do this we just need to destructure the feature set back into trajectory shapes and use the add_observable method on our WepyHDF5 dataset:
End of explanation
from wepy.analysis.parents import (
parent_panel,
net_parent_table,
parent_table_discontinuities,
)
with wepy1:
# make a parent matrix from the hdf5 resampling records for run 0
resampling_panel = wepy1.run_resampling_panel(0)
# get the warping records
warping_records = wepy1.warping_records([0])
parent_panel = parent_panel(
resampler.DECISION,
resampling_panel)
parent_table = net_parent_table(parent_panel)
parent_table_discs = parent_table_discontinuities(
UnbindingBC,
parent_table,
warping_records
)
Explanation: That's it for clustering!
For now we will leave it as is and do more things with the macrostate networks once we calculate the transition probabilities.
Just remember that even though we are dealing with data as nice big chunks that we are even calling "trajs" or "trajectories" they are not! The name "trajectory" is simply a convenient name for a collection of molecular dynamics frames, i.e. a slice of data containing positions, box vectors, and potentially velocities and energies, that is familiar to the community.
A single trajectory may be linear for portions where cloning and merging did not occur, but in others the cloned walkers will be swapped in and out along the "trajectory".
However, if we ran a simulation in which no cloning and merging occurs (say with the NoResampler) then the trajectories indeed would be valid continuous trajectories. Strictly speaking nothing in wepy enforces this and in reality resamplers are free to mangle the contiguity of each trajectory. That is, never rely on the well-orderedness of trajectories in WepyHDF5 runs!
The following sections show the correct approach and how to extract contiguous trajectories from walker trees.
Computing Macrostate Transition Counts & Probabilities
Now that we have computed the observables and clustered them we can compute the transitions between those macrostates. The first step is a purely mechanical process and is just book-keeping about where and when walkers were cloned and merged. You can think of it as just making the family lineage. During the simulation the individual clones and merges are recorded as distinct events, like individual facts. Now we have to go back through this and assemble it into a tree-like data structure.
In the second revision of this workflow we will show a higher-level way to do this, but which also makes use of multiple branching simulations. For now we do it the semi-manual way of transforming the "resampling records" to a "resampling panel" (the run_resampling_panel method) and then condensing this to a "net parent table" (the parent_panel function). Finally we add a mask to this table to account for any walkers that warped through boundary conditions (the parent_table_discontinuities). This is necessary for getting transition probabilities correctly since movement through a boundary acts like a probability sink and doesn't represent a physical transition.
The resampler.DECISION and UnbindingBC classes are simply the ones we used in the simulations above. These classes contain methods that allow for proper accounting to be done. The resampler.DECISION is just MultiCloneMergeDecision.
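To make the bookkeeping concrete, here is a purely illustrative toy net parent table (not taken from our data): each row is a cycle, each column is a walker slot, and each entry is the index of that walker's parent in the previous cycle. This is a rough sketch of the kind of structure parent_table_discontinuities produces, where a negative entry is assumed to mark a discontinuity:
# toy example only, not generated from our simulation
toy_parent_table = [
    [0, 1, 2, 3],   # cycle 1: every walker continues from its own slot
    [0, 0, 2, 3],   # cycle 2: walker 0 was cloned into slots 0 and 1 (the old walker 1 was squashed)
    [0, 1, -1, 3],  # cycle 3: walker 2 warped through the boundary, so its lineage is cut
]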
End of explanation
from wepy.analysis.parents import ancestors
lineage_trace = ancestors(parent_table,
100,
3
)
print(lineage_trace[0:3])
Explanation: Side Notes on Lineages and Continuous Trajectories
Now that we have the entire table (which represents the walker tree) we can find contiguous trajectories from it. One way to do this is via the ancestors function which works on the parent table. We just need to choose a walker at a certain point of time and it will retrieve the entire history of this walker up until that point.
In this example I picked one of the walkers at the end of the simulation and showed the first few entries from the resulting "trace". Traces in wepy are essentially collections of complex indices on the runs, trajectories, and frames (and also contigs which will be discussed later) in the data. Because these traces are untyped it can sometimes be confusing as to what the indices are. Be sure to pay close attention to the documentation which should tell you. There are also some disambiguations in the glossary as to the different types of traces.
Also pay attention to the fact that we are using the parent_table, not parent_table_discs. A discontinuous warp breaks a lineage from the ancestors function.
End of explanation
with wepy1:
lineage_fields = wepy1.get_run_trace_fields(0,
lineage_trace[:-1],
['weights',
'observables/pair_dist'])
print("weights:")
print(lineage_fields['weights'][0:3])
print("LJ-pair distance:")
print(lineage_fields['observables/pair_dist'][0:3])
Explanation: In this trace the tuples are complex indices where the values are: traj_idx, cycle_idx. Which uniquely identifies a walker at a single point in time relative to the parent table. Because the parent table was created straight from run 0 these indices are valid for that run.
Using this information we can extract the fields of these walkers as a collection of arrays. For this kind of trace we use get_run_trace_fields since we are only concerned with a single run. Here we are getting the weights of the walkers and the pair distance which we calculated earlier.
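As a quick illustrative extra (a minimal matplotlib sketch, not part of the original analysis), we could also plot the pair distance along this lineage to see the walker's history at a glance:
# plot the pair distance for every frame in the lineage
plt.plot(lineage_fields['observables/pair_dist'][:, 0])
plt.xlabel('cycle index along the lineage')
plt.ylabel('LJ pair distance')
plt.show()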
End of explanation
rescale(walker_history_im, 1.0)
Explanation: Notice that we are excluding the last entry in the trace. Is this an error? It is not, but sometimes the indexes can be confusing. When trying to debug these kinds of issues, it can be very useful to have these two diagrams handy as a reference to the order of events taking place in the simulation.
The first is a schematic that focuses on the generation and inheritance of walker states at an abstract level. In this diagram the colors indicate unique states and the boxes they sit in represent the walker index, which is the trajectory index in a WepyHDF5 run.
This figure is particularly helpful in understanding things like the parent table, especially when combined with boundary conditions. The dot and slash in the boundary condition lines indicate that boundary warping can be continuous or discontinuous. As you can see the slash is a discontinuous even because the purple state is replaced with the original white state. Indicating a source-sink non-equilibrium cycle. The dotted line in the resampling portion indicates that walker 1 was merged into walker 2 (AKA "squashed") and donated its weight to it but ending the lineage of it's state.
End of explanation
rescale(sim_manager_dataflow_im, 0.65)
Explanation: In this diagram we show the actual flow of data in the wepy simulation manager and components.
End of explanation
with wepy1:
mdj_traj = wepy1.run_trace_to_mdtraj(0,
lineage_trace[:-1])
print(mdj_traj)
# save one of the frames as a PDB as a reference topology
mdj_traj[0].save_pdb(str(outputs_dir / 'lj-pair.pdb'))
# save the lineage as a DCD trajectory
mdj_traj.save_dcd(str(outputs_dir / 'lj-pair_walker_lineage.dcd'))
Explanation: To explain the wrinkle in the trace indexing above we must consider that the parent table above was generated from all of the resampling records available. Resampling occurs after dynamics is finished, and as the name implies generates no novel samples. And so rather than recording frames twice in the HDF5 before and after sampling we simply record the action of what happens.
When we ran the simulation above and returned the walkers from run_simulation these walkers were the resampled walkers, but those in the HDF5 are those directly produced by the runner. This way we never lose any sampled data. Even if that walker is subsequently warped or merged away.
Because wepy is focused on molecular dynamics simulations we even provide a method for generating mdtraj trajectories from traces. This is hugely useful for quickly converting to a more common format that can be then read by molecular structure viewers like VMD, PyMOL, or Chimera. This relies on the topology of the molecule also being stored in the HDF5 file (currently this is achieved using a JSON format file). But just know that you never need to rely on this method to get the data you need for this kind of conversion i.e. positions and box vectors.
End of explanation
rescale(trad_md_windows_im, 0.6)
Explanation: Also note that these trajectories do not include the actual initial structures that started the simulations. These too can be accessed if you need them via the initial_walker_fields and initial_walkers_to_mdtraj methods.
Back to calculating Transition Probabilities
Now that we have a better idea of the continuity of walker trajectories in our simulation data we can turn our attention back to generating transition probabilities between our labelled macrostates.
The simplest way to do this is simply to go through the continuous trajectories and take each pair of states A -> B, look at the labels and then tally the macrostate transition counts.
Naively it seems our job is done. However, things get complicated when we want to consider lag times over the trajectories. That is recording the macrostate transition between not just A -> B but rather the A -> C transition given A -> B -> C. This is desirable for a number of reasons when computing probabilities using stochastic methods like Markov State Models where we want memoryless transitions.
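Before letting wepy handle the tree-aware version, here is a minimal pure-Python sketch (toy labels, not our data) of what counting lagged transitions along a single linear trajectory of macrostate labels looks like:
toy_labels = [0, 0, 1, 1, 2, 2, 1, 0]
lag = 3
n_states = 3
toy_counts = np.zeros((n_states, n_states))
for i in range(len(toy_labels) - lag):
    # count the transition from the state at cycle i to the state at cycle i + lag
    toy_counts[toy_labels[i], toy_labels[i + lag]] += 1
print(toy_counts)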
Doing this over trajectory trees is not as trivial. First consider what this lag time of 3 would look like for linear MD.
End of explanation
rescale(we_md_windows_im, 0.6)
Explanation: Now consider a branchpoint in a tree of trajectories. We must be very careful not to double count or undercount transitions.
End of explanation
from wepy.analysis.parents import sliding_window
from wepy.analysis.transitions import run_transition_probability_matrix
# use the parent matrix to generate the sliding windows
windows = list(sliding_window(np.array(parent_table_discs), 10))
Explanation: The algorithm to achieve this has been implemented in the sliding_window function which generates traces corresponding to the red segments in the above figures.
End of explanation
# make the transition matrix from the windows
with wepy1:
transprob_mat = run_transition_probability_matrix(
wepy1,
0,
"observables/minibatch-kmeans_pair-dist_4_10_1",
windows)
print(transprob_mat)
Explanation: These window traces can then be combined with cluster labels (as an observable) to produce a transition probability matrix directly. See the wepy.analysis.transitions module for other similar functions if you want to control how you are aggregating forward and backward transitions.
End of explanation
rescale(CSN_workflow_shortway_im, 1.0)
Explanation: Contig Trees
The final step before we can construct a network of macrostates is to combine everything into a ContigTree abstract data type. In early prototypes we supported construction of networks from both ContigTree and parent-table/transition-matrix. This was cumbersome and confusing and so we consolidated this interface to only use ContigTree as it is more general, powerful, and easier to use as a user.
If you have fastidiously followed up until this point you may be disappointed to learn that most of these steps are also not exactly needed when using the ContigTree. However, these data structures are still used in certain contexts and are useful in helping to understand all the layers of structure.
So lets update our original workflow to see where the ContigTree fits in.
End of explanation
from wepy.analysis.contig_tree import ContigTree
contigtree = ContigTree(wepy1,
decision_class=resampler.DECISION,
boundary_condition_class=UnbindingBC,
)
resampler.DECISION.ENUM.SQUASH
Explanation: As you can see most of those complex intermediary formats are folded into the ContigTree representation. The ContigTre is a combination of a network data structure (networkx.DiGraph) and the WepyHDF5 data.
In this tree each node is a single cycle of a simulation. Thus it can express not only ensembles of walkers cloning and merging but whole simulations of ensembles which are branching. Its like there is a tree inside of a tree. Technically, they are forests as well since there is often multiple roots. This whole tree inside a tree is too much complexity for what we are doing at the moment so we will defer our discussion of it until we need it when analyzing multiple simulations.
What we want is to get back to the part where we had a transition probability matrix and combine that with our macrostate labels to make a CSN.
So we simply construct the ContigTree with our dataset and the classes that describe the cloning & merging and boundary conditions like we did before.
End of explanation
warp_events = contigtree.warp_trace()
final_walkers = contigtree.final_trace()
squashed_walkers = contigtree.resampling_trace(resampler.DECISION.ENUM.SQUASH)
Explanation: What have we done? Well the ContigTree has an extensive API for a lot of stuff so lets try to focus in on a few interesting ones before we get to the transition probabilities.
One of the main features is a richer set of functions for generating commonly useful continuous trajectories.
Here we can get the indices (as unordered traces) for:
all walkers that were warped
all of the final walkers in the simulation
all of the squashed walkers
End of explanation
warp_lineages = contigtree.lineages(squashed_walkers)
print(warp_lineages)
Explanation: And then we can take these events and generate all of the full lineages for them in one line.
End of explanation
with contigtree:
contig = contigtree.make_contig(contigtree.spanning_contig_traces()[0])
print(contig)
Explanation: Since we only have a single run our contig tree is really just a single contig which we can get with this little bit of magic:
End of explanation
contigtree.sliding_windows(3)[-5:-1]
Explanation: This Contig object is a subclass of the ContigTree but isn't a tree. This restriction opens a few more doors to us that we will use, but for now it makes the abstraction simpler.
End of explanation
rescale(continuation_runs_im, 0.8)
Explanation: Multi-Run Data Analysis
So far we have assumed nice contiguous data. This is not so with MD data in general which may be split up into different runs.
If you've checked out thinking about meta-trees, don't worry. You don't really have to think about it. The benefit is however that if the need ever arises and you have multiple simulations run from a single intermediary checkpoint, you won't have to throw out one or the other continuations
End of explanation
rescale(contig_tree_im, 0.85)
Explanation: We want to be able to get transitions between the original run and all of its continuations without double counting.
Contig Tree
The contig tree is a tree (technically forest) of continuous simulations. For us the branchings happen when a run is continued multiple times.
End of explanation
print("Runs in this file:", wepy1.run_idxs)
print("Continuations using the API method:\n", wepy1.continuations)
print("Where it is actually stored in the HDF5:\n", wepy1.h5['_settings/continuations'][:])
Explanation: The Spanning Contigs are the contigs drawn before: (0,3) (0,4) (0,5) (1,) (2,)
Storage Implementation: Continuations
The only thing that must exist is a specification of the continuations that exist in a WepyHDF5 file.
End of explanation
print("Contig {} has {} frames".format([0], wepy1.contig_n_cycles([0])))
print("Contig {} has {} frames".format([1], wepy1.contig_n_cycles([1])))
print("Contig {} has {} frames".format([2], wepy1.contig_n_cycles([2])))
print("Contig {} has {} frames".format([0,1], wepy1.contig_n_cycles([0,1])))
print("Contig {} has {} frames".format([0,2], wepy1.contig_n_cycles([0,2])))
#wepy1.resampling_records_dataframe([0,1])
Explanation: Run 1 continues run 0, and run 2 continues run 0.
This only works within a single file (but we can do interfile linking).
API for interacting with continuations: Contigs
A contig is a list of runs that form a contiguous dataset.
For the continuation (1,0) the contig is (0,1).
Contigs can be any number of runs.
End of explanation
spanning_contigs = wepy1.spanning_contigs()
print("The spanning contigs:", spanning_contigs)
Explanation: A spanning contig is a contig which goes from a root run (a run that does not continue another run) and a leaf run (a run which is not continued). These can be enumerated:
End of explanation
from wepy.analysis.network import MacroStateNetwork
random_state_net = MacroStateNetwork(contig_tree, transition_lag_time=3,
assg_field_key="observables/rand_assg_idx")
Explanation: The Contig Tree Class
Since we have given it everything it needs to make the parent table from the previous example it can automate it all with the appropriate sliding windows algorithms for multiple runs!
The Macro-State Network Class
In order to make a Conformation State Network (CSN) we need to have the transitions (end-points of sliding windows) and the labels of the micro-states (frames of simulations) which can be from clustering or whatever else.
Because a ContigTree can, in its most degenerate form, be a single run, it is the most appropriate input for a general macro-state network.
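As a side note, the contig_tree object passed to MacroStateNetwork above is assumed to be built just like the single-run one earlier (and the assg_field_key observable is assumed to already exist in the file); a rough sketch of that construction:
contig_tree = ContigTree(wepy1,
                         decision_class=resampler.DECISION,
                         boundary_condition_class=UnbindingBC)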
End of explanation
# compute the weights of the macrostates and set them as node attributes
random_state_net.set_nodes_field('Weight', random_state_net.macrostate_weights())
# get the weight of a node
print(random_state_net.graph.node[39]['Weight'])
Explanation: Because this object directly wraps the WepyHDF5 file, we have access to all the data including weights, positions, box sizes etc. and so we can perform all kinds of fancy operations on a macrostate basis.
Weights/Free Energy of the Macrostates
Since we are principally interested in the free energy of macrostates the weights of the macrostates are a canonical example.
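As a rough sketch of that conversion (assuming each node's 'Weight' attribute set above is a scalar total probability), the free energies in units of kT, relative to the most probable macrostate, could be computed like this:
node_weights = {n: random_state_net.graph.node[n]['Weight']
                for n in random_state_net.graph.nodes()}
max_weight = max(node_weights.values())
# -ln(w / w_max) gives a relative free energy in units of kT
free_energies = {n: -np.log(w / max_weight) for n, w in node_weights.items()}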
End of explanation
print(random_state_net.probmat)
Explanation: Transition Probability Matrix
To make an actual network we would need the transition probabilities as well, which were calculated with the lag time given when we created the network.
End of explanation
from csnanalysis.csn import CSN
from csnanalysis.matrix import *
csn = CSN(random_state_net.countsmat, symmetrize=True)
Explanation: Updated User Workflow
Synergism with CSNAnalysis
This is the asymmetric probability matrix. You can calculate interesting things with it related to paths etc. with the CSNAnalysis package from here on.
The purpose of this class is to calculate transition probabilities and create a direct interface to microstates in the HDF5 from a macrostate perspective.
End of explanation
from scipy.spatial.distance import euclidean
dists = []
for node_id, field_d in random_state_net.iter_nodes_fields(['positions']).items():
# just use the positions of the first frame in the cluster
pos_A, pos_B = field_d['positions'][0][0], field_d['positions'][0][1]
dist = euclidean(pos_A, pos_B)
dists.append(dist)
Explanation: Committor Probabilities
Calculate the distance between the two particles:
End of explanation
# the sink basins are those close to the unbinding cutoff
sink_basin = [int(i) for i in np.argwhere(np.array(dists) > 2.5)]
# the source basins are where they are close together
source_basin = [int(i) for i in np.argwhere(np.array(dists) < 0.37)]
print("Number of sink states:", len(sink_basin))
print("Number of source states:", len(source_basin))
committor_probabilities = csn.calc_committors([source_basin, sink_basin])
committor_probabilities[39]
Explanation: Determine the source and sink basins and compute the committor probabilities:
End of explanation
node8_trace = random_state_net.node_assignments(8)
print(node8_trace)
Explanation: Microstates in the Macrostate
The macrostate keeps track of the microstates by a 'trace' which in wepy parlance is just a list of indices into the main wepy HDF5 data structure to a specific microstate, i.e. list of (run index, walker index, cycle index) tuples.
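As a small follow-up sketch, since node_assignments gives the trace for any node, we can for example count how many microstates (frames) each macrostate holds:
macrostate_sizes = {n: len(random_state_net.node_assignments(n))
                    for n in random_state_net.graph.nodes()}
print(macrostate_sizes)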
End of explanation
# get an mdtraj trajectory object from the microstates in a node
node8_traj = random_state_net.state_to_mdtraj(8)
node8_traj.superpose(node8_traj)
print("{} frames in macrostate {}".format(node8_traj.n_frames, 8))
import nglview as nv
view = nv.show_mdtraj(node8_traj)
view
Explanation: Visualizing Microstates
This is used to make it very easy to visualize the microstates by allowing export to other trajectory software better supported by viewers like mdtraj.
End of explanation
rescale(linking_files_im, 1.0)
wepy2 = WepyHDF5(wepy2_path, mode='r')
wepy3 = WepyHDF5(wepy3_path, mode='r')
with wepy2:
print("File 2 runs:", wepy2.run_idxs)
with wepy3:
print("File 3 runs:", wepy3.run_idxs)
Explanation: Linking Runs Together
When doing many runs over different jobs and with continuations we found that it would be useful to be able to link them all into a single file and thus not have to worry about different files when doing analysis.
Luckily, the HDF5 standard provides a way to do this and is now incorporated into wepy.hdf5.
The main methods we can use to accomplish this are:
- clone : copy the header information in a file without run data
- add_continuation : manually specify a continuation within a file
- link_run : link a single run from another file, optionally creating a continuation
- link_file_runs : link all runs from another file, preserving their internal continuations
Example Datasets
End of explanation
# now we are going to link them all under one linker file
linker_h5_path = osp.join(outputs_dir, "all_runs.wepy.h5")
with wepy1:
all_wepy = wepy1.clone(linker_h5_path, mode='w')
with all_wepy:
# link the whole file for wepy1 to preserve the continuations
wepy1_run_idxs = all_wepy.link_file_runs(wepy1_path)
# do the same for wepy2 just to test it on only one run in a file and
# because it is easier
wepy2_run_idxs = all_wepy.link_file_runs(wepy2_path)
# now we need to link the run from the continuation file for wepy2 and
# add in the continuation records
all_wepy.link_run(wepy3_path, 0, continue_run=wepy2_run_idxs[0])
print(all_wepy.spanning_contigs())
all_wepy.open()
wepy1.open()
wepy2.open()
wepy3.open()
Explanation: Code:
End of explanation
<END_TASK> |
15,826 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Title
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Resources
TensorFlow documentation contributor guide
TensorFlow documentation style guide
Google developer documentation style guide
Notebook style
Include the collapsed license at the top (uses the Colab "Form" mode to hide the cells).
Save the notebook with the table of contents open.
Use one H1 header for the title.
Include the button-bar immediately after the H1.
Headers that are H4 and below are not visible in the navigation
bar of tensorflow.org.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Keep code and text cells as brief as possible.
Break text cells at headings
Break code cells between "building" and "running", and between "printing one result" and "printing another result".
Necessary but uninteresting code (like plotting logic) should be hidden in a toggleable code cell by putting #@title as the first line.
Code style
Notebooks are for people. Write code optimized for clarity.
Use the Google Python Style Guide, where applicable.
tensorflow.org doesn't support interactive plots.
Keep examples quick. Use small datasets, or small slices of datasets. Don't train to convergence, train until it's obvious it's making progress.
If you define a function, run it and show us what it does before using it in another function.
Demonstrate small parts before combining them into something more complex, like this
Step3: Run the model on a single batch of data, and inspect the output
Step4: Compile the model for training | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import numpy as np
Explanation: Title
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/not_a_real_link"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/tools/templates/notebook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/tools/templates/notebook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/tools/templates/notebook.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
[Update button links]
See model on TFHub is only required if the notebook uses a model from tfhub.dev
Overview
[Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.]
Setup
[Put all your imports and installs up into a setup section.]
End of explanation
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
Explanation: Resources
TensorFlow documentation contributor guide
TensorFlow documentation style guide
Google developer documentation style guide
Notebook style
Include the collapsed license at the top (uses the Colab "Form" mode to hide the cells).
Save the notebook with the table of contents open.
Use one H1 header for the title.
Include the button-bar immediately after the H1.
Headers that are H4 and below are not visible in the navigation
bar of tensorflow.org.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Keep code and text cells as brief as possible.
Break text cells at headings
Break code cells between "building" and "running", and between "printing one result" and "printing another result".
Necessary but uninteresting code (like plotting logic) should be hidden in a toggleable code cell by putting #@title as the first line.
Code style
Notebooks are for people. Write code optimized for clarity.
Use the Google Python Style Guide, where applicable.
tensorflow.org doesn't support interactive plots.
Keep examples quick. Use small datasets, or small slices of datasets. Don't train to convergence, train until it's obvious it's making progress.
If you define a function, run it and show us what it does before using it in another function.
Demonstrate small parts before combining them into something more complex, like this:
End of explanation
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
Explanation: Run the model on a single batch of data, and inspect the output:
End of explanation
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
Explanation: Compile the model for training:
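As an optional follow-up sketch (purely illustrative random data, with shapes chosen to match the (None, 5) input and the 3-way output of the model above):
# random inputs of shape (batch, timesteps, features) and matching one-hot targets
x = np.random.randn(32, 4, 5).astype(np.float32)
y = tf.one_hot(np.random.randint(0, 3, size=(32, 4)), depth=3)
model.fit(x, y, epochs=1, batch_size=8)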
End of explanation
<END_TASK> |
15,827 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Get some data to play with
Step1: Really Simple API
0) Import your model class
Step2: 1) Instantiate an object and set the parameters
Step3: 2) Fit the model
Step4: 3) Apply / evaluate
Step5: And again
Step6: Exercises
Load the iris dataset from the sklearn.datasets module using the load_iris function.
Split it into training and test set using train_test_split.
Then train and evaluate a classifier of your choice.
Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
X_train.shape
Explanation: Get some data to play with
End of explanation
from sklearn.svm import LinearSVC
Explanation: Really Simple API
0) Import your model class
End of explanation
svm = LinearSVC(C=0.1)
Explanation: 1) Instantiate an object and set the parameters
End of explanation
svm.fit(X_train, y_train)
Explanation: 2) Fit the model
End of explanation
print(svm.predict(X_test))
svm.score(X_train, y_train)
svm.score(X_test, y_test)
Explanation: 3) Apply / evaluate
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=50)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
%load https://raw.githubusercontent.com/scikit-learn/scikit-learn/master/examples/classification/plot_classifier_comparison.py
Explanation: And again
End of explanation
# %load solutions/train_iris.py
Explanation: Exercises
Load the iris dataset from the sklearn.datasets module using the load_iris function.
Split it into training and test set using train_test_split.
Then train and evaluate a classifier of your choice.
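One possible solution sketch (any classifier works; a random forest is used here only as an example):
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)
clf = RandomForestClassifier(n_estimators=50)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))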
End of explanation
<END_TASK> |
15,828 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tweaking CNN Parameters
Preface
To explore the CNN inner workings a bit more, I want to use this post to tweak some parameters and see how that impacts the filters that were learned by the TFlearn model. Off the top of my naive head, I can think of a few parameters in our CNN that can be optimized
Step1: Feature Building
Step2: Scaling Inputs
Step3: Reshaping 3D Array To 4D Array
Step4: Putting It All Together
Step5: Preparing Labels
Step6: CNN #1 - Less Convolutional Filters
I want to try less convolutional filters because looking at 2048 filters on one graph is a bit much haha, a bit hard to tell exactly what's going on.
If I use 3 filters in each layer of our CNN, I should get a stack of 3 convolutional outputs after conv layer 1, and 9 convolutional outputs after conv layer 2. Hopefully that will be a bit easier on the eyes.
Step7: Train Test Split
Step8: Training
Step9: View Convolutional Filters
Step10: Hmm... they could potentially represent some part of someone's face, but it's still a bit too pixelated to tell... What if I go smaller filter size?
CNN #2 - Smaller Filter Sizes
I used a 10 x 10 filter in each of the layers... let's go down to a... I dunno, maybe 3 x 3 just for fun?
Step11: First of all, I had to import tensorflow and add in
~~~~
with tf.Graph().as_default()
Step12: Interesting, it still doesn't quite make sense to me, but it's starting to take some type of shape in the second level filters. In that bottom right one in the second layer filters, I can almost even see a face if I squint... Or maybe that's just my hope and dreams deceiving my eyes right before me. I'm not even sure if we'd be able to really make any sense of the second layer filters because they would be filtering on the activations / outputs of the first layer filter. Only the first layer filter is a filter acting on the original image itself. It's plausible that one of these may have the silhouette of a person because my filter size is the size of the entire image. There theoretically isn't even any strides happening here.
Regardless, we would love to see the filters take a smoothed shape as that will indicate the filters have successfully learned some kind of pattern.
Let's try 3 layers to see what kind of difference that makes.
CNN #4 - More Convolutional Layers
Step13: My first thought... how come the third convolution layer only has 9 filters as well? Each output from the second layer (9 outputs) should each be convoluted with the 3 third-layer convolution filter, creating 27 outputs, no? That seems to be what happened between convolutional layers 1 and 2, but I'm not sure why the third filter doesn't follow this logic... I guess that's what I get for using a method I'm not too familiar with, on a python package I'm not too familiar with, while viewing the filters through a copied and pasted code I'm not too familiar with.
Baby steps, though, let's not worry about that too much right now as it looks like it is still giving me the filters (or perhaps at least a subset of the 3rd convolutional layer filters) for each convolutional layer. In fact, I'm going to remove the third layer altogether because it's way too abstracted for me to understand right now and probably just complicates things.
We are actually starting to see the emergence of some smoother shapes throughout all the filters. The filters are still quite grainy in the grand scheme of things, so maybe I need to train it longer and see if the filters smooth out even more... let's just go crazy and do 200 epochs. Well, I guess if I were really crazy I'd leave it to train for thousands of epochs overnight, but let's call 20x more epochs "crazy" for now. Plus, it converges to like 99% accuracy after the 6th epoch anyways.
I also realized that if I'm using a filter size of 91, then the output of my first convolution is probably a single pixel as there is only one unique location that a 91 pixel wide filter can have on a 91 pixel wide image! I'm not even sure how it's doing the second convolution at this point because how can you convolute a 1 pixel wide image with a 91 pixel wide filter? Hmm let's just train 1 convolution layer as well.
CNN #5 - More Epochs
Step14: At about 0.15s per epoch, plus overhead time, this took us around 3 mins to train. Not too shabby! The top two filters don't look too smooth to me, but the filter on the bottom, call me crazy, kind of has an outline of somebody with long hair. Not quite as crisp as the mock up I drew
Python Code:
# Install tflearn
import os
os.system("sudo pip install tflearn")
Explanation: Tweaking CNN Parameters
Preface
To explore the CNN inner workings a bit more, I want to use this post to tweak some parameters and see how that impacts the filters that were learned by the TFlearn model. Off the top of my naive head, I can think of a few parameters in our CNN that can be optimized:
- Number of convolutional layers
- Filter size of convolutional layers
- Number of filters in each convolutional layer
- Presence of max pooling layers
- Size of max pooling
- Number of nodes in the fully connected layer
Because I'm going to aim to train a few different models, I'm back on AWS so I don't have to wait 5 minutes per model.
I'll probably start by tweaking the convolutional layers and go from there.
End of explanation
import numpy as np
import pandas as pd
import copy
from matplotlib import pyplot as plt
%matplotlib inline
# Temporarily load from np arrays
chi_photos_np = np.load('chi_photos_np_0.03_compress.npy')
lars_photos_np = np.load('lars_photos_np_0.03_compress.npy')
# View shape of numpy array
chi_photos_np.shape
# Set width var
width = chi_photos_np.shape[-1]
width
Explanation: Feature Building
End of explanation
# Try out scaler on a manually set data (min of 0, max of 255)
from sklearn.preprocessing import MinMaxScaler
# Set test data list to train on (min of 0, max of 255)
test_list = np.array([0, 255]).reshape(-1, 1)
test_list
# Initialize scaler
scaler = MinMaxScaler()
# Fit test list
scaler.fit(test_list)
Explanation: Scaling Inputs
End of explanation
chi_photos_np.reshape(-1, width, width, 1).shape
Explanation: Reshaping 3D Array To 4D Array
End of explanation
# Reshape to prepare for scaler
chi_photos_np_flat = chi_photos_np.reshape(1, -1)
chi_photos_np_flat[:10]
# Scale
chi_photos_np_scaled = scaler.transform(chi_photos_np_flat)
chi_photos_np_scaled[:10]
# Reshape to prepare for scaler
lars_photos_np_flat = lars_photos_np.reshape(1, -1)
lars_photos_np_scaled = scaler.transform(lars_photos_np_flat)
# Reshape
chi_photos_reshaped = chi_photos_np_scaled.reshape(-1, width, width, 1)
lars_photos_reshaped = lars_photos_np_scaled.reshape(-1, width, width, 1)
print('{} has shape: {}'. format('chi_photos_reshaped', chi_photos_reshaped.shape))
print('{} has shape: {}'. format('lars_photos_reshaped', lars_photos_reshaped.shape))
# Create copy of chi's photos to start populating x_input
x_input = copy.deepcopy(chi_photos_reshaped)
print('{} has shape: {}'. format('x_input', x_input.shape))
# Concatentate lars' photos to existing x_input
x_input = np.append(x_input, lars_photos_reshaped, axis = 0)
print('{} has shape: {}'. format('x_input', x_input.shape))
Explanation: Putting It All Together
End of explanation
# Create label arrays
y_chi = np.array([[1, 0] for i in chi_photos_reshaped])
y_lars = np.array([[0, 1] for i in lars_photos_reshaped])
print('{} has shape: {}'. format('y_chi', y_chi.shape))
print('{} has shape: {}'. format('y_lars', y_lars.shape))
# Preview the first few elements
y_chi[:5]
y_lars[:5]
# Create copy of chi's labels to start populating y_input
y_input = copy.deepcopy(y_chi)
print('{} has shape: {}'. format('y_input', y_input.shape))
# Concatentate lars' labels to existing y_input
y_input = np.append(y_input, y_lars, axis = 0)
print('{} has shape: {}'. format('y_input', y_input.shape))
Explanation: Preparing Labels
End of explanation
# TFlearn libraries
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
# sentdex's code to build the neural net using tflearn
# Input layer --> conv layer w/ max pooling --> conv layer w/ max pooling --> fully connected layer --> output layer
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 10, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 10, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
Explanation: CNN #1 - Less Convolutional Filters
I want to try less convolutional filters because looking at 2048 filters on one graph is a bit much haha, a bit hard to tell exactly what's going on.
If I use 3 filters in each layer of our CNN, I should get a stack of 3 convolutional outputs after conv layer 1, and 9 convolutional outputs after conv layer 2. Hopefully that will be a bit easier on the eyes.
End of explanation
# Import library
from sklearn.model_selection import train_test_split
print(x_input.shape)
print(y_input.shape)
# Perform train test split
x_train, x_test, y_train, y_test = train_test_split(x_input, y_input, test_size = 0.1, stratify = y_input)
x_train = np.array(x_train, dtype = np.float64)
x_test = np.array(x_test, dtype = np.float64)
y_train = np.array(y_train, dtype = np.float64)
y_test = np.array(y_test, dtype = np.float64)
Explanation: Train Test Split
End of explanation
# Train with data
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 10,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
Explanation: Training
End of explanation
import six
def display_convolutions(model, layer, padding=4, filename=''):
if isinstance(layer, six.string_types):
vars = tflearn.get_layer_variables_by_name(layer)
variable = vars[0]
else:
variable = layer.W
data = model.get_weights(variable)
# N is the total number of convolutions
N = data.shape[2] * data.shape[3]
print('There are {} filters in {}'.format(N, layer))
# Ensure the resulting image is square
filters_per_row = int(np.ceil(np.sqrt(N)))
# Assume the filters are square
filter_size = data.shape[0]
# Size of the result image including padding
result_size = filters_per_row * (filter_size + padding) - padding
# Initialize result image to all zeros
result = np.zeros((result_size, result_size))
# Tile the filters into the result image
filter_x = 0
filter_y = 0
for n in range(data.shape[3]):
for c in range(data.shape[2]):
if filter_x == filters_per_row:
filter_y += 1
filter_x = 0
for i in range(filter_size):
for j in range(filter_size):
result[filter_y * (filter_size + padding) + i, filter_x * (filter_size + padding) + j] = \
data[i, j, c, n]
filter_x += 1
# Normalize image to 0-1
min = result.min()
max = result.max()
result = (result - min) / (max - min)
# Plot figure
plt.figure(figsize=(10, 10))
plt.axis('off')
plt.imshow(result, cmap='gray', interpolation='nearest')
# Save plot if filename is set
if filename != '':
plt.savefig(filename, bbox_inches='tight', pad_inches=0)
plt.show()
# Display first convolutional layer filters
display_convolutions(model, 'conv_1')
# Display second convolutional layer filters (3 input channels x 3 output filters = 9 tiles)
display_convolutions(model, 'conv_2')
Explanation: View Convolutional Filters
End of explanation
import tensorflow as tf
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 3, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 3, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 6,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
display_convolutions(model, 'conv_2')
Explanation: Hmm... they could potentially represent some part of someone's face, but it's still a bit too pixelated to tell... What if I go smaller filter size?
CNN #2 - Smaller Filter Sizes
I used a 10 x 10 filter in each of the layers... let's go down to a... I dunno, maybe 3 x 3 just for fun?
End of explanation
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 6,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
display_convolutions(model, 'conv_2')
Explanation: First of all, I had to import tensorflow and add in
~~~~
with tf.Graph().as_default():
~~~~
to somehow separate the new model into another "graph session". What are these? I'm not sure, but this stack overflow inquiry helped me out a bit. With that, I got my new model training with the same variable names where it was giving me an error earlier.
I trained the model until 6 epochs because that's when I saw accuracy peak. A manual early stopping mechanism, if you will. Even with 6 epochs and only 3 x 3 filters with 3 filters per convolutional layer, we still reach 99% accuracy!
Again, the filters don't quite mean anything to me (I'm not sure it can mean anything being 3 pixels by 3 pixels), but I do see some generally brighter colored filters and darker colored filters... perhaps these could represent the face / background and hair respectively!
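One quick way I could check that hunch (just a sketch, and it assumes I run it inside the same with tf.Graph().as_default(): block while the 3 x 3 model is still the active model): pull the first-layer weights and compare each filter's mean value, since a mostly-positive kernel should show up as one of the brighter tiles and a mostly-negative one as a darker tile.
~~~~
# rough check: mean weight per conv_1 filter; weight shape is (3, 3, 1, 3) for this model
w = model.get_weights(tflearn.get_layer_variables_by_name('conv_1')[0])
for n in range(w.shape[3]):
    print('filter', n, 'mean weight:', w[:, :, 0, n].mean())
~~~~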
Let's try going the other way and making huge filters... why not just cover the entire photo... let's try it out.
CNN #3 - Larger Filter Sizes
Let's just use a 91 x 91 filter and see what happens. Maybe this will lend itself to a recognizable filter a bit more.
End of explanation
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_2')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_2')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_3')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_3')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 10,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
display_convolutions(model, 'conv_2')
display_convolutions(model, 'conv_3')
Explanation: Interesting, it still doesn't quite make sense to me, but it's starting to take some type of shape in the second level filters. In that bottom right one in the second layer filters, I can almost even see a face if I squint... Or maybe that's just my hopes and dreams deceiving my eyes right before me. I'm not even sure if we'd be able to really make any sense of the second layer filters because they would be filtering on the activations / outputs of the first layer filter. Only the first layer filter is a filter acting on the original image itself. It's plausible that one of these may have the silhouette of a person because my filter size is the size of the entire image. There theoretically aren't even any strides happening here.
Regardless, we would love to see the filters take a smoothed shape as that will indicate the filters have successfully learned some kind of pattern.
Let's try 3 layers to see what kind of difference that makes.
CNN #4 - More Convolutional Layers
End of explanation
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 3, 91, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.001, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 100,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
Explanation: My first thought... how come the third convolution layer only has 9 filters as well? Each output from the second layer (9 outputs) should each be convolved with the 3 third-layer convolution filters, creating 27 outputs, no? That seems to be what happened between convolutional layers 1 and 2, but I'm not sure why the third filter doesn't follow this logic... I guess that's what I get for using a method I'm not too familiar with, on a python package I'm not too familiar with, while viewing the filters through copied and pasted code I'm not too familiar with.
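One way I could sanity-check this (a sketch only, run inside the same with tf.Graph().as_default(): block as the model above): print the raw weight shapes. display_convolutions tiles data.shape[2] * data.shape[3] filters, i.e. input channels times output channels of that single layer's weight tensor, so a layer with 3 input channels and 3 output filters always shows 9 tiles regardless of how many outputs earlier layers produced.
~~~~
# weight tensors are (filter_height, filter_width, in_channels, out_channels)
for name in ['conv_1', 'conv_2', 'conv_3']:
    w = model.get_weights(tflearn.get_layer_variables_by_name(name)[0])
    print(name, w.shape, '->', w.shape[2] * w.shape[3], 'tiles displayed')
~~~~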
Baby steps, though, let's not worry about that too much right now as it looks like it is still giving me the filters (or perhaps at least a subset of the 3rd convolutional layer filters) for each convolutional layer. In fact, I'm going to remove the third layer altogether because it's way too abstracted for me to understand right now and probably just complicates things.
We are actually starting to see the emergence of some smoother shapes throughout all the filters. The filters are still quite grainy in the grand scheme of things, so maybe I need to train it longer and see if the filters smooth out even more... let's just go crazy and do 200 epochs. Well, I guess if I were really crazy I'd leave it to train for thousands of epochs overnight, but let's call 20x more epochs "crazy" for now. Plus, it converges to like 99% accuracy after the 6th epoch anyways.
I also realized that if I'm using a filter size of 91, then the output of my first convolution is probably a single pixel as there is only one unique location that a 91 pixel wide filter can have on a 91 pixel wide image! I'm not even sure how it's doing the second convolution at this point because how can you convolve a 1 pixel wide image with a 91 pixel wide filter? Hmm let's just train 1 convolution layer as well.
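That intuition comes from the usual output-size arithmetic, which strictly applies to 'valid' (no) padding; I haven't checked what padding tflearn's conv_2d defaults to, so if it pads to 'same' the feature map would actually stay 91 x 91. A quick back-of-the-envelope sketch:
~~~~
# output size of a convolution with 'valid' padding: (input - filter) // stride + 1
input_size, filter_size, stride = 91, 91, 1
print((input_size - filter_size) // stride + 1)  # 1 -> a single output value per filter
~~~~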
CNN #5 - More Epochs
End of explanation
with tf.Graph().as_default():
# Build CNN
convnet = input_data(shape = [None, 91, 91, 1], name = 'input')
convnet = conv_2d(convnet, 9, 30, activation = 'relu', name = 'conv_1')
convnet = max_pool_2d(convnet, 2, name = 'max_pool_1')
convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1')
convnet = dropout(convnet, 0.8, name = 'dropout_1')
convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2')
convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.001, loss = 'categorical_crossentropy', name = 'targets')
# Train Model
model = tflearn.DNN(convnet)
model.fit(
{'input': x_train},
{'targets': y_train},
n_epoch = 100,
validation_set = ({'input': x_test}, {'targets': y_test}),
snapshot_step = 500,
show_metric = True
)
# Display convolutional filters
display_convolutions(model, 'conv_1')
Explanation: At about 0.15s per epoch, plus overhead time, this took us around 3 mins to train. Not too shabby! The top two filters don't look too smooth to me, but the filter on the bottom, call me crazy, kind of has an outline of somebody with long hair. Not quite as crisp as the mock-up I drew:
<img src="https://s3.ca-central-1.amazonaws.com/2017edmfasatb/chi_lars_face_detection/images/47_silhouette.png" style="width: 500px;"/>
But I do see the general shape! The fact that my 91 x 91 filter theoretically yields an output of 1 pixel really bothers me... I don't quite want to go back to 10 pixels yet either though, because that really didn't do much for me. I'm going to try keeping one layer, but more filters of a smaller size.
CNN #6 - More Filters, Smaller Filter Size
End of explanation
<END_TASK> |
15,829 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example 1
Step1: Create a point source theory RVT motion
Step2: Create site profile
This is about the simplest profile that we can create. Linear-elastic soil and rock.
Step3: Create the site response calculator
Step4: Specify the output
Step5: Perform the calculation
Compute the response of the site, and store the state within the calculation object. Nothing is provided.
Step6: Calculate all of the outputs from the calculation object.
Step7: Plot the outputs
Create a few plots of the output. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pysra
%matplotlib inline
# Increased figure sizes
plt.rcParams["figure.dpi"] = 120
Explanation: Example 1: Random vibration theory SRA
Random vibration theory analysis to compute surface response spectrum and site
amplification functions.
End of explanation
m = pysra.motion.SourceTheoryRvtMotion(6.0, 30, "wna")
m.calc_fourier_amps()
fig, ax = plt.subplots()
ax.plot(m.freqs, m.fourier_amps)
ax.set(
xlabel="Frequency (Hz)", xscale="log", ylabel="Fourier Ampl. (g-s)", yscale="log"
)
fig.tight_layout();
Explanation: Create a point source theory RVT motion
End of explanation
profile = pysra.site.Profile(
[
pysra.site.Layer(pysra.site.SoilType("Soil", 18.0, None, 0.05), 30, 400),
pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200),
]
)
Explanation: Create site profile
This is about the simplest profile that we can create. Linear-elastic soil and rock.
End of explanation
calc = pysra.propagation.LinearElasticCalculator()
Explanation: Create the site response calculator
End of explanation
freqs = np.logspace(-1, 2, num=500)
outputs = pysra.output.OutputCollection(
[
pysra.output.ResponseSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.ResponseSpectrumRatioOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
]
)
Explanation: Specify the output
End of explanation
calc(m, profile, profile.location("outcrop", index=-1))
Explanation: Perform the calculation
Compute the response of the site, and store the state within the calculation object. Nothing is provided.
End of explanation
outputs(calc)
Explanation: Calculate all of the outputs from the calculation object.
End of explanation
for o in outputs:
o.plot(style="indiv")
Explanation: Plot the outputs
Create a few plots of the output.
End of explanation
<END_TASK> |
15,830 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The effect of inconsistencies between model atmospheres and spectral synthesis
We use a cool model
Step1: Now we use a model atmosphere with a lower alpha enhancement
Step2: And a model atmosphere with a higher alpha enhancement
Step3: Compare | <ASSISTANT_TASK:>
Python Code:
atm= atlas9.Atlas9Atmosphere(teff=3500.,logg=2.5,metals=0.,am=0.,cm=0.)
synspec_correct= apogee.modelspec.turbospec.synth(modelatm=atm,
linelist='20150714',
lsf='all',cont='true',vmacro=0.,
isotopes='arcturus')
Explanation: The effect of inconsistencies between model atmospheres and spectral synthesis
We use a cool model
End of explanation
atm2= atlas9.Atlas9Atmosphere(teff=3500.,logg=2.5,metals=0.,am=-0.25,cm=0.)
synspec_minus= apogee.modelspec.turbospec.synth([8.,0.25],[12,0.25],[14.,0.25],[16.,0.25],[20.,0.25],[22.,0.25],
modelatm=atm2,
linelist='20150714',
lsf='all',cont='true',vmacro=0.,
isotopes='arcturus')
Explanation: Now we use a model atmosphere with a lower alpha enhancement
End of explanation
atm3= atlas9.Atlas9Atmosphere(teff=3500.,logg=2.5,metals=0.,am=+0.25,cm=0.)
synspec_plus= apogee.modelspec.turbospec.synth([8.,-0.25],[12,-0.25],[14.,-0.25],[16.,-0.25],[20.,-0.25],[22.,-0.25],
modelatm=atm3,
linelist='20150714',
lsf='all',cont='true',vmacro=0.,
isotopes='arcturus')
Explanation: And a model atmosphere with a higher alpha enhancement
End of explanation
for panel in apogee.spec.plot.highres(synspec_minus[0]-synspec_correct[0],synspec_plus[0]-synspec_correct[0],
yrange=[-0.2,0.2],color=['r','b'],labelLines=True,cleanZero=False,fig_width=14.):
show()
Explanation: Compare
End of explanation
<END_TASK> |
15,831 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 12
Step2: Show a bunch of 4s
Step3: Classifying with tree based models
Let's see how well some other models do before we get to the neural net. | <ASSISTANT_TASK:>
Python Code:
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
"""Load MNIST data from `path`."""
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte' % kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte' % kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True,)
ax = ax.flatten()
for i in range(10):
img = X_train[y_train == i][0].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
Explanation: Chapter 12: Training Artificial Neural Networks for Image Recognition
In this notebook I work through chapter 12 of Python Machine Learning; see the author's definitive notes.
Loading in the MNIST handwritten image data set
End of explanation
fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True,)
ax = ax.flatten()
for i in range(25):
img = X_train[y_train == 4][i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
Explanation: Show a bunch of 4s
End of explanation
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
tree10 = DecisionTreeClassifier(criterion='entropy', max_depth=10, random_state=0)
tree100 = DecisionTreeClassifier(criterion='entropy', max_depth=100, random_state=0)
rf10 = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1)
rf100 = RandomForestClassifier(criterion='entropy', n_estimators=100, random_state=1)
labeled_models = [
('decision tree depth 10', tree10),
('decision tree depth 100', tree100),
('random forest 10 estimators', rf10),
('random forest 100 estimators', rf100),
]
import time
import subprocess
def say_done(label):
subprocess.call("say 'done with {}'".format(label), shell=True)
for label, model in labeled_models:
before = time.time()
model.fit(X_train, y_train)
after = time.time()
print("{} fit the dataset in {:.1f} seconds".format(label, after - before))
say_done(label)
from sklearn.metrics import accuracy_score
for label, model in labeled_models:
print("{} training fit: {:.3f}".format(label, accuracy_score(y_train, model.predict(X_train))))
print("{} test accuracy: {:.3f}".format(label, accuracy_score(y_test, model.predict(X_test))))
Explanation: Classifying with tree based models
Let's see how well some other models do before we get to the neural net.
End of explanation
<END_TASK> |
15,832 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Please find torch implementation of this notebook here
Step1: Imports
Step2: Load Data
Step3: Let's view some images(Because the images are normalized so we need to first convert them to the range of 0 to 1) in order to view them.
Step4: Model
Step5: Training
Step6: Testing The Model
<ul>
<li>0 = No HotDog</li>
<li>1 = HotDog</li>
</ul> | <ASSISTANT_TASK:>
Python Code:
# Install Augmax for Image Augmentation
try:
import augmax
except ModuleNotFoundError:
%pip install -qq git+https://github.com/khdlr/augmax.git -q
import augmax
# Install the jax-resnet
try:
import jax_resnet
except ModuleNotFoundError:
%pip install -qq git+https://github.com/n2cholas/jax-resnet.git -q
import jax_resnet
# Download and Extract Data
!wget http://d2l-data.s3-accelerate.amazonaws.com/hotdog.zip
!unzip -qq /content/hotdog.zip -d /content/
Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/19/finetune_cnn_torch.ipynb
<a href="https://colab.research.google.com/drive/1c0yus2G9AIHXjstUDGT9u7cXgAkJ4tCF?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Author of the Notebook : Susnato Dhar (Github : https://github.com/susnato)
This notebook is JAX compatible version of the main notebook which can be found <a href="https://github.com/probml/probml-notebooks/blob/main/notebooks-d2l/finetune_cnn_torch.ipynb">here</a>.
<br>All the credits goes to the author of the main notebook, I just converted it to JAX.
I used <a href="https://github.com/n2cholas/jax-resnet">this repository</a> to impelement the pre-trained version of ResNet18 in order to fine tune it!<br>I used the Dataset HotDog VS No HotDog from this <a href="http://d2l-data.s3-accelerate.amazonaws.com/hotdog.zip">link</a>.
End of explanation
import os
import sys
try:
import cv2
except ModuleNotFoundError:
%pip install -qq opencv-python
import cv2
import glob
try:
import tqdm
except ModuleNotFoundError:
%pip install -qq tqdm
import tqdm
import shutil
from typing import Any
from IPython import display
import matplotlib.pyplot as plt
try:
from skimage.util import montage
except ModuleNotFoundError:
%pip install -qq scikit-image
from skimage.util import montage
%matplotlib inline
import jax
import jax.numpy as jnp
import jax.random as jrand
key = jrand.PRNGKey(42)
Explanation: Imports
End of explanation
try:
import tensorflow as tf
except ModuleNotFoundError:
%pip install -qq tensorflow
import tensorflow as tf
def load_img(dir, shape=False):
img = cv2.imread(dir)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if shape:
img = cv2.resize(img, shape)
return jnp.array(img)
augs = augmax.Chain(
augmax.HorizontalFlip(), augmax.Resize(224, 224), augmax.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
)
def apply_augs(img, augs, key):
img = augs(key, img)
return img
class DataLoader(tf.keras.utils.Sequence):
def __init__(self, batch_size, motiv, shuffle=False):
self.batch_size = batch_size
assert motiv in ["train", "test"]
self.motiv = motiv
self.hot_dogs_list = f"/content/hotdog/{motiv}/hotdog"
self.non_hot_dogs_list = f"/content/hotdog/{motiv}/not-hotdog"
self.key = jrand.PRNGKey(42)
self.shuffle = shuffle
def __len__(self):
return len(os.listdir(self.hot_dogs_list)) // self.batch_size
def __getitem__(self, ix):
X, Y = [], []
hdl = os.listdir(self.hot_dogs_list)[ix * self.batch_size : (ix + 1) * self.batch_size]
nhdl = os.listdir(self.non_hot_dogs_list)[ix * self.batch_size : (ix + 1) * self.batch_size]
for lst in zip(hdl, nhdl):
X.append(apply_augs(load_img(os.path.join(self.hot_dogs_list, lst[0])), augs, self.key))
Y.append(1)
X.append(apply_augs(load_img(os.path.join(self.non_hot_dogs_list, lst[1])), augs, self.key))
Y.append(0)
X = jnp.array(X).reshape(self.batch_size * 2, 224, 224, 3).astype(jnp.float16)
Y = (
jnp.array(Y)
.reshape(
self.batch_size * 2,
)
.astype(jnp.uint8)
)
ix = jnp.arange(X.shape[0])
ix = jrand.shuffle(key, ix)
X = X[ix]
Y = Y[ix]
return X, Y
train_dl = DataLoader(batch_size=32, motiv="train")
val_dl = DataLoader(batch_size=32, motiv="test")
Explanation: Load Data
End of explanation
example_x, example_y = train_dl.__getitem__(0)
viewable_imgs = example_x[:32]
viewable_imgs = (viewable_imgs - viewable_imgs.min()) / (viewable_imgs.max() - viewable_imgs.min())
viewable_imgs = viewable_imgs * 255.0
viewable_imgs = viewable_imgs.astype(jnp.uint8)
plt.imshow(montage(viewable_imgs, multichannel=True, grid_shape=(4, 8)))
plt.show();
Explanation: Let's view some images(Because the images are normalized so we need to first convert them to the range of 0 to 1) in order to view them.
End of explanation
import jax
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import flax.linen as nn
except ModuleNotFoundError:
%pip install -qq flax
import flax.linen as nn
from flax.training import train_state
try:
from jax_resnet import pretrained_resnet, pretrained_resnest
except ModuleNotFoundError:
%pip install -qq jax_resnet
from jax_resnet import pretrained_resnet, pretrained_resnest
from jax_resnet.common import Sequential
class MyResnet(nn.Module):
@nn.compact
def __call__(self, data):
ResNet18, _ = pretrained_resnet(18)
model = ResNet18()
model = Sequential(model.layers[:-1])
x = model(data)
x = nn.Dense(features=256)(x)
x = nn.relu(x)
x = nn.Dense(features=2)(x)
return x
class TrainState(train_state.TrainState):
batch_stats: Any
model = MyResnet()
vars = model.init(key, jnp.ones((1, 224, 224, 3)))
state = TrainState.create(
apply_fn=model.apply, params=vars["params"], batch_stats=vars["batch_stats"], tx=optax.adam(learning_rate=0.00001)
)
@jax.jit
def compute_metrics(pred, true):
loss = jnp.mean(optax.softmax_cross_entropy(logits=pred, labels=jax.nn.one_hot(true, num_classes=2)))
pred = nn.softmax(pred)
accuracy = jnp.mean(jnp.argmax(pred, -1) == true)
return {"loss": loss, "accuracy": jnp.mean(accuracy)}
@jax.jit
def eval_step(state, batch):
variables = {"params": state.params, "batch_stats": state.batch_stats}
logits, _ = state.apply_fn(variables, batch["x"], mutable=["batch_stats"])
return compute_metrics(pred=logits, true=batch["y"])
def train(state, epochs):
@jax.jit
def bce_loss(params):
y_pred, new_model_state = state.apply_fn(
{"params": params, "batch_stats": state.batch_stats}, batch["x"], mutable=["batch_stats"]
)
y_true = jax.nn.one_hot(batch["y"], num_classes=2)
loss = optax.softmax_cross_entropy(logits=y_pred, labels=y_true)
return jnp.mean(loss), (new_model_state, y_pred)
grad_fn = jax.value_and_grad(bce_loss, has_aux=True)
for e in range(epochs):
batch_metrics = []
for i in range(train_dl.__len__()):
batch = {}
batch["x"], batch["y"] = train_dl.__getitem__(i)
aux, grad = grad_fn(state.params)
batch_loss, (new_model_state, batch_pred) = aux
state = state.apply_gradients(grads=grad, batch_stats=new_model_state["batch_stats"])
computed_metrics = compute_metrics(pred=batch_pred, true=batch["y"])
sys.stdout.write(
"\rEpoch : {}/{} Iteration : {}/{} Loss : {} Accuracy : {}".format(
e + 1, epochs, i + 1, train_dl.__len__(), computed_metrics["loss"], computed_metrics["accuracy"]
)
)
batch_metrics.append(computed_metrics)
print("\n")
val_batch_loss, val_batch_acc = [], []
for i in range(val_dl.__len__()):
val_batch = {}
val_batch["x"], val_batch["y"] = val_dl.__getitem__(i)
val_metrics = eval_step(state, val_batch)
val_batch_loss.append(val_metrics["loss"])
val_batch_acc.append(val_metrics["accuracy"])
eval_loss, eval_acc = jnp.mean(jnp.array(val_batch_loss)), jnp.mean(jnp.array(val_batch_acc))
sys.stdout.write(
"Validation Results : Epoch : {} Validation Loss : {} Validation Accuracy : {}".format(
e + 1, jax.device_get(eval_loss), jax.device_get(eval_acc)
)
)
print("\n")
return state
Explanation: Model
End of explanation
epochs = 10
trained_state = train(state, epochs)
Explanation: Training
End of explanation
test_dl = DataLoader(batch_size=1, motiv="test")
ix = jrand.randint(key, shape=(1, 1), minval=0, maxval=test_dl.__len__() - 1)
test_imgs, test_labels = test_dl.__getitem__(jax.device_get(ix)[0][0])
test_img1 = test_imgs[0]
test_label1 = jax.device_get(test_labels)[0]
viewable_img1 = ((test_img1 - test_img1.min()) / (test_img1.max() - test_img1.min())) * 255.0
plt.imshow(viewable_img1.astype(jnp.uint8))
plt.show()
print("True Label : ", test_label1)
print(
"Prediction : ",
jax.device_get(
jnp.argmax(
jax.nn.softmax(
trained_state.apply_fn(
{"params": trained_state.params, "batch_stats": trained_state.batch_stats},
test_img1.reshape(1, 224, 224, 3),
)
)
)
),
)
test_img2 = test_imgs[1]
test_label2 = jax.device_get(test_labels)[1]
viewable_img2 = ((test_img2 - test_img2.min()) / (test_img2.max() - test_img2.min())) * 255.0
plt.imshow(viewable_img2.astype(jnp.uint8))
plt.show()
print("True Label : ", test_label2)
print(
"Prediction : ",
jax.device_get(
jnp.argmax(
jax.nn.softmax(
trained_state.apply_fn(
{"params": trained_state.params, "batch_stats": trained_state.batch_stats},
test_img2.reshape(1, 224, 224, 3),
)
)
)
),
)
Explanation: Testing The Model
<ul>
<li>0 = No HotDog</li>
<li>1 = HotDog</li>
</ul>
End of explanation
<END_TASK> |
15,833 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Palettable
Find Palettable online
Step1: Palettable API
Step2: Setting the matplotlib Color Cycle
Adapted from the example at http
Step3: Using a Continuous Palette
Adapted from http | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Palettable
Find Palettable online:
Docs: https://jiffyclub.github.io/palettable/
GitHub: https://github.com/jiffyclub/palettable
PyPI: https://pypi.python.org/pypi/palettable/
End of explanation
from palettable.colorbrewer.qualitative import Set1_9
Set1_9.name
Set1_9.type
Set1_9.number
Set1_9.colors
Set1_9.hex_colors
Set1_9.mpl_colors
Set1_9.mpl_colormap
# requires ipythonblocks
Set1_9.show_as_blocks()
Set1_9.show_continuous_image()
Set1_9.show_discrete_image()
Explanation: Palettable API
End of explanation
from palettable.wesanderson import Aquatic1_5, Moonrise4_5
x = np.linspace(0, 2 * np.pi)
offsets = np.linspace(0, 2*np.pi, 4, endpoint=False)
# Create array with shifted-sine curve along each column
yy = np.transpose([np.sin(x + phi) for phi in offsets])
plt.rc('lines', linewidth=4)
plt.rc('axes', color_cycle=Aquatic1_5.mpl_colors)
fig, (ax0, ax1) = plt.subplots(nrows=2)
ax0.plot(yy)
ax0.set_title('Set default color cycle to Aquatic1_5')
ax1.set_color_cycle(Moonrise4_5.mpl_colors)
ax1.plot(yy)
ax1.set_title('Set axes color cycle to Moonrise4_5')
# Tweak spacing between subplots to prevent labels from overlapping
plt.subplots_adjust(hspace=0.3)
Explanation: Setting the matplotlib Color Cycle
Adapted from the example at http://matplotlib.org/examples/color/color_cycle_demo.html.
Use the .mpl_colors attribute to change the color cycle used by matplotlib
when colors for plots are not specified.
End of explanation
from palettable.colorbrewer.sequential import YlGnBu_9
from matplotlib.colors import LogNorm
#normal distribution center at x=0 and y=5
x = np.random.randn(100000)
y = np.random.randn(100000)+5
plt.hist2d(x, y, bins=40, norm=LogNorm(), cmap=YlGnBu_9.mpl_colormap)
plt.colorbar()
Explanation: Using a Continuous Palette
Adapted from http://matplotlib.org/examples/pylab_examples/hist2d_log_demo.html.
Use the .mpl_colormap attribute any place you need a matplotlib colormap.
End of explanation
<END_TASK> |
15,834 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
United States carries most of the weight of the total electricity consumption in the household market in N. America in the period 1990-2014. The US is followed in consumption by Canada and Mexico. The average consumption in the US is about 6 times more than in Canada and about 10 times the one in Mexico.
Step1: Consumption in Europe is led by Germany followed by France and the United Kingdom. Spain is in 5th place, with a household consumption during the period of less than half that of Germany. The tail of consumption is led by Poland followed by Belgium and the Netherlands. It seems that there is a correlation between the size of the country and its electricity consumption.
Step2: Electricity consumption between 1990 and 2014 in the household market in Central & South America is led by Brazil followed by Argentina & Venezuela. Although Chile was expected to be among the first three due to its economic development, it is in 5th place after Colombia. Compared to Brazil (first place), household consumption in Argentina (second place) is about 4 times less.
Step3: The comparison between North America, Europe and Central & South America shows that average electricity consumption in North America is 8.5 times bigger than the one in Europe (comparing the best in breed in each case). Europe compared to Central & South America has an average consumption 1.8 times bigger. Within each region variations are high, concentrating most of the region's consumption in fewer than 10 countries.
Step4: There is an asymmetric distribution of electricity consumption values in the world. While most of them are in the range from 0-10 000 GWh, countries like the US have a consumption about 120 times bigger. Additionally, the cumulative frequency rises to 0.95 when the electricity consumption reaches 80 000 GWh, which is similar to the consumption in Brazil.
Step5: There is sustained growth in the electricity consumption in Spain from 1990 to 2014. This is a good indicator of the economic growth of the country, although between 2005 and 2015 there is a decrease in the interannual growth due to aggressive energy efficiency measures.
Step6: Electricity consumption experiences moderate growth from 1990 to 2015. There is higher growth between 1990 and 2005 than from 2005 onwards. In the last 10 years of the period under analysis, the UK's electricity consumption in the household segment has decreased. At the end of the period electricity consumption levels have fallen back to those of the year 2000. | <ASSISTANT_TASK:>
Python Code:
#Europe
df5 = df4.loc[df4.index.isin(['Austria', 'Belgium', 'Bulgaria','Croatia', 'Cyprus', 'Czechia','Denmark', 'Estonia','Finland','France','Germany','Greece','Hungary','Ireland','Italy','Latvia','Lithuania','Luxembourg','Malta','Netherlands','Poland','Portugal','Romania','Slovakia', 'Slovenia','Spain', 'Sweden', 'United Kingdom'])]
df6= df5.sort_values(ascending=[False])
plt.figure(figsize=(10, 5))
plt.ylabel('GWh')
plt.title('Average Electricity Consumption in Europe: Household Market 1990-2014')
df6.plot.bar()
Explanation: United States carries most of the weight of the total electricity consumption in the household market in N. America in the period 1990-2014. The US is followed in consumption by Canada and Mexico. The average consumption in the US is about 6 times more than in Canada and about 10 times the one in Mexico.
End of explanation
#Central & South America
df7 = df4.loc[df4.index.isin(['Antigua and Barbuda', 'Argentina', 'Bahamas','Barbados', 'Belize', 'Bolivia (Plur. State of)','Brazil','Chile','Colombia','Costa Rica','Cuba','Dominica','Dominican Republic','Ecuador','El Salvador','Grenada','Guatemala','Guyana','Haiti','Honduras','Jamaica','Nicaragua','Panama', 'Paraguay','Peru', 'St. Kitts-Nevis', 'St. Lucia','St. Vincent-Grenadines','Suriname','Trinidad and Tobago','Uruguay','Venezuela (Bolivar. Rep.)'])]
df8= df7.sort_values(ascending=[False])
plt.figure(figsize=(10, 5))
plt.ylabel('GWh')
plt.title('Average Electricity Consumption in Central & South America: Household Market 1990-2014')
df8.plot.bar()
Explanation: Consumption in Europe is led by Germany followed by France and the United Kingdom. Spain is in 5th place, with a household consumption during the period of less than half that of Germany. The tail of consumption is led by Poland followed by Belgium and the Netherlands. It seems that there is a correlation between the size of the country and its electricity consumption.
End of explanation
#Plotting all the figures together for comparison.
#North America has a different scale than Europe & "Central & South America"
plt.figure(figsize=(20, 7))
plt.subplot(1, 3, 1)
df10.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,1200000)
plt.title('Av. Elect. Cons. in N. America: Households 1990-2014')
plt.subplot(1, 3, 2)
df6.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,140000)
plt.title('Av. Elect. Cons. in Europe: Households 1990-2014')
plt.subplot(1, 3, 3)
df8.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,140000)
plt.title('Av. Elect. Cons. in Central & South America: Households 1990-2014')
#plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
#Correct the problem of skewness when the 3 graphs are represented together by normalizing with Log the data.
plt.figure(figsize=(20, 7))
plt.subplot(1, 3, 1)
np.log(df10).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in N. America: Households 1990-2014')
plt.subplot(1, 3, 2)
np.log(df6).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in Europe: Households 1990-2014')
plt.subplot(1, 3, 3)
np.log(df8).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in Central & South America: Households 1990-2014')
#plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
Explanation: Electricity consumption between 1990 and 2014 in the household market in Central & South America is led by Brazil followed by Argentina & Venezuela. Although Chile was expected to be among the first three due to its economic development, it is in 5th place after Colombia. Compared to Brazil (first place), household consumption in Argentina (second place) is about 4 times less.
End of explanation
#Histograms showing consumption in the World 1990-2014
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.xlabel("Electricity Consumption")
plt.ylabel("Frequency")
plt.hist(df3['Quantity (GWh)'], bins=5000 ,facecolor='green', alpha=0.5)
plt.axis([0, 20000, 0, 2000])
plt.ylabel('frequency')
plt.xlabel('Year')
plt.title('Distribution of Electricity Consumption in the World 1990-2014')
plt.subplot(1, 2, 2)
plt.xlabel("Electricity Consumption")
plt.ylabel("Frequency")
plt.hist(df3['Quantity (GWh)'], bins=5000 ,facecolor='red', normed=1, cumulative=1, alpha=0.5)
plt.axis([0, 80000, 0, 1])
plt.ylabel('frequency')
plt.xlabel('Year')
plt.title('Cumulative distribution of Electricity Consumption in the World')
plt.tight_layout()
plt.show()
Explanation: The comparison between North America, Europe and Central & South America shows that average electricity consumption in North America is 8.5 times bigger than the one in Europe (comparing the best in breed in each case). Europe compared to Central & South America has an average consumption 1.8 times bigger. Within each region variations are high, concentrating most of the region's consumption in fewer than 10 countries.
End of explanation
#Dynamic analysis of the electricity consumption in Spain (delving into the details of Europe)
#To see this cell properly, it needs to be run individually while screening through the notebook.
#When 'Cell -> Run All' is used, the graph appears together with an 'error message'.
df1 = df.ix[lambda df: w['Country or Area'] == "Spain", :]
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker = 'o', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain by household 1990-2014')
plt.show()
Explanation: There is an asymmetric distribution of electricity consumption values in the world. While most of them are in the range from 0-10 000 GWh, countries like the US have a consumption about 120 times bigger. Additionally, the cumulative frequency rises to 0.95 when the electricity consumption reaches 80 000 GWh, which is similar to the consumption in Brazil.
End of explanation
#Dynamic analysis of electricity consumption in The UK
df2 = df.ix[lambda df: w['Country or Area'] == "United Kingdom", :]
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker = 'x', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in UK by household 1990-2014')
plt.show()
Explanation: There is sustained growth in the electricity consumption in Spain from 1990 to 2014. This is a good indicator of the economic growth of the country, although between 2005 and 2015 there is a decrease in the interannual growth due to aggressive energy efficiency measures.
End of explanation
#Dynamic Comparison of the Electricity consumption between The UK & Spain
plt.figure(figsize=(20, 5))
plt.subplot(1, 3, 1)
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker = 'o', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain by household 1990-2014')
plt.subplot(1, 3, 2)
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker = 'x', s = 30)
plt.ylabel('GWn')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in UK by household 1990-2014')
plt.subplot(1, 3, 3)
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker= "o", s= 30, label="Spain")
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker ="x", s= 30, label="UK")
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain, UK by household 1990-2014')
plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
Explanation: Electricity consumption experiences moderate growth from 1990 to 2015. There is higher growth between 1990 and 2005 than from 2005 onwards. In the last 10 years of the period under analysis, the UK's electricity consumption in the household segment has decreased. At the end of the period electricity consumption levels have fallen back to those of the year 2000.
End of explanation
<END_TASK> |
15,835 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
El Oscilador armonico.
Dibujamos el espacio de fases para la ecuacion $$\ddot{x} = -\omega^2x$$
Para eso lo pasamos a un sistema
Step1: El Pendulo
Dibujamos el espacio de fases para la ecuacion $$\ddot{\theta} = -\frac{g}{l}sin(\theta)$$
Para eso lo pasamos a un sistema
Step2: El Pendulo con perdidas
Dibujamos el espacio de fases para la ecuacion $$\ddot{\theta} = -\frac{g}{l}sin(\theta) - \gamma \dot \theta$$
Para eso lo pasamos a un sistema
Step3: El resorte Oscilaciones longitudinales.
Dibujamos el espacio de fases para la ecuacion $$\ddot{y} = -2\frac{k}{m}\left(1-\frac{l_0}{\sqrt{y^2+l^2}}\right)y$$
Para eso lo pasamos a un sistema | <ASSISTANT_TASK:>
Python Code:
@interact(xin=(-5,5,0.1),yin=(-5,5,0.1))
def plotInt(xin,yin):
xmax = 2
vmax = 5
x = linspace(-xmax, xmax, 15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-vmax, vmax, 15)
X, V = meshgrid(x,v) # Creamos una grilla con eso
# Definimos las constantes
w = 3
# Definimos las ecuaciones
Vp = -w**2*X
Xp = V
def resorte(y, t):
yp = y[1]
vp = -w**2*y[0]
return [yp, vp]
x0 = [xin, yin]
t = linspace(0,10,2000)
sh = integrate.odeint(resorte, x0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(X, V, Xp, Vp, angles='xy')
plot(x, [0]*len(x) ,[0]*len(v), v)
lfase = plot(sh[:,0],sh[:,1],'.')
ylim((-vmax,vmax))
xlim((-xmax,xmax))
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$x$', fontsize=16)
ylabel('$\\dot{x}$',fontsize=16)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
ylim((-xmax, xmax))
Explanation: The harmonic oscillator.
We draw the phase space for the equation $$\ddot{x} = -\omega^2x$$
To do that we rewrite it as a system:
$$
\begin{cases}
\dot{V_{x}} = -\omega^2 x \\
\dot{x} = V_{x}
\end{cases}
$$
End of explanation
@interact(thI=(0,np.pi,0.1),vI=(0,5,0.1))
def plotInt(thI, vI):
h = linspace(-pi,pi,15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-10,10,15)
H, V = meshgrid(h,v) # Creamos una grilla con eso
# Definimos las constantes
g = 10
l = 1
# Definimos las ecuaciones
Vp = -g/l*sin(H)
Hp = V
def pendulo(y, t):
hp = y[1]
vp = -g/l*sin(y[0])
return [hp, vp]
y0 = [thI, vI]
t = linspace(0,10,2000)
sh = integrate.odeint(pendulo, y0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(H, V, Hp, Vp, angles='xy')
plot(h, [0]*len(h) ,[0]*len(v), v)
sh[:,0] = np.mod(sh[:,0] + np.pi, 2*np.pi) - np.pi
lfase = plot(sh[:,0], sh[:,1],'.')
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$\\theta$', fontsize=16)
ylabel('$\\dot{\\theta}$', fontsize=16)
xlim((-pi,pi))
ylim((-10,10))
xtick = arange(-1,1.5,0.5)
x_label = [ r"$-\pi$",
r"$-\frac{\pi}{2}$", r"$0$",
r"$+\frac{\pi}{2}$", r"$+\pi$",
]
ax1.set_xticks(xtick*pi)
ax1.set_xticklabels(x_label, fontsize=20)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
ylim((-pi, pi))
ytick = [-pi, 0, pi]
y_label = [ r"$-\pi$", r"$0$", r"$+\pi$"]
ax2.set_yticks(ytick)
ax2.set_yticklabels(y_label, fontsize=20)
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
Explanation: The pendulum
We draw the phase space for the equation $$\ddot{\theta} = -\frac{g}{l}sin(\theta)$$
To do that we rewrite it as a system:
$$
\begin{cases}
\dot{V_{\theta}} = -\frac{g}{l}sin(\theta) \\
\dot{\theta} = V_{\theta}
\end{cases}
$$
End of explanation
@interact(th0=(-2*np.pi,2*np.pi,0.1),v0=(-2,2,0.1))
def f(th0 = np.pi/3, v0 = 0):
h = linspace(-pi,pi,15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-10,10,15)
H, V = meshgrid(h,v) # Creamos una grilla con eso
# Definimos las constantes
g = 10
l = 1
ga = 0.5
# Definimos las ecuaciones
Vp = -g/l*sin(H) - ga*V #SOLO CAMBIA ACA
Hp = V
def pendulo(y, t):
hp = y[1]
vp = -g/l*sin(y[0]) - ga* y[1] # Y ACAA
return [hp, vp]
y0 = [th0, v0]
t = linspace(0,10,2000)
sh = integrate.odeint(pendulo, y0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(H, V, Hp, Vp, angles='xy')
plot(h, [0]*len(h) , h , -g/l/ga*sin(h)) # Dibujar nulclinas
lfase = plot(sh[:,0],sh[:,1],'.')
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$\\theta$', fontsize=16)
ylabel('$\\dot{\\theta}$',fontsize=16)
xlim((-pi,pi))
ylim((-10,10))
xtick = arange(-1,1.5,0.5)
x_label = [ r"$-\pi$",
r"$-\frac{\pi}{2}$", r"$0$",
r"$+\frac{\pi}{2}$", r"$+\pi$",
]
ax1.set_xticks(xtick*pi)
ax1.set_xticklabels(x_label, fontsize=20)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
ylim((-pi, pi))
ytick = [-pi, 0, pi]
y_label = [ r"$-\pi$", r"$0$", r"$+\pi$"]
ax2.set_yticks(ytick)
ax2.set_yticklabels(y_label, fontsize=20)
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
Explanation: The damped pendulum
We draw the phase space for the equation $$\ddot{\theta} = -\frac{g}{l}sin(\theta) - \gamma \dot \theta$$
To do that we rewrite it as a system:
$$
\begin{cases}
\dot{V_{\theta}} = -\frac{g}{l}sin(\theta) - \gamma \dot \theta \\
\dot{\theta} = V_{\theta}
\end{cases}
$$
End of explanation
@interact(x0=(-1,1,0.1),v0=(0,1,0.1))
def f(x0=0,v0=1):
ymax = 2
vmax = 5
y = linspace(-ymax, ymax, 15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-vmax, vmax, 15)
Y, V = meshgrid(y,v) # Creamos una grilla con eso
# Definimos las constantes
k = 10
l = 1
l0 = 1.2
m = 1
# Definimos las ecuaciones
Vp = -2*k/m*(1-l0/(sqrt(Y**2+l**2)))*Y
Yp = V
def resorte(y, t):
yp = y[1]
vp = -2*k/m*(1-l0/(sqrt(y[0]**2+l**2)))*y[0]
return [yp, vp]
y0 = [x0, v0]
t = linspace(0,10,2000)
sh = integrate.odeint(resorte, y0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(Y, V, Yp, Vp, angles='xy')
plot(y, [0]*len(y) ,[0]*len(v), v)
lfase = plot(sh[:,0],sh[:,1],'.')
ylim((-vmax,vmax))
xlim((-ymax,ymax))
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$y$', fontsize=16)
ylabel('$\\dot{y}$', fontsize=16)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
ylim((-ymax, ymax))
Explanation: The spring: longitudinal oscillations.
We draw the phase space for the equation $$\ddot{y} = -2\frac{k}{m}\left(1-\frac{l_0}{\sqrt{y^2+l^2}}\right)y$$
To do that we rewrite it as a system:
$$
\begin{cases}
\dot{V_{y}} = -2\frac{k}{m}\left(1-\frac{l_0}{\sqrt{y^2+l^2}}\right)y \\
\dot{y} = V_{y}
\end{cases}
$$
End of explanation
<END_TASK> |
15,836 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
FastAI Machine Learning 1 โ Random Forests
CodeAlong / Reimplementation of
Step1: 2. Data
Step2: In any sort of analytics work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. Even if you've read descriptions about your data, the actual data may not be what you expect.
Step3: Lecture 2 00
Step4: 2.2.2 Initial Processing
Step5: From the error above, we see we need all our columns to be numbers.
We'll start off by replacing the date ('saledate') column with a whole bunch of date-related columns.
Step6: We can see those new date columns below where 'saledate' used to be
Step7: The new date columns are numbers, but
Step8: At first glance it doesn't look like anything's changed, but if you take a deeper look, you'll see the data type has changed to 'category'. 'category' is a Pandas class, with attributes accesible via .cat.xxxx
The index below shows that 'High' --> 0, 'Low' --> 1, 'Medium' --> 2
Step9: To actually use this dataset and turn it into numbers, what we need to do is to take every categorical column and replace it with .cat.codes
This is done further below in 2.2.3 Pre-processing via proc_df()
Step10: 2.2.3 Pre-processing
The nas coming out of proc_df() is a dictionary, where the keys are the names of columns with missing values, and the values are the medians.
Optionally you can pass nas as an additional arg to proc_df(), and it'll make sure it adds those specific columns and uses those specific medians. IE
Step11: The R^2 score shown below shows the variance (or mean?) of the data. It shows how much the data varies.
Step12: A validation set helps handle the issue of overfitting. Make it st it shares the test set's properties, ie
Step13: Lecture 2 00
Step14: Here we see our model, which had 0.982 R2 on the training set, got only 0.887 on the validation set, which makes us think it's overfitting quite badly. However it's not too badly because the RMSE on the logs of the prices (0.25) would've put us in the top 25% of the competition anyway (100/407).
*reran this on another machine
3.2 Speeding things up
Fast feedback is important for iteration and good interactive analysis. To this end we can pass in the subset par to proc_df() which'll randomly sample the data. We want no more than a 10sec wait when experimenting.
When you do this you still have to be careful your validation set doesn't change, and your training set doesn't overlap with it. So after sampling 30k items, we'll then taken the first 20k (since they're sorted by date) for our training data, and ignore the other 10k -- keeping our validation set the same as before.
Step15: Instead of 83 seconds of total compute time (15.2s thanks to multi-cores), we now run in only 2.94 total seconds of compute.
3.3 Single tree
Let's use that subset to build a model that's so simple we can actually take a look at it. We'll build a forest made of trees - and before we look at the forest, we'll look at the trees.
In scikit-learn the trees are called 'estimators'. Below we'll make a forest with a single tree n_estimators=1, and a small tree at that max_depth=3, and we'll turn off the random-component of the RandomForest bootstrap=False. Now it'll create a small deteriministic tree.
Step16: After fitting the model and printing the score, the R2 score has dropped from 0.77 to 0.39. This is not a good model. It's better than the Mean-model (being > 0) but still not good.
But we can draw this model to take a look at it
Step17: A tree is a series of binary decisions, or splits. Our tree first of all decided to split on Coupler_System โค 0.5. That's actually a boolean variable, True/False. Within the group where it was True, it further split those into YearMade โค 1988 (1987.5), and on, etc.
Looking at our tree, in the first box
Step18: If we don't limit depth, the training R^2 is of of course, a perfect 1.0. Because we can exactly predict every training element because it's in a leaf-node all it's own.
But the validation R^2 is not 1.0. It's a lot better than our super-shallow tree, but not as good as we'd like.
We want to find another way of making these trees better. And we'll do that by making a forest.
What's a forest?
To create a forest we're going to use a statistical technique called bagging.
3.4 Bagging
3.4.1 Intro to Bagging
You can bag any kind of model. The Random Forest is a way of bagging Decision Trees.
Bagging
Step19: We'll grab the predictions for each individual tree, and look at one example.
Each tree is stored in the attribute
Step20: We see a shape of 10 different sets of predictions and for each one our validation set of size 12,000 -- so 12,000 predictions for each of the 10 trees
Step21: Above, preds[
Step22: Note that the final value on the plot is the same as the final R^2 score returned by the RandomForest -- about 0.7748 here.
The shape of this curve suggests that adding more trees isn't going to help much. Let's check (Compare this to our original model on a sample).
Step23: At this point, it looks like we're inside signal noise. More trees is never going to make the model worse - but a lower score is easily explained as whatever diminished accuracy gain being overwhelmed by noise in the random-sampling of the data.
If that's the case, I'd expect being able to see an R2 score greater than 0.79786, with the same hyperparameters
Step24: Well there you have it. And the highest score so far to boot.
The n_estimators hyperparameter is a tradeoff between improvement vs. computation time.
Interesting note above, number of trees increases computation time Linearly.
You can still get many of the same insights as a large forest, with a few dozen trees, like 20 or 30.
3.4.2 Out-of-Bag (OOB) Score
Is our validation set worse than our training set because we're overfitting, or because the validation set is for a different time period, or a bit of both? With the existing information we've shown, we can't tell. However, Random Forests have a very clever trick called out-of-bag (OOB) error which can handle this (and more!)
The idea is to calculate error on the training set, but only include the trees in the calculation of a row's error where that row was not included in training that tree. This allows us to see whether the model is over-fitting, without needing a separate validation set.
This also has the benefit of allowing us to see whether our model generalizes, even if we only have a small amount of data so want to avoid separating some out to create a validation set.
This is as simple as adding one more parameter to our model constructor. We print the OOB error last in our print_score function below.
So what if your dataset is so small you don't want to pull anything out for a validation set - because doing so means you no longer have enough data to build a good model? What do you do?
There's a cool trick unique to Random Forests. We could recognize that for each tree there are some portion of rows not used... So we could pass in the rows not used by the 1st tree to the 1st, the rows not used by the 2nd to the 2nd, and so on. So technically we'd have a different validation set for each tree. To calculate our prediction, we would average all of the trees where that row was not used for training.
As long as you have enough trees, every row is going to appear in the OOB sample for one of them at least. So you'll be averaging a few trees - more if you have more trees.
You can create an OOB prediction by averaging all the trees you didn't use to train each individual row, and then calculate RMSE, R2, etc, on that.
If you pass oob_score=True to scikit-learn, it'll do that for you. It'll then create an attribute oob_score_. Our print_score(.) function at top prints out the oob score if it exists.
Step25: The extra value at the end is the R2 for the oob score. We want it to be very close to the R2 for the validation set (2nd to last value) although that doesn't seem to be the case here.
In general the OOB R2 score will slightly underestimate how generalizable the model is. The more trees you have, the less it'll be less by.
Although in this case my OOB R2 score is actually better than my validation R2... NOTE (L2 1
Step26: The basic idea is this
Step27: We don't see that much of an improvement over the R2 with the 20k data-subset, because we haven't used many estimators yet.
Since each additional tree allows th emodel to see more data, this approach can make additional trees more useful.
Step28: With more estimators the model can see a larger portion of the data, and the R2 (2nd last value) has gone up from 0.8591 to 0.8755.
The Favorita groceries competition has over a hundred-million rows of data. There's no way you'll create an RF using 128M rows in every tree. It'll take forever. Instead you can use set_rf_samples(.) set to 100k or 1M.
The trick here is with a Random Forest using this technique, no dataset is too big. Even if it has 100B rows. You can just create a bunch of trees, each with a different subset.
NOTE
Step29: Let's get a baseline for this full set to compare to. This'll train 40 estimators all the way down until the leaf nodes have just one sample in them.
Step30: This gets us a 0.899 R2 on the validation set, or a 0.908 on the OOB.
L2 1
Step31: Setting min_samples_leaf = 3 stops the RF when each leaf-node has 3 or fewer samples in it. In practice this means 1 or 2 fewer levels of decisions being made - which means around half the number of decision criteria, so it'll train much quicker. It also means when we look at an individual tree, instead of taking one point, we're taking the average of 3 points - so we'd expect the trees to generalize better, though each tree is likely to be less powerful than before.
Values in the range of 1, 3, 5, 10, 25 tend to work well for min_samples_leaf.
If you have a massive dataset and aren't using small samples, you may need a min_samples_leaf in the hundreds or thousands.
In this case, going from the default leaf-size of 1 to 3 has increased our valset R2 from 0.899 to 0.903.
We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of columns for each split. We do this by specifying max_features, which is the proportion of features to randomly select from at each split.
Idea | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
PATH = "data/bulldozers/"
!ls {PATH}
Explanation: FastAI Machine Learning 1 โ Random Forests
CodeAlong / Reimplementation of: https://github.com/fastai/fastai/blob/master/courses/ml1/lesson1-rf.ipynb
Lessons 1 & 2 | this notebook has been rerun on another machine โ numbers may not exactly match notes (though trends will be the same).
1.2 Imports
End of explanation
df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False,
parse_dates=["saledate"])
Explanation: 2. Data
End of explanation
def display_all(df):
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
display_all(df_raw.tail().transpose())
display_all(df_raw.describe(include='all').transpose())
Explanation: In any sort of analytics work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. Even if you've read descriptions about your data, the actual data may not be what you expect.
End of explanation
df_raw.SalePrice = np.log(df_raw.SalePrice)
Explanation: Lecture 2 00:06:08 It's important to note what metric is being used for a project. Generally, selecting the metric(s) is an important part of project setup. However, in this case Kaggle tells us what metric to use: RMSLE (Root Mean Squared Log Error) between the actual and predicted auction prices. Therefore we take the log of the prices, so that RMSE will give us what we need.
$$\sqrt{\frac{1}{n}\sum\big(\ln{(acts)} - \ln{(preds)}\big)^2}$$
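As a quick sketch (not part of the original notebook): once the prices are log-transformed, plain RMSE on the transformed target gives the same number as RMSLE on the raw prices.
import numpy as np

def rmsle(preds, acts):
    # root mean squared error of the logs - identical to RMSE once the target is already log-transformed
    return np.sqrt(np.mean((np.log(preds) - np.log(acts)) ** 2))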
End of explanation
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice)
Explanation: 2.2.2 Initial Processing
End of explanation
add_datepart(df_raw, 'saledate')
df_raw.saleYear.head()
Explanation: From the error above, we see we need all our columns to be numbers.
We'll start off by replacing the date ('saledate') column with a whole bunch of date-related columns.
End of explanation
df_raw.columns
Explanation: We can see those new date columns below where 'saledate' used to be:
End of explanation
train_cats(df_raw)
df_raw.UsageBand
Explanation: The new date columns are numbers, but:
The categorical variables are currently stored as strings, which is inefficient, and doesn't provide the numeric coding required for a random forest. Therefore we call train_cats to convert strings to Pandas categories.
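Conceptually, train_cats does something like the following plain-pandas sketch (illustration only - the real fastai helper also keeps category orderings consistent across dataframes):
def convert_strings_to_categories(df):
    # illustrative only: turn every string (object) column into an ordered pandas category
    for name in df.columns:
        if df[name].dtype == 'object':
            df[name] = df[name].astype('category').cat.as_ordered()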
End of explanation
df_raw.UsageBand.cat.categories
# we can do .cat.codes to get the actual numbers
df_raw.UsageBand.cat.codes
Explanation: At first glance it doesn't look like anything's changed, but if you take a deeper look, you'll see the data type has changed to 'category'. 'category' is a Pandas class, with attributes accesible via .cat.xxxx
The index below shows that 'High' --> 0, 'Low' --> 1, 'Medium' --> 2
End of explanation
df_raw.UsageBand.cat.set_categories(['High', 'Medium', 'Low'], ordered=True, inplace=True)
df_raw.UsageBand = df_raw.UsageBand.cat.codes
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
os.makedirs('tmp', exist_ok=True)
df_raw.to_feather('tmp/bulldozers-raw.feather')
Explanation: To actually use this dataset and turn it into numbers, what we need to do is to take every categorical column and replace it with .cat.codes
This is done further below in 2.2.3 Pre-processing via proc_df()
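Roughly, that replacement looks like this sketch (not the actual proc_df/numericalize source, just the idea - codes are shifted by +1 so the -1 used for missing values maps to 0):
def categories_to_codes(df):
    # illustrative only: categorical columns become their integer codes
    for name in df.columns:
        if df[name].dtype.name == 'category':
            df[name] = df[name].cat.codes + 1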
End of explanation
# df_raw = pd.read_feather('tmp/bulldozers-raw.feather')
df, y, nas = proc_df(df_raw, 'SalePrice')
??numericalize
df.columns
Explanation: 2.2.3 Pre-processing
The nas coming out of proc_df() is a dictionary, where the keys are the names of columns with missing values, and the values are the medians.
Optionally you can pass nas as an additional arg to proc_df(), and it'll make sure it adds those specific columns and uses those specific medians. IE: it gives you the ability to say "process this test set exactly the same way we processed the training set." FAML1-L3: 00:07:00
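For example, something like this sketch (the test-set file name is an assumption, and the exact fastai call signatures may differ slightly by version):
df_test_raw = pd.read_csv(f'{PATH}Test.csv', low_memory=False, parse_dates=["saledate"])
add_datepart(df_test_raw, 'saledate')
apply_cats(df_test_raw, df_raw)                      # reuse the training-set category encodings
df_test, _, _ = proc_df(df_test_raw, na_dict=nas)    # reuse the training-set medians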
End of explanation
m = RandomForestRegressor(n_jobs=-1)
m.fit(df, y)
m.score(df, y)
Explanation: The R^2 score shown below shows the variance (or mean?) of the data. It shows how much the data varies.
End of explanation
def split_vals(a, n): return a[:n].copy(), a[n:].copy()
n_valid = 12000 # same as Kaggle's test set size
n_trn = len(df) - n_valid
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape
Explanation: A validation set helps handle the issue of overfitting. Make it st it shares the test set's properties, ie: give it 12k rows just like the test set, and split it as the first n - 12k rows for training and the last 12k rows as validation set.
End of explanation
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
m = RandomForestRegressor(n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: Lecture 2 00:17:58 Creating your validation set is the most important thing [I think] you need to do when you're doing a Machine Learning project โ at least in terms of the actual modeling part.
A Note on the validation set: in general any time you're building a model that has a time element, you want your test set to be a separate time period -- and consequently your validation set too. In this case the dataset is already sorted by date, so you can just take the later portion.
3. Random Forests
3.1 Base Model
Let's try our model again, this time with separate training and validation sets.
End of explanation
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice', subset=30000, na_dict=nas)
X_train, _ = split_vals(df_trn, 20000)
y_train, _ = split_vals(y_trn, 20000)
m = RandomForestRegressor(n_jobs=-1) # n_jobs=-1: set to num. cores on CPU
%time m.fit(X_train, y_train)
print_score(m)
Explanation: Here we see our model, which had 0.982 R2 on the training set, got only 0.887 on the validation set, which makes us think it's overfitting quite badly. However it's not too badly because the RMSE on the logs of the prices (0.25) would've put us in the top 25% of the competition anyway (100/407).
*reran this on another machine
3.2 Speeding things up
Fast feedback is important for iteration and good interactive analysis. To this end we can pass in the subset par to proc_df() which'll randomly sample the data. We want no more than a 10sec wait when experimenting.
When you do this you still have to be careful your validation set doesn't change, and your training set doesn't overlap with it. So after sampling 30k items, we'll then taken the first 20k (since they're sorted by date) for our training data, and ignore the other 10k -- keeping our validation set the same as before.
End of explanation
m = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: Instead of 83 seconds of total compute time (15.2s thanks to multi-cores), we now run in only 2.94 total seconds of compute.
3.3 Single tree
Let's use that subset to build a model that's so simple we can actually take a look at it. We'll build a forest made of trees - and before we look at the forest, we'll look at the trees.
In scikit-learn the trees are called 'estimators'. Below we'll make a forest with a single tree n_estimators=1, and a small tree at that max_depth=3, and we'll turn off the random-component of the RandomForest bootstrap=False. Now it'll create a small deterministic tree.
End of explanation
draw_tree(m.estimators_[0], df_trn, precision=3)
df_raw.fiProductClassDesc.cat.categories
# df_raw.fiProductClassDesc.cat.codes
Explanation: After fitting the model and printing the score, the R2 score has dropped from 0.77 to 0.39. This is not a good model. It's better than the Mean-model (being > 0) but still not good.
But we can draw this model to take a look at it:
End of explanation
m = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: A tree is a series of binary decisions, or splits. Our tree first of all decided to split on Coupler_System โค 0.5. That's actually a boolean variable, True/False. Within the group where it was True, it further split those into YearMade โค 1988 (1987.5), and on, etc.
Looking at our tree, in the first box: there are 20,000 rows in our data set (samples), the average of the log of price is 10.1, and if we built a model where we just used that average all the time: then the mean-squared-error would be 0.456.
So this first box is like the Denominator of an R^2. The most basic model is a tree with zero splits, just predict the average.
Looking at the tree above, the best single binary split we can make turns out to be splitting on whether Coupler_System ≤ 0.5 (True or False).
If we do that, the MSE of Coupler System < 0.5 (ie: False) goes down from 0.456 to 0.111, improving the error a lot. In the other group, it's only improved slightly, from 0.456 to 0.398.
We can also see the Coupler System False group is only a small percentage: 1,721 samples of the total 20,000.
If you wanted to know the single best binary decision to make for your data, how could you find it?
We want to build a Random Forest from scratch.
The first step is to create a tree. The first step to creating a tree is to create the first binary decision. How do you do this?
FAML1-0:39:02
Enumerate the different splits for each variable and choose the one with the lowest MSE. So how do we do the enumeration?
For each variable, for each possible value of that variable: see if its better. What does better mean?:
We could take the weighted average of the new MSE times number of samples.
That would be the same as saying:
I've got a model. The model is a single binary decision. For everybody with YearMade โค 1987.5, I'll fill-in 10.21, for everyone > 1987.5 I'll fill-in 9.184, and calculate the RMSE of this model.
That'll give the same answer as the weighted-average idea.
So now we have a single number that represents how good a split is: the weighted average of the MSE's of the two groups it creates.
We also have a way to find the best split, which is to try every variable, and every possible value of that variable, and see which variable and which value gives us a split with the best score.
The granuality is defined by the variables. So, Coupler_System only has two possible values, True or False. YearMade ranges from 1960 to 2010, so we just try all those unique values. All those possible split points.
Now rinse and repeat: with the conditions set by the split: continue.
Claim: it's Never necessary to do more than 1 split at a level
Why?: because you can split it again.
THAT is the entirety of creating a Decision Tree. You stop either when you hit some requested limit (here when depth reaches 3), or when the leaf-nodes each only contain 1 thing.
That is how we grow decision trees.
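Here's a rough sketch of that split-scoring idea in plain numpy/pandas (illustrative only - this is not how scikit-learn actually implements it; here X is a DataFrame of numeric features like X_train and y is a numpy array like y_train):
import numpy as np

def split_score(x, y, threshold):
    # weighted average of the MSEs of the two groups the split creates (lower is better)
    lhs = x <= threshold
    rhs = ~lhs
    if lhs.sum() == 0 or rhs.sum() == 0: return np.inf
    lhs_mse = ((y[lhs] - y[lhs].mean()) ** 2).mean()
    rhs_mse = ((y[rhs] - y[rhs].mean()) ** 2).mean()
    return (lhs_mse * lhs.sum() + rhs_mse * rhs.sum()) / len(y)

def best_split(X, y):
    # try every unique value of every column; keep the split with the lowest score
    best = (np.inf, None, None)
    for col in X.columns:
        for threshold in np.unique(X[col].values):
            score = split_score(X[col].values, y, threshold)
            if score < best[0]:
                best = (score, col, threshold)
    return best  # (score, column, threshold)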
Now this tree isn't very good - it has a validation R^2 of 0.39. We can try to make it better by letting it grow deeper (removing max_depth=3).
Bigger tree:
End of explanation
m = RandomForestRegressor(n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: If we don't limit depth, the training R^2 is of of course, a perfect 1.0. Because we can exactly predict every training element because it's in a leaf-node all it's own.
But the validation R^2 is not 1.0. It's a lot better than our super-shallow tree, but not as good as we'd like.
We want to find another way of making these trees better. And we'll do that by making a forest.
What's a forest?
To create a forest we're going to use a statistical technique called bagging.
3.4 Bagging
3.4.1 Intro to Bagging
You can bag any kind of model. The Random Forest is a way of bagging Decision Trees.
Bagging: what if we created 5 different models, each of which was only somewhat predictive, but the models weren't at all correlated with each other -- their predictions weren't correlated. That would mean the 5 models would've had to've found different insights into relationships in the data.
If you took the average of those models, you're effectively taking in the insights from each of them.
Averaging models: Ensembling.
Let's come up with a more specific idea of how to do this. What if we created a whole lot of these trees: big, deep, massively-overfit D-Trees. But each tree gets a random 1/10th of the data. And do that a hundred times with different random samples.
All the trees will have errors, but random errors. What's the average of a bunch of random errors? Zero. So if we take the average the error will average to zero and what's left is the true relationship.
That's a Random Forest.
After making those trees, we'll take our test data, run it through the tree, get to the leaf node, take the average in that leaf node for all the trees, and average them all together.
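Just to make the idea concrete, here's a hand-rolled sketch of bagging (illustrative only - scikit-learn's bootstrapping differs in the details):
from sklearn.tree import DecisionTreeRegressor
import numpy as np

def bagged_predict(X_trn, y_trn, X_val, n_trees=10, sample_frac=0.1):
    # fit each deep, overfit tree on a different random subset, then average the predictions
    all_preds = []
    for _ in range(n_trees):
        idxs = np.random.permutation(len(X_trn))[:int(len(X_trn) * sample_frac)]
        tree = DecisionTreeRegressor()
        tree.fit(X_trn.iloc[idxs], y_trn[idxs])
        all_preds.append(tree.predict(X_val))
    return np.mean(all_preds, axis=0)
In practice we don't write this by hand - scikit-learn packages it all up for us.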
To do that we call RandomForestRegressor(.). An 'estimator' is what scikit-learn calls a tree. By default n_estimators = 10
End of explanation
preds = np.stack([t.predict(X_valid) for t in m.estimators_])
preds[:,0], np.mean(preds[:,0]), y_valid[0]
Explanation: We'll grab the predictions for each individual tree, and look at one example.
Each tree is stored in the attribute: .estimators_. Below gives a list of arrays of predictions. Each array will be all the predictions for that tree.
np.stack(.) concatenates them on a new axis.
End of explanation
preds.shape
Explanation: We see a shape of 10 different sets of predictions and for each one our validation set of size 12,000 -- so 12,000 predictions for each of the 10 trees:
End of explanation
plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);
Explanation: Above, preds[:,0] returns an array of the first prediction for each of our 10 trees. np.mean(preds[:,0]) returns the mean of those predictions, and y_valid[0] shows the actual answer. Most of our trees had inaccurate predictions, but the mean of them was actually pretty close.
Note: I probably made a mistake somewhere - or the data was too small - for multiple trees getting the exact answer
The models are based on different random subsets, and so their errors aren't correlated with each other. The key insight here is to construct multiple models which are better than nothing, and whose errors are - as much as possible - not correlated with each other.
One of our first tunable hyperparameters is our number of trees.
What scikit-learn does by default is for N rows it picks out N rows with replacement: bootstrapping. ~ 63.2% of the rows will be represented, and a bunch of them multiple times.
The whole point of Machine Learning is to identify which variables matter the most and how they relate to each other and to your dependent variable.
Random Forests were discovered/invented with the aim of creating trees as predictive and as uncorrelated as possible -- 1990s. Recent research has focused more on minimizing correlation: creating forests with trees that are individually less predictive, but with very little correlation.
There's another scikit-learn class called:
sklearn.ensemble.ExtraTreesClassifier
or
sklearn.ensemble.ExtraTreesRegressor
With the exact same API (just replace RandomForestRegressor). It's called an "Extremely Randomized Trees" model. It does exactly what's discussed above, but instead of trying every split of every variable, it randomly tries a few splits of a few variables.
So it's much faster to train, and has more randomness. With the time saved, you can build more trees - and therefore get better generalization.
In practice: if you have crappy individual trees, you just need more models to get a good overall model.
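For instance (a sketch, reusing the variables and print_score function from above - the hyperparameter values are arbitrary):
from sklearn.ensemble import ExtraTreesRegressor
# drop-in replacement for RandomForestRegressor; faster per tree, so we can afford more of them
m = ExtraTreesRegressor(n_estimators=80, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)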
Now the obvious question: isn't this computationally expensive? Going through every possible value of a 32-bit Float or .. God forbid.. a 64-bit Float? Yes.
Firstly, that's why it's good your CPU runs in GHz, billions of clock-cycles per second, and more so why Multi-Core processors are fantastic. Each core has SIMD capability -- Single Instruction Multiple Data -- allowing it to perform up to 8 computations at once - and that's per core.
On the GPU performance is measured in TFLOPS - Tera FLOPS - Trillions of Floating Point Operations per Second.
This is why, when designing Algorithms, it's very difficult for us Humans to realize how *stupid* algorithms should be, given how fast modern computers are.
It's quite a few operations... but at trillions per second, you hardly notice it.
Let's do a little data analysis. Let's go through each of the 10 trees, take the mean of all the predictions up to the ith tree and plot the R^2:
End of explanation
m = RandomForestRegressor(n_estimators=20, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=40, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=80, n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=160, n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: Note that the final value on the plot is the same as the final R^2 score returned by the RandomForest -- about 0.7748 here.
The shape of this curve suggests that adding more trees isn't going to help much. Let's check (Compare this to our original model on a sample).
End of explanation
m = RandomForestRegressor(n_estimators=160, n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: At this point, it looks like we're inside signal noise. More trees is never going to make the model worse - but a lower score is easily explained as whatever diminished accuracy gain being overwhelmed by noise in the random-sampling of the data.
If that's the case, I'd expect being able to see an R2 score greater than 0.79786, with the same hyperparameters:
End of explanation
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Well there you have it. And the highest score so far to boot.
The n_estimators hyperparameter is a tradeoff between improvement vs. computation time.
Interesting note from above: the number of trees increases computation time linearly.
You can still get many of the same insights as a large forest, with a few dozen trees, like 20 or 30.
3.4.2 Out-of-Bag (OOB) Score
Is our validation set worse than our training set because we're overfitting, or because the validation set is for a different time period, or a bit of both? With the existing information we've shown, we can't tell. However, Random Forests have a very clever trick called out-of-bag (OOB) error which can handle this (and more!)
The idea is to calculate error on the training set, but only include the trees in the calculation of a row's error where that row was not included in training that tree. This allows us to see whether the model is over-fitting, without needing a separate validation set.
This also has the benefit of allowing us to see whether our model generalizes, even if we only have a small amount of data so want to avoid separating some out to create a validation set.
This is as simple as adding one more parameter to our model constructor. We print the OOB error last in our print_score function below.
So what if your dataset is so small you don't want to pull anything out for a validation set - because doing so means you no longer have enough data to build a good model? What do you do?
There's a cool trick unique to Random Forests. We could recognize that for each tree there are some portion of rows not used... So we could pass in the rows not used by the 1st tree to the 1st, the rows not used by the 2nd to the 2nd, and so on. So technically we'd have a different validation set for each tree. To calculate our prediction, we would average all of the trees where that row was not used for training.
As long as you have enough trees, every row is going to appear in the OOB sample for one of them at least. So you'll be averaging a few trees - more if you have more trees.
You can create an OOB prediction by averaging all the trees you didn't use to train each individual row, and then calculate RMSE, R2, etc, on that.
If you pass oob_score=True to scikit-learn, it'll do that for you. It'll then create an attribute oob_score_. Our print_score(.) function at top prints out the oob score if it exists.
End of explanation
n_trn
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice')
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
len(df_trn), len(X_train)
Explanation: The extra value at the end is the R2 for the oob score. We want it to be very close to the R2 for the validation set (2nd to last value) although that doesn't seem to be the case here.
In general the OOB R2 score will slightly underestimate how generalizable the model is. The more trees you have, the smaller that underestimate will be.
Although in this case my OOB R2 score is actually better than my validation R2... NOTE (L2 1:21:54) the OOB score is better because it's taken from a random sample of the data, whereas our validation set is not: it's a different time-period which is harder to predict.
The OOB R2 score is handy for finding an automated way to set hyperparameters. A grid search is one way to do this: you pass scikit-learn a list of all the hyperparameters you want to tune and a list of all the values you want to try for each one, and it'll run your model on every possible combination and tell you which one is best. The OOB score is a great choice for measuring that.
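For example, a rough sketch of that kind of search using the OOB score (reusing the variables above; the candidate values are just illustrative):
# compare a few candidate settings by OOB R^2 instead of a separate validation set
for min_leaf in [1, 3, 5, 10, 25]:
    m = RandomForestRegressor(n_estimators=40, min_samples_leaf=min_leaf,
                              n_jobs=-1, oob_score=True)
    m.fit(X_train, y_train)
    print(min_leaf, m.oob_score_)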
3.5 Reducing Over-Fitting
3.5.1 Subsampling Lecture 2 1:14:48
It turns out that one of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis: subsampling. Let's return to using our full dataset, so that we can demonstrate the impact of this technique. [1:15:00]
What we did before wasn't ideal. We took a subset of 30k rows of the data, and built our models on that - meaning every tree in our RF is a different subset of that subset of 30k. Why? Why not pick a totally different subset of 30k for each tree? So leave the total 300k dataset as is, and if we want to make things faster, pick a different subset of 30k each time. Instead of bootstrapping the entire set of rows, let's just randomly sample a subset of the data.
Let's do that by calling proc_df() without the subset par to get all our data.
End of explanation
set_rf_samples(20000)
m = RandomForestRegressor(n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: The basic idea is this: rather than limit the total amount of data that our model can access, let's instead limit it to a different random subset per tree. That way, given enough trees, the model can still see all the data, but for each individual tree, it'll be just as fast as if we'd cut down our dataset as before.
When we run set_rf_samples(20000), when we run a RF, it's not going to bootstrap an entire set of 390k rows, it's going to grab a subset of 20k rows.
So when running, it'll be just as fast as when we did a random sample of 20k, but now every tree can have access to the entire dataset. So if we have enough estimators/trees, the model will eventually see everything.
End of explanation
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: We don't see that much of an improvement over the R2 with the 20k data-subset, because we haven't used many estimators yet.
Since each additional tree allows the model to see more data, this approach can make additional trees more useful.
End of explanation
reset_rf_samples()
Explanation: With more estimators the model can see a larger portion of the data, and the R2 (2nd last value) has gone up from 0.8591 to 0.8755.
The Favorita groceries competition has over a hundred-million rows of data. There's no way you'll create an RF using 128M rows in every tree. It'll take forever. Instead you can use set_rf_samples(.) set to 100k or 1M.
The trick here is with a Random Forest using this technique, no dataset is too big. Even if it has 100B rows. You can just create a bunch of trees, each with a different subset.
NOTE: right now OOB Scores and set_rf_samples(.) are not compatible with each other, so you need to set oob_score=False if you use set_rf_samples(.) because the OOB score will be meaningless.
To turn off set_rf_samples(.), just call reset_rf_samples().
A great big tip - that very few people in Industry or Academia use:
Most people run all their models on all their data all the time using their best possible pars - which is just pointless.
If you're trying to find which features are important and how they relate to oneanother, having that 4th dec place of accuracy isn't going to change anything.
Do most of your models on a large-enough sample size that your accuracy is reasonable (within a reasonable distance of the best accuracy you can get) and it's taking a few seconds to train - so you can do your analysis.
3.5.2 Tree Building Parameters
We revert to using a full bootstrap sample in order to show the impact of the other over-fitting avoidance methods.
End of explanation
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Let's get a baseline for this full set to compare to. This'll train 40 estimators all the way down until the leaf nodes have just one sample in them.
End of explanation
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
Explanation: This gets us a 0.899 R2 on the validation set, or a 0.908 on the OOB.
L2 1:21:54 Our OOB is better than our ValSet because our ValSet is actually a different time period (future), whereas the OOB is a random sample. It's harder to predict the future.
Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with min_samples_leaf) that we require some minimum number of rows in every leaf node. This has two benefits:
There are less decision rules for each leaf node; simpler models should generalize better.
The predictions are made by average more rows in the leaf node, resulting in less volatility.
End of explanation
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: Setting min_samples_leaf = 3 stops the RF when each leaf-node has 3 or fewer samples in it. In practice this means 1 or 2 fewer levels of decisions being made - which means around half the number of decision criteria, so it'll train much quicker. It also means when we look at an individual tree, instead of taking one point, we're taking the average of 3 points - so we'd expect the trees to generalize better, though each tree is likely to be less powerful than before.
Values in the range of 1, 3, 5, 10, 25 tend to work well for min_samples_leaf.
If you have a massive dataset and aren't using small samples, you may need a min_samples_leaf in the hundreds or thousands.
In this case, going from the default leaf-size of 1 to 3 has increased our valset R2 from 0.899 to 0.903.
We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of columns for each split. We do this by specifying max_features, which is the proportion of features to randomly select from at each split.
Idea: the less correlated your trees are with each other, the better. Imagine you had 1 column that was so much better than all the other cols at being predictive, that every tree you built - regardless of which subset of rows - always started with that column. So the trees are all going to be pretty similar. But you can imagine there being some interaction of variables where that interaction is more important than that individual column.
So if each tree always splits on the same thing the first time, you're not going to get much variation on those trees.
So what we do in addition to just taking a subset of those rows, is at every single split point take a different subset of columns.
This is slightly different to the row sampling. In row-sampling, each new tree is based on a random set of rows. In column sampling every individual binary split chooses from a different subset of columns.
In other words: rather than looking at every possible level of every possible column, we look at every possible level of a random subset of columns. And at each decision point we use a different random subset.
How many? You get to pick. max_features=0.5 means you choose half of them. The default is to use all.
J. Howard has found good values to be 1, 0.5, log2, and sqrt.
NOTE the RF never removes variables as it's building a Decision Tree. It just goes through the different split points based on possible values.
You may have noticed our RMSE of Log(price) has been dropping on our validation set as well (2nd value), now down to 0.23306.
Checking the public leaderboard, 0.23305579 gets us to 25th place. Unfortunately the competition is old enough that direct comparisons are difficult, but we get the general idea.
Roughly speaking, we'd've gotten in the top 25 of this competition with a brainless Random Forest with some brainless hyperparameter tuning.
This is why the Random Forest is such an important first step, and often *only* step, in Machine Learning.
End of explanation
<END_TASK> |
15,837 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Regression Week 4
Step1: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: QUIZ QUESTION
Step7: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint
Step8: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION
Step9: Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
Step10: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION
Step11: Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage
Step12: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
Step13: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step14: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
Step15: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step16: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step17: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
Step18: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step19: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]
Step20: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step21: QUIZ QUESTIONS
Step22: You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
Step23: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
Step24: QUIZ QUESTION | <ASSISTANT_TASK:>
Python Code:
import graphlab
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature.apply(lambda x : x**power)
return poly_sframe
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
sales = sales.sort(['sqft_living','price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
l2_small_penalty = 1e-5
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
model15.get("coefficients")
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
End of explanation
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
Explanation: QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
End of explanation
set_1_15_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = set_1_15_data.column_names() # get the name of the features
set_1_15_data['price'] = set_1['price'] # add price to the data since it's the target
model_set_1_15 = graphlab.linear_regression.create(
set_1_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_1_15_data['power_1'],set_1_15_data['price'],'.',
set_1_15_data['power_1'], model_set_1_15.predict(set_1_15_data),'-')
print "set_1"
model_set_1_15.get("coefficients").print_rows(16)
set_2_15_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = set_2_15_data.column_names() # get the name of the features
set_2_15_data['price'] = set_2['price'] # add price to the data since it's the target
model_set_2_15 = graphlab.linear_regression.create(
set_2_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_2_15_data['power_1'],set_2_15_data['price'],'.',
set_2_15_data['power_1'], model_set_2_15.predict(set_2_15_data),'-')
print "set_2"
model_set_2_15.get("coefficients").print_rows(16)
set_3_15_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = set_3_15_data.column_names() # get the name of the features
set_3_15_data['price'] = set_3['price'] # add price to the data since it's the target
model_set_3_15 = graphlab.linear_regression.create(
set_3_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_3_15_data['power_1'],set_3_15_data['price'],'.',
set_3_15_data['power_1'], model_set_3_15.predict(set_3_15_data),'-')
print "set_3"
model_set_3_15.get("coefficients").print_rows(16)
set_4_15_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = set_4_15_data.column_names() # get the name of the features
set_4_15_data['price'] = set_4['price'] # add price to the data since it's the target
model_set_4_15 = graphlab.linear_regression.create(
set_4_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_4_15_data['power_1'],set_4_15_data['price'],'.',
set_4_15_data['power_1'], model_set_4_15.predict(set_4_15_data),'-')
print "set_4"
model_set_4_15.get("coefficients").print_rows(16)
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
print model_set_1_15.get("coefficients")['value'][1]
print model_set_2_15.get("coefficients")['value'][1]
print model_set_3_15.get("coefficients")['value'][1]
print model_set_4_15.get("coefficients")['value'][1]
Explanation: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
End of explanation
set_1_15_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = set_1_15_data.column_names() # get the name of the features
set_1_15_data['price'] = set_1['price'] # add price to the data since it's the target
model_set_1_15 = graphlab.linear_regression.create(
set_1_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_1_15_data['power_1'],set_1_15_data['price'],'.',
set_1_15_data['power_1'], model_set_1_15.predict(set_1_15_data),'-')
print "set_1"
model_set_1_15.get("coefficients").print_rows(16)
set_2_15_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = set_2_15_data.column_names() # get the name of the features
set_2_15_data['price'] = set_2['price'] # add price to the data since it's the target
model_set_2_15 = graphlab.linear_regression.create(
set_2_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_2_15_data['power_1'],set_2_15_data['price'],'.',
set_2_15_data['power_1'], model_set_2_15.predict(set_2_15_data),'-')
print "set_2"
model_set_2_15.get("coefficients").print_rows(16)
set_3_15_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = set_3_15_data.column_names() # get the name of the features
set_3_15_data['price'] = set_3['price'] # add price to the data since it's the target
model_set_3_15 = graphlab.linear_regression.create(
set_3_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_3_15_data['power_1'],set_3_15_data['price'],'.',
set_3_15_data['power_1'], model_set_3_15.predict(set_3_15_data),'-')
print "set_3"
model_set_3_15.get("coefficients").print_rows(16)
set_4_15_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = set_4_15_data.column_names() # get the name of the features
set_4_15_data['price'] = set_4['price'] # add price to the data since it's the target
model_set_4_15 = graphlab.linear_regression.create(
set_4_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_4_15_data['power_1'],set_4_15_data['price'],'.',
set_4_15_data['power_1'], model_set_4_15.predict(set_4_15_data),'-')
print "set_4"
model_set_4_15.get("coefficients").print_rows(16)
Explanation: Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
print round(model_set_1_15.get("coefficients")['value'][1], 2)
print model_set_2_15.get("coefficients")['value'][1]
print model_set_3_15.get("coefficients")['value'][1]
print round(model_set_4_15.get("coefficients")['value'][1], 2)
Explanation: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
End of explanation
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
Explanation: Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>
Set aside segment 1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
train_valid_shuffled[0:10] # rows 0 to 9
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
start = (n*3)/k
end = (n*4)/k-1
validation4 = train_valid_shuffled[start:end+1]
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
End of explanation
print int(round(validation4['price'].mean(), 0))
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
train4 = train_valid_shuffled[0:start].append(train_valid_shuffled[end+1:n])
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
End of explanation
print int(round(train4['price'].mean(), 0))
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
trained_models_history = []
validation_rss_history = []
n = len(data)
# loop over the values of the k
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
# obtain validation and train set
validation = data[start:end+1]
train = data[0:start].append(data[end+1:n])
# train model on train data
model = graphlab.linear_regression.create(
train,
target = output_name,
features = features_list,
validation_set = None,
l2_penalty=l2_penalty,
verbose=False
)
trained_models_history.append(model)
# find validation error
prediction = model.predict(validation[features_list])
error = prediction - validation['price']
error_squared = error * error
rss = error_squared.sum()
#print "Fold " + str(i) + " validation rss = " + str(rss)
validation_rss_history.append(rss)
return trained_models_history, validation_rss_history
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
import numpy as np
np.logspace(1, 7, num=13)
poly15_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = train_valid_shuffled['price'] # add price to the data since it's the target
k = 10
validation_rss_avg_list = []
for l2_penalty in np.logspace(1, 7, num=13):
model_list, validation_rss_list = k_fold_cross_validation(k, l2_penalty, poly15_data, 'price', my_features)
validation_rss_avg_list.append(np.mean(validation_rss_list))
validation_rss_avg_list
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
End of explanation
print np.logspace(1, 7, num=13)[4]
Explanation: QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
End of explanation
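An alternative to hard-coding the index of the winning penalty is to read it off programmatically from the list of averaged validation errors (illustrative, same result):
# Programmatic alternative: pick the penalty with the smallest average validation RSS.
best_idx = int(np.argmin(validation_rss_avg_list))
print(np.logspace(1, 7, num=13)[best_idx])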
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(np.logspace(1, 7, num=13),validation_rss_avg_list,'k-')
plt.xlabel('$\ell_2$ penalty')
plt.ylabel('average 10-fold cross validation error')
plt.xscale('log')
plt.yscale('log')
Explanation: You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
# train model on train data
poly15_model = graphlab.linear_regression.create(
poly15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty= 1000
)
Explanation: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
End of explanation
round(103.090927005, 2)
# find test rss
poly15_test_data = polynomial_sframe(test['sqft_living'], 15)
prediction = poly15_model.predict(poly15_test_data)
error = prediction - test['price']
error_squared = error * error
rss = error_squared.sum()
print rss
Explanation: QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
End of explanation
<END_TASK> |
15,838 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using FISSA with SIMA
SIMA is a toolbox for motion correction and cell detection.
Here we illustrate how to create a workflow which uses SIMA to detect cells and FISSA to extract decontaminated signals from those cells.
Reference
Step1: Detecting cells with SIMA
Setup data
Step2: Run SIMA segmentation algorithm
Step3: Plot detected cells
Step4: Extract decontaminated signals with FISSA
FISSA needs either ImageJ ROIs or numpy arrays as inputs for the ROIs.
SIMA outputs ROIs as numpy arrays, and can be directly read into FISSA.
A given roi is given as
python
rois[i].coords[0][:, :2]
Step5: We can then run FISSA on the data using the ROIs supplied by SIMA having converted them to a FISSA-compatible format, rois_fissa.
Step6: Plotting the results | <ASSISTANT_TASK:>
Python Code:
# FISSA toolbox
import fissa
# SIMA toolbox
import sima
import sima.segment
# File operations
import glob
# For plotting our results, use numpy and matplotlib
import matplotlib.pyplot as plt
import numpy as np
Explanation: Using FISSA with SIMA
SIMA is a toolbox for motion correction and cell detection.
Here we illustrate how to create a workflow which uses SIMA to detect cells and FISSA to extract decontaminated signals from those cells.
Reference:
Kaifosh, P., Zaremba, J. D., Danielson, N. B., Losonczy, A. SIMA: Python software for analysis of dynamic fluorescence imaging data. Frontiers in neuroinformatics, 8(80), 2014. doi: 10.3389/fninf.2014.00080.
Please note that SIMA only supports Python 3.6 and below.
Import packages
End of explanation
# Define folder where tiffs are present
tiff_folder = "exampleData/20150529/"
# Find tiffs in folder
tiffs = sorted(glob.glob(tiff_folder + "/*.tif*"))
# define motion correction method
mc_approach = sima.motion.DiscreteFourier2D()
# Define SIMA dataset
sequences = [sima.Sequence.create("TIFF", tiff) for tiff in tiffs[:1]]
try:
dataset = sima.ImagingDataset(sequences, "example.sima")
except BaseException:
dataset = sima.ImagingDataset.load("example.sima")
Explanation: Detecting cells with SIMA
Setup data
End of explanation
stica_approach = sima.segment.STICA(components=2)
stica_approach.append(sima.segment.SparseROIsFromMasks())
stica_approach.append(sima.segment.SmoothROIBoundaries())
stica_approach.append(sima.segment.MergeOverlapping(threshold=0.5))
rois = dataset.segment(stica_approach, "auto_ROIs")
Explanation: Run SIMA segmentation algorithm
End of explanation
# Plotting lines surrounding each of the ROIs
plt.figure(figsize=(7, 6))
for roi in rois:
# Plot border around cell
plt.plot(roi.coords[0][:, 0], roi.coords[0][:, 1])
# Invert the y-axis because image co-ordinates are labelled from top-left
plt.gca().invert_yaxis()
plt.show()
Explanation: Plot detected cells
End of explanation
rois_fissa = [roi.coords[0][:, :2] for roi in rois]
rois[0].coords[0][:, :2].shape
Explanation: Extract decontaminated signals with FISSA
FISSA needs either ImageJ ROIs or numpy arrays as inputs for the ROIs.
SIMA outputs ROIs as numpy arrays, and can be directly read into FISSA.
A given roi is given as
python
rois[i].coords[0][:, :2]
FISSA expects rois to be provided as a list of lists
python
[[roiA1, roiA2, roiA3, ...]]
So some formatting will need to be done first.
End of explanation
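Purely as an illustration of that nesting (toy coordinates, not used in the rest of this notebook): each ROI is an array of outline vertices, the ROIs are collected in a list, and that list is wrapped in an outer list, mirroring the [rois_fissa] wrapping passed to FISSA in the next cell.
# Toy example of the expected ROI structure (illustrative values only).
roiA1 = np.array([[0., 0.], [0., 10.], [10., 10.], [10., 0.]])  # one ROI outline, shape (n_vertices, 2)
roiA2 = np.array([[20., 20.], [20., 30.], [30., 30.], [30., 20.]])
rois_example = [[roiA1, roiA2]]  # outer list wraps the set of ROIs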
output_folder = "fissa_sima_example"
experiment = fissa.Experiment(tiff_folder, [rois_fissa], output_folder)
experiment.separate()
Explanation: We can then run FISSA on the data using the ROIs supplied by SIMA having converted them to a FISSA-compatible format, rois_fissa.
End of explanation
# Fetch the colormap object for Cynthia Brewer's Paired color scheme
cmap = plt.get_cmap("Paired")
# Select which trial (TIFF index) to plot
trial = 0
# Plot the mean image and ROIs from the FISSA experiment
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
for i_roi in range(len(experiment.roi_polys)):
# Plot border around ROI
for contour in experiment.roi_polys[i_roi, trial][0]:
plt.plot(
contour[:, 1],
contour[:, 0],
color=cmap((i_roi * 2 + 1) % cmap.N),
)
plt.show()
# Plot all ROIs and trials
# Get the number of ROIs and trials
n_roi = experiment.result.shape[0]
n_trial = experiment.result.shape[1]
# Find the maximum signal intensities for each ROI
roi_max_raw = [
np.max([np.max(experiment.raw[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max_result = [
np.max([np.max(experiment.result[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max = np.maximum(roi_max_raw, roi_max_result)
# Plot our figure using subplot panels
plt.figure(figsize=(16, 10))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_trial * n_roi + i_roi
plt.subplot(n_trial, n_roi, i_subplot)
# Plot the data
plt.plot(
experiment.raw[i_roi][i_trial][0, :],
label="Raw (SIMA)",
color=cmap((i_roi * 2) % cmap.N),
)
plt.plot(
experiment.result[i_roi][i_trial][0, :],
label="FISSA",
color=cmap((i_roi * 2 + 1) % cmap.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_roi == 0:
plt.ylabel(
"Trial {}\n\nSignal intensity\n(candela per unit area)".format(
i_trial + 1
)
)
if i_trial == 0:
plt.legend()
plt.title("ROI {}".format(i_roi))
if i_trial == n_trial - 1:
plt.xlabel("Time (frame number)")
plt.show()
Explanation: Plotting the results
End of explanation
<END_TASK> |
15,839 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
WMI Eventing
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Look for WMI event filters registered
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Look for WMI event consumers registered
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Look for WMI consumers binding to filters
| Data source | Event Provider | Relationship | Event |
|
Step5: Analytic IV
Look for events related to the registration of FilterToConsumerBinding
| Data source | Event Provider | Relationship | Event |
| | <ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: WMI Eventing
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI eventing for persistence in my environment.
Technical Context
WMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM). Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise.
An example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.
At a high level, Microsoft's implementation of these standards can be summarized as follows: Managed Components - managed components are represented as WMI objects, i.e. class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.
Offensive Tradecraft
From an offensive perspective WMI has the ability to trigger off nearly any conceivable event, making it a good technique for persistence.
Three requirements
* Filter - An action to trigger off of
* Consumer - An action to take upon triggering the filter
* Binding - Registers a filter-to-consumer binding (__FilterToConsumerBinding)
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/03_persistence/SDWIN-190518184306.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip |
Analytics
Initialize Analytics Engine
End of explanation
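For context, the objects that Sysmon events 19/20/21 report on live in the root\subscription WMI namespace of the endpoint. A minimal enumeration sketch is given below; it assumes a Windows host with pywin32 installed and is an illustrative aside, not part of the Spark analytics pipeline in this notebook.
# Illustrative endpoint-side enumeration of WMI subscriptions (assumes Windows + pywin32; not run here).
import win32com.client
svc = win32com.client.GetObject(r"winmgmts:\\.\root\subscription")
for flt in svc.InstancesOf("__EventFilter"):
    print(flt.Name, "|", flt.Query)                           # registered filters (cf. Sysmon EID 19)
for consumer in svc.InstancesOf("CommandLineEventConsumer"):
    print(consumer.Name, "|", consumer.CommandLineTemplate)   # registered consumers (cf. EID 20)
for binding in svc.InstancesOf("__FilterToConsumerBinding"):
    print(binding.Filter, "->", binding.Consumer)             # bindings (cf. EID 21)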
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, EventNamespace, Name, Query
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 19
'''
)
df.show(10,False)
Explanation: Analytic I
Look for WMI event filters registered
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi filter | 19 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Name, Type, Destination
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 20
'''
)
df.show(10,False)
Explanation: Analytic II
Look for WMI event consumers registered
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi consumer | 20 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Operation, Consumer, Filter
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 21
'''
)
df.show(10,False)
Explanation: Analytic III
Look for WMI consumers binding to filters
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi subscription | 21 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Message
FROM mordorTable
WHERE Channel = "Microsoft-Windows-WMI-Activity/Operational"
AND EventID = 5861
'''
)
df.show(10,False)
Explanation: Analytic IV
Look for events related to the registration of FilterToConsumerBinding
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-WMI-Activity/Operational | Wmi subscription created | 5861 |
End of explanation
<END_TASK> |
15,840 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Outline
Glossary
8. Calibration
Previous
Step1: Import section specific modules
Step2: 8.1 Calibration as a Least Squares Problem <a id='cal
Step3: Figure 8.1.1
Step4: Our hour angle range is from -6h to 6h, and our declination is set to $60^{\circ}$.
As the earth rotates the antennas trace out $uv$-tracks (ellipses) as shown in the code fragment below, where the red tracks are due to baseline $pq$ and blue tracks are due to baseline $qp$. We can construct these $uv$-tracks by using Eq. 8.1 ⤵<!--\ref{cal
Step5: Figure 8.1.2
Step6: 8.1.2. Unpolarized Calibration <a id='cal
Step7: We now use create_vis_mat to create an example $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$. Note that
there are two sources in our sky model.
Step8: We now plot the baseline entries of $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$.
Step9: Figure 8.1.3
Step10: We are now able to define a wrapper function create_G_LM that in turn calls optimize.leastsq.
The wrapper function translates the calibration problem into a format that optimize.leastsq
can interpret. The input of create_G_LM is $\boldsymbol{\mathcal{D}}$ and $\boldsymbol{\mathcal{M}}$, while the output is $\mathbf{g}$ and $\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$.
Step11: We may now calibrate $\boldsymbol{\mathcal{D}}$ by using create_G_LM.
Step12: The above function works by vectorizing the real and imaginary part of $\boldsymbol{\mathcal{D}}$ and
storing the result in $\mathbf{d}$. The vector $\mathbf{m}$ is generated in a similar manner.
The error vector $\mathbf{r}$ is calculated by err_func. We initialize $\breve{\mathbf{g}}$ with
$\breve{\mathbf{g}}_0=[\mathbf{1},\mathbf{0}]$. We can then call
optimize.leastsq(err_func, g_0, args=(d, m)).
We can now calculate $\mathbf{g} = \breve{\mathbf{g}}_U+\imath\breve{\mathbf{g}}_L$ and
$\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$. This is repeated for each observational time-slot.
8.1.5 Corrected Visibilities <a id='cal
Step13: We plot the corrected visibilities below. Note that the model and corrected visibilities align well, implying that calibration was successfull. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
8. Calibration
Previous: 8. Calibration
Next: 8.2 1GC calibration
Format status:
<span style="background-color:green"> </span> : LF: 06-02-2017
<span style="background-color:green"> </span> : NC: 06-02-2017
<span style="background-color:green"> </span> : RF: 06-02-2017
<span style="background-color:green"> </span> : HF: 06-02-2017
<span style="background-color:green"> </span> : GM: 06-02-2017
<span style="background-color:green"> </span> : CC: 06-02-2017
<span style="background-color:green"> </span> : CL: 06-02-2017
<span style="background-color:green"> </span> : ST: 06-02-2017
<span style="background-color:green"> </span> : FN: 06-02-2017
<span style="background-color:green"> </span> : TC: 06-02-2017
<span style="background-color:green"> </span> : SP: 06-02-2017
<span style="background-color:green"> </span> : XX: 06-02-2017
Import standard modules:
End of explanation
from scipy import optimize
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
lam = 3e8/1.4e9 #observational wavelenth
print("lam = ",lam)
b = np.array([100,200,300])/lam
print("b [wavelengths] = ",b)
plt.plot(np.array([0,100,200]),np.array([0,0,0]),'ro')
plt.xlim([-250,250])
plt.ylim([-100,100])
plt.xlabel("East-West [m]", fontsize=18)
plt.ylabel("North-South [m]", fontsize=18)
plt.title("ENU-coordinates of three element interferometer.", fontsize=18)
plt.show()
Explanation: 8.1 Calibration as a Least Squares Problem <a id='cal:sec:cal_ls'></a> <!--\label{cal:sec:cal_ls}-->
In this section we discuss the procedure that is generally used in practice to perform calibration. We will use the unpolarized RIME in this section instead of the full-polarized RIME (see $\S$ 7 ➞). It provides us with a much simpler framework with which we can grasp the basics of calibration. Moreover, we assume for the sake of simplicity that the observed data are only corrupted by the instrument's antenna gains. This assumption results in a idealised treatment as there are many other factors that do in fact corrupt radio interferometric data (see $\S$ 7 ➞).
The unpolarized RIME is given by the following:
<p class=conclusion>
<font size=4> <b>Unpolarized RIME</b></font>
<br>
\begin{equation}
d_{pq}(t) = g_p(t) g_q^*(t) \tilde{d}_{pq}(t) + \epsilon_{pq}(t),
\end{equation}
</p>
where $d_{pq}(t)$ and $\tilde{d}_{pq}(t)$ denote the corrupted observed and uncorrupted visibility at time $t$ associated with baseline $pq$. Moreover, the factors $g_p$ and $g_q$
denote the complex gains of antennas $p$ and $q$. The term $\epsilon_{pq}$ is a zero mean (Gaussian)
noise term, representing thermal noise.
We begin this section by generating the $uv$-tracks of a fictitious instrument in $\S$ 8.1.1 ⤵<!--\ref{cal:sec:uv}-->. In $\S$ 8.1.2 ⤵<!--\ref{cal:sec:RIME_un}--> we phrase the calibration problem (for the antenna gains) as a least squares minimization problem. Then in $\S$ 8.1.3 ⤵<!--\ref{cal:sec:sim}--> we simulate "realistic" visibility data for the $uv$-tracks by including gain errors and adding noise to the resulting visibilities (similar to adding noise to a simple sinusoid as seen in $\S$ 2.11 ➞). We then vectorize the problem in $\S$ 8.1.4 ⤵<!--\ref{cal:sec:LM}-->, enabling us to use the built in scipy Levenberg-Marquardt algorithm to calibrate the data produced in $\S$ 8.1.3 ⤵<!--\ref{cal:sec:sim}-->. We implement the aforementioned steps via a wrapper ipython function called create_G_LM. We finish $\S$ 8.1.4 ⤵<!--\ref{cal:sec:LM}--> by using create_G_LM to estimate the antenna gains corrupting the simulated data we produced in $\S$ 8.1.3 ⤵<!--\ref{cal:sec:sim}-->. The estimated antenna gains are then used to correct the corrupted data in $\S$ 8.1.5 ⤵<!--\ref{cal:sec:cor}-->.
8.1.1 Creating $uv$-Tracks: East-West Interferometer <a id='cal:sec:uv'></a> <!--\label{cal:sec:uv}-->
We know from $\S$ 4.4.1.B.3 ➞ that when we work with an east-west interferometer things simplify to a large degree. Firstly: $XYZ = [0~|\mathbf{b}|~0]^T$, where $|\mathbf{b}|$ is the baseline length.
Moreover, we have that:
<p class=conclusion>
<font size=4> <b>$uv$-Coverage of an EW-array (8.1)</b></font>
<br>
\begin{eqnarray}
\\
u &=&| \mathbf{b}|\cos H\\
v &=& |\mathbf{b}|\sin H \sin \delta,
\end{eqnarray}
</p>
<a id='cal:eq:uv_cov'></a> <!--\label{cal:eq:uv_cov}-->
where $H$ is the hour angle of the field center and $\delta$ its declination. In this section we will be plotting the $uv$-coverage of a three element east-west interferometer.
The ENU layout of a simple interferometer is given below. Note that $|\mathbf{b}|$ is measured in wavelengths.
Now consider an array made up of three antennas situated at 0, 100, 200 meters east of
the array center as shown in the code fragment below.
End of explanation
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
delta = 60*(np.pi/180) #Declination in radians
Explanation: Figure 8.1.1: <span style="background-color:cyan">AJR:NC: This figure needs a caption</span>
We first need to set the hour angle range of our observation and the declination of our field center.
End of explanation
u = np.zeros((len(b),len(H)))
v = np.zeros((len(b),len(H)))
for k in range(len(b)):
u[k,:] = b[k]*np.cos(H)
v[k,:] = b[k]*np.sin(H)*np.sin(delta)
plt.plot(u[k,:],v[k,:],"r")
plt.plot(-u[k,:],-v[k,:],"b")
plt.xlabel("$u$ [rad$^{-1}$]", fontsize=18)
plt.ylabel("$v$ [rad$^{-1}$]", fontsize=18)
plt.title("$uv$-Coverage of three element interferometer", fontsize=18)
plt.show()
Explanation: Our hour angle range is from -6h to 6h, and our declination is set to $60^{\circ}$.
As the earth rotates the antennas trace out $uv$-tracks (ellipses) as shown in the code fragment below, where the red tracks are due to baseline $pq$ and blue tracks are due to baseline $qp$. We can construct these $uv$-tracks by using Eq. 8.1 ⤵<!--\ref{cal:eq:uv_cov}-->.
End of explanation
u_m = np.zeros((len(b),len(b),len(H)))
v_m = np.zeros((len(b),len(b),len(H)))
u_m[0,1,:] = u[0,:] #the first two entries denote p and q and the third index denotes time
u_m[1,2,:] = u[1,:]
u_m[0,2,:] = u[2,:]
v_m[0,1,:] = v[0,:]
v_m[1,2,:] = v[1,:]
v_m[0,2,:] = v[2,:]
Explanation: Figure 8.1.2: <span style="background-color:cyan">AJR:NC: This figure needs a caption</span>
We can also pack the $uv$-coverage into a 2D-matrix. We denote the rows of this matrix with $p$ and the columns with $q$. The $pq$-th entry denotes the $uv$-track associated with baseline $pq$. The reason for packing the visibilities into a 2D structure will become apparent in Sec. 8.1.2 ⤵<!--\ref{cal:sec:RIME_un}-->.
End of explanation
'''Creates the observed visibilities
point_sources - skymodel of point sources - (amplitude, l, m)
u_m - the u coordinates of observation (packed in a 2D structure)
v_m - the v coordinates of observation (packed in a 2D structure)
g - the antenna gain error vector
sig - the noise
'''
def create_vis_mat(point_sources,u_m,v_m,g,sig):
    D = np.zeros(u_m.shape, dtype=complex) # (N,N,t) grid matching the packed uv-coordinates passed in
G = np.diag(g)
#Step 1: Create Model Visibility Matrix
for k in range(len(point_sources)): #for each point source
l_0 = point_sources[k,1]
m_0 = point_sources[k,2]
D = D + point_sources[k,0]*np.exp(-2*np.pi*1j*(u_m*l_0+v_m*m_0))
for t in range(D.shape[2]): #for each time-step
#Step 2: Corrupting the Visibilities
D[:,:,t] = np.dot(G,D[:,:,t])
D[:,:,t] = np.dot(D[:,:,t],G.conj())
#Step 3: Adding Noise
D[:,:,t] = D[:,:,t] + sig*np.random.randn(u_m.shape[0],u_m.shape[1]) \
+ sig*np.random.randn(u_m.shape[0],u_m.shape[1])*1j
return D
Explanation: 8.1.2. Unpolarized Calibration <a id='cal:sec:RIME_un'></a> <!--\label{cal:sec:RIME_un}-->
As explained in $\S$ 7.2 ➞ the RIME assumes that our observed signal is polarized. For the sake
of simplicity, however, we will now introduce the calibration problem with the underlying assumption that the observed signal is unpolarized. Unpolarized calibration is achieved by solving the following minimization problem:
<p class=conclusion>
<font size=4> <b>Unpolarized Calibration</b></font>
<br>
\begin{equation}
\min_{\boldsymbol{\mathcal{G}}} \left \| \boldsymbol{\mathcal{D}} - \boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H \right \|,
\end{equation}
</p>
where
* $\boldsymbol{\mathcal{D}}$ is the observed visibility matrix. Each entry, which we denote by $d_{pq}$, represents the visibility measured by the baseline formed by antennas $p$ and $q$.
* $\boldsymbol{\mathcal{M}}$ is the model visibility matrix. The entry $m_{pq}$ of $\boldsymbol{\mathcal{M}}$ denotes a true or model visibility which was created with the calibration sky model and a $uv$-point on the $uv$-track associated with baseline $pq$.
* $\boldsymbol{\mathcal{G}} = \textrm{diag}(\mathbf{g})$ is the antenna gain matrix, where $\mathbf{g}=[g_1,g_2,\cdots,g_N]^T$ denotes the antenna gain vector. The operator $\textrm{diag}(\cdot)$ forms a diagonal matrix from a vector by putting the elements of the vector on the main diagonal. The vector $\mathbf{g}$ represents the instrumental response of the antennas, i.e. the complex antenna gains. These antenna gains are chosen in such a way that they minimize the difference between the observed and model visibilities.
* $\boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H$ is the predicted visibility matrix. This matrix contains the model visibilities after the antenna gains have been applied to them.
The superscript $(\cdot)^H$ denotes the Hermitian or conjugate transpose and $\left \| \cdot \right \|$ denotes the norm used. Most calibration algorithms use the Frobenius norm for matrices and the 2-norm or Euclidean norm for vectors, thus treating calibration as a least squares problem.<br><br>
<div class=warn>
<b>Warning:</b> Do not get confused with the polarized and unpolarized RIME. We use
the notation $\mathbf{V}_{pq}\in\mathbb{C}^{2\times 2}$ to denote the observed correlation matrix corresponding to the antenna feeds $XX,YY,XY$ and $YX$ of antenna $p$ and $q$. We use the notation $\boldsymbol{\mathcal{D}}\in\mathbb{C}^{N\times N}$ to denote the unpolarized observed visibility matrix which contains the observed scalar visibilities of all the antenna pairs.
</div>
<br><br>
<div class=advice>
<b>Advice:</b> The unpolarized calibration equation above is equivalent to the following more familiar form: $\min_{\boldsymbol{g}}\sum_{pq}\left|d_{pq}-g_pg_q^*m_{pq}\right|^2$.
</div>
<br>
8.1.3. Creating an Unpolarized Visibility Matrix (create_vis_mat) <a id='cal:sec:sim'></a> <!--\label{cal:sec:sim}-->
In this section we present a function that allows us to create the observed visibility matrix $\boldsymbol{\mathcal{D}}$ and
the model visibility matrix $\boldsymbol{\mathcal{M}}$. The function employs three separate
steps to produce a visibility matrix, namely:
We first take the Fourier transform of the sky model and then sample the result using the
sampling function (i.e. $uv$-coverage). The sky model can only consist of point sources. Mathematically we may represent our sky model as $I(l,m) = \sum_k A_k\delta(l-l_k,m-m_k)$, where $A_k$ denotes the flux of the $k$-th source and $(l_k,m_k)$ denotes the direction cosine position vector that is associated with the $k$-th source. We then have that
$V(u,v) = \mathscr{F}\{I(l,m)\} = \sum_k A_k e^{-2\pi \imath (l_k u + m_k v)}$, where $\mathscr{F}\{\cdot\}$ denotes the Fourier transform of its operand. This result stems from the fact that the Fourier transform of a delta function is a complex exponential. If we now apply the sampling function we finally obtain $V_{pq}(u_{pq},v_{pq}) = \sum_k A_k e^{-2\pi \imath (l_k u_{pq} + m_k v_{pq})}$. We now use $V_{pq}$ to construct a 2D model visibility matrix. The sky model is passed to the function via the variable point_sources. The sampling function is passed in via u_m and v_m.
We then corrupt the visibilities with the antenna gains that were passed into the function via g. We use g to construct $\boldsymbol{\mathcal{G}}$. We corrupt our visibilities by multiplying by $\boldsymbol{\mathcal{G}}$ on the left of the model visibility matrix and on the right by $\boldsymbol{\mathcal{G}}^H$.
The last step is to add some noise to our visibilities. The standard deviation of the noise is passed in via sig.
It should now be obvious how we can use the same function to produce both $\boldsymbol{\mathcal{M}}$ and
$\boldsymbol{\mathcal{D}}$. In the case of $\boldsymbol{\mathcal{M}}$, we do not corrupt our visibilities, nor add any noise. See the function create_vis_mat below.
End of explanation
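Before building the matrices, it may help to verify numerically (toy values only, an illustration rather than part of the simulation) that the Frobenius-norm objective and the familiar per-baseline sum agree when the sum runs over all entries of the matrices:
# Illustrative check: ||D - G M G^H||_F^2 equals sum_pq |d_pq - g_p g_q^* m_pq|^2 (all entries).
rng = np.random.RandomState(0)
N_chk = 3
g_chk = rng.randn(N_chk) + 1j*rng.randn(N_chk)
M_chk = rng.randn(N_chk, N_chk) + 1j*rng.randn(N_chk, N_chk)
D_chk = rng.randn(N_chk, N_chk) + 1j*rng.randn(N_chk, N_chk)
G_chk = np.diag(g_chk)
frob = np.linalg.norm(D_chk - G_chk.dot(M_chk).dot(G_chk.conj().T))**2
scal = sum(abs(D_chk[p, q] - g_chk[p]*np.conj(g_chk[q])*M_chk[p, q])**2
           for p in range(N_chk) for q in range(N_chk))
print(np.allclose(frob, scal))  # expect True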
point_sources = np.array([(1,0,0),(0.5,(1*np.pi)/180,(0*np.pi)/180)]) #l and m are measures in radians
g = np.array([1.2+1.3j,1.1-1.5j,-1.3+0.7j])
sig = 0.1
D = create_vis_mat(point_sources,u_m,v_m,g,sig) #we corrupt our data and we add noise
M = create_vis_mat(point_sources,u_m,v_m,np.array([1,1,1]),0) #no corruption and no noise
Explanation: We now use create_vis_mat to create an example $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$. Note that
there are two sources in our sky model.
End of explanation
fig = plt.figure()
timeslots = np.cumsum(np.ones((len(M[0,1,:]),)))
#We only plot the real part of visibilities
#Plotting Baseline 01
ax = plt.subplot("311")
ax.set_title("$m_{01}$ (blue) and $d_{01}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,1,:].real)
ax.plot(timeslots,D[0,1,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 02
ax = plt.subplot("312")
ax.set_title("$m_{02}$ (blue) and $d_{02}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,2,:].real)
ax.plot(timeslots,D[0,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 12
ax = plt.subplot("313")
ax.set_title("$m_{12}$ (blue) and $d_{12}$ (green)", fontsize=18)
ax.plot(timeslots,M[1,2,:].real)
ax.plot(timeslots,D[1,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
plt.tight_layout()
plt.show()
Explanation: We now plot the baseline entries of $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$.
End of explanation
'''Unpolarized direction independent calibration entails finding the G that minimizes ||R-GMG^H||.
This function evaluates D-GMG^H.
g is a vector containing the real and imaginary components of the antenna gains.
d is a vector containing a vectorized D (observed visibilities), real and imaginary.
m is a vector containing a vectorized M (predicted), real and imaginary.
r is a vector containing the residuals.
'''
def err_func(g,d,m):
Nm = len(d)//2
N = len(g)//2
G = np.diag(g[0:N]+1j*g[N:])
D = np.reshape(d[0:Nm],(N,N))+np.reshape(d[Nm:],(N,N))*1j #matrization
M = np.reshape(m[0:Nm],(N,N))+np.reshape(m[Nm:],(N,N))*1j
T = np.dot(G,M)
T = np.dot(T,G.conj())
R = D - T
r_r = np.ravel(R.real) #vectorization
r_i = np.ravel(R.imag)
r = np.hstack([r_r,r_i])
return r
Explanation: Figure 8.1.3: <span style="background-color:cyan">AJR:NC: This figure needs a caption</span>
The images above contain the real part of the corrupted (green) and uncorrupted (blue)
visibilities as a function of timeslots for baseline 01, 02 and 12 respectively.
8.1.4 Levenberg-Marquardt (create_G_LM) <a id='cal:sec:LM'></a> <!--\label{cal:sec:LM}-->
We are now ready to use least squares to calibrate $\boldsymbol{\mathcal{D}}$ (see <cite data-cite='Yatawatta2012'>GPU accelerated nonlinear optimization in radio interferometric calibration</cite> ⤴).
We first present a brief review of least squares minimization. Suppose we wish to fit a model $\mathbf{f}\left( \mathbf{m},\breve{\mathbf{g}}\right)$, where $\mathbf{m}$ and $\breve{\mathbf{g}}$ denote
the model input values and a vector of unknown variables respectively, to some data $\left\{\mathbf{d}_{i},\mathbf{m}_{i}\right\}$. The vector of unknown variables $\breve{\mathbf{g}}$ parametrizes the model. A standard method for determining which parameter vector $\breve{\mathbf{g}}$ best fits the data is to minimize the sum of the squared residuals. This technique is referred to as least squares minimization. The residual vector is denoted by $\mathbf{r}(\mathbf{m},\mathbf{d},\breve{\mathbf{g}}) = \mathbf{d} - \mathbf{f}\left( \mathbf{m},\breve{\mathbf{g}}\right)$. The objective function (the function we wish to minimize) associated with least squares is: $\sum_i \mathbf{r}_i^2$. The function optimize.leastsq is scipy's built-in least squares solver and employs the Levenberg-Marquardt algorithm in the background. The Levenberg-Marquardt algorithm is discussed in more detail in $\S$ 2.11 ➞. To use optimize.leastsq one needs a function, here called err_func, that calculates the residual vector $\mathbf{r}$. An initial guess of the parameter vector $\breve{\mathbf{g}}$ is also required.
For calibration the above variables become:
<p class=conclusion>
<font size=4> <b>Vectorizing</b></font>
<br>
<br>
• $\mathbf{d} = [\textrm{vec}(\Re\{\boldsymbol{\mathcal{D}}\}),\textrm{vec}(\Im\{\boldsymbol{\mathcal{D}}\})]$ <br><br>
• $\mathbf{m} = [\textrm{vec}(\Re\{\boldsymbol{\mathcal{M}}\}),\textrm{vec}(\Im\{\boldsymbol{\mathcal{M}}\})]$ <br><br>
• $\breve{\mathbf{g}} = [\Re\{\mathbf{g}\},\Im\{\mathbf{g}\}]$ <br><br>
• $\mathbf{f}\left(\mathbf{m},\breve{\mathbf{g}}\right) = [\textrm{vec}(\Re\{\boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H\}),\textrm{vec}(\Im\{\boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H\})]$, where
$\boldsymbol{\mathcal{M}} = \textrm{vec}^{-1}(\mathbf{m}_U)+\imath\textrm{vec}^{-1}(\mathbf{m}_L)$ and $\boldsymbol{\mathcal{G}} = \textrm{diag}(\breve{\mathbf{g}}_U)+\imath\textrm{diag}(\breve{\mathbf{g}}_L)$
</p>
In the above bullets $\textrm{vec}(\cdot)$, $\textrm{vec}^{-1}(\cdot)$, $(\cdot)_U$,
and $(\cdot)_L$ denote vectorization, matrization, the upper half of
a vector and the lower half of a vector respectively. Moreover, $\Re{\cdot}$ and $\Im{\cdot}$ denote the real and imaginary part of their operands.
The first thing we need to define in order to perform calibration by using optimize.leastsq is the function err_func, which we do below.
End of explanation
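For readers less familiar with the optimize.leastsq calling convention, here is a tiny self-contained illustration (a toy straight-line fit, unrelated to the visibility data): the solver receives a residual-returning callable, an initial guess, and extra arguments.
# Toy example of the optimize.leastsq interface used in this section.
x_toy = np.linspace(0, 1, 20)
y_toy = 2.0*x_toy + 1.0 + 0.05*np.random.randn(20)
def toy_residual(p, x, y):
    return y - (p[0]*x + p[1])  # residual vector for the model y = p[0]*x + p[1]
p_fit, flag = optimize.leastsq(toy_residual, np.array([1.0, 0.0]), args=(x_toy, y_toy))
print(p_fit)  # should be close to [2.0, 1.0]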
'''This function finds argmin G ||D-GMG^H|| using Levenberg-Marquardt. It uses scipy's optimize.leastsq to perform
the actual minimization.
D is your observed visibilities matrix.
M is your predicted visibilities.
g the antenna gains.
G = gg^H.'''
def create_G_LM(D,M):
N = D.shape[0] #number of antennas
temp =np.ones((D.shape[0],D.shape[1]) ,dtype=complex)
G = np.zeros(D.shape,dtype=complex)
g = np.zeros((D.shape[0],D.shape[2]),dtype=complex)
for t in range(D.shape[2]): #perform calibration per time-slot
g_0 = np.ones((2*N,)) # first antenna gain guess
g_0[N:] = 0
d_r = np.ravel(D[:,:,t].real) #vectorization of observed + seperating real and imag
d_i = np.ravel(D[:,:,t].imag)
d = np.hstack([d_r,d_i])
m_r = np.ravel(M[:,:,t].real) #vectorization of model + seperating real and imag
m_i = np.ravel(M[:,:,t].imag)
m = np.hstack([m_r,m_i])
g_lstsqr_temp = optimize.leastsq(err_func, g_0, args=(d, m))
g_lstsqr = g_lstsqr_temp[0]
G_m = np.dot(np.diag(g_lstsqr[0:N]+1j*g_lstsqr[N:]),temp)
G_m = np.dot(G_m,np.diag((g_lstsqr[0:N]+1j*g_lstsqr[N:]).conj()))
g[:,t] = g_lstsqr[0:N]+1j*g_lstsqr[N:] #creating antenna gain vector
G[:,:,t] = G_m
return g,G
Explanation: We are now able to define a wrapper function create_G_LM that in turn calls optimize.leastsq.
The wrapper function translates the calibration problem into a format that optimize.leastsq
can interpret. The input of create_G_LM is $\boldsymbol{\mathcal{D}}$ and $\boldsymbol{\mathcal{M}}$, while the output is $\mathbf{g}$ and $\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$.
End of explanation
glm,Glm = create_G_LM(D,M)
Explanation: We may now calibrate $\boldsymbol{\mathcal{D}}$ by using create_G_LM.
End of explanation
R_c = Glm**(-1)*D
Explanation: The above function works by vectorizing the real and imaginary part of $\boldsymbol{\mathcal{D}}$ and
storing the result in $\mathbf{d}$. The vector $\mathbf{m}$ is generated in a similar manner.
The error vector $\mathbf{r}$ is calculated by err_func. We initialize $\breve{\mathbf{g}}$ with
$\breve{\mathbf{g}}_0=[\mathbf{1},\mathbf{0}]$. We can then call
optimize.leastsq(err_func, g_0, args=(d, m)).
We can now calculate $\mathbf{g} = \breve{\mathbf{g}}_U+\imath\breve{\mathbf{g}}_L$ and
$\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$. This is repeated for each observational time-slot.
8.1.5 Corrected Visibilities <a id='cal:sec:cor'></a> <!--\label{cal:sec:cor}-->
Before imaging, we have to correct our observed visibilities by removing the effect that the antenna gains had on the observed visibilities. This can be accomplished by using
<p class=conclusion>
<font size=4> <b>Correcting Visibilities</b></font>
<br>
\begin{equation}
\boldsymbol{\mathcal{D}}^\mathrm{(c)} = \boldsymbol{\mathcal{G}}^{-1}\boldsymbol{\mathcal{D}}\boldsymbol{\mathcal{G}}^{-H} = \boldsymbol{\boldsymbol{\mathscr{G}}}^{\odot-1}\odot\boldsymbol{\mathcal{D}},
\end{equation}
</p>
<br>
where
$\boldsymbol{\mathcal{D}}^\mathrm{(c)}$ is the corrected visibility matrix.
$\boldsymbol{\mathscr{G}}^{\odot-1}$ denotes the visibility calibration matrix, which is computed by taking the Hadamard (element-wise) inverse of $\boldsymbol{\mathscr{G}}$.
The superscript $(\cdot)^{-1}$ denotes matrix inversion, while $(\cdot)^{-H}$ denotes the inverse of the Hermitian transpose. The operator $\odot$ denotes the Hadamard product.
We calculate the corrected visibilities below.<br><br>
<div class=advice>
<b>Advice:</b> The matrix and vector operations (like $\odot$) and operators used in this section are discussed in more detail in [$\S$ 2.10 ➞](../2_Mathematical_Groundwork/2_10_linear_algebra.ipynb)
</div>
End of explanation
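As a quick numerical illustration of the identity above (toy values, not the calibration results), applying the inverse gain matrices on both sides is indeed the same as multiplying element-wise by the Hadamard inverse of $\mathbf{g}\mathbf{g}^H$:
# Illustrative check: G^-1 D G^-H equals the element-wise product of (g g^H)^(Hadamard -1) with D.
g_demo = np.array([1.2+1.3j, 1.1-1.5j, -1.3+0.7j])
G_demo = np.diag(g_demo)
D_demo = np.random.randn(3, 3) + 1j*np.random.randn(3, 3)
lhs = np.linalg.inv(G_demo).dot(D_demo).dot(np.linalg.inv(G_demo.conj().T))
rhs = (np.outer(g_demo, g_demo.conj()))**(-1) * D_demo
print(np.allclose(lhs, rhs))  # expect True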
fig = plt.figure()
timeslots = np.cumsum(np.ones((len(M[0,1,:]),)))
#We only plot the real part of visibilities
#Plotting Baseline 01
ax = plt.subplot("311")
ax.set_title("$m_{01}$ (blue) and $d_{01}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,1,:].real)
ax.plot(timeslots,R_c[0,1,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 02
ax = plt.subplot("312")
ax.set_title("$m_{02}$ (blue) and $d_{02}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,2,:].real)
ax.plot(timeslots,R_c[0,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 12
ax = plt.subplot("313")
ax.set_title("$m_{12}$ (blue) and $d_{12}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[1,2,:].real)
ax.plot(timeslots,R_c[1,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
plt.tight_layout()
plt.show()
Explanation: We plot the corrected visibilities below. Note that the model and corrected visibilities align well, implying that calibration was successful.
End of explanation
<END_TASK> |
15,841 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a id="top"></a>
Cloud Statistics
<hr>
Notebook Summary
This notebook explores Landsat 7 and Landsat 8 Data Cubes and reports cloud statistics
for selected regions within a cube. This is valuable information for performing analyses.
For example, if there are extensive clouds for a season it may significantly impact the
mosaic product or index values. Another example is that a user may want to find a single
date when there are few clouds to assess land features.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platforms and Products
Get the Extents of the Cube
Define the Extents of the Analysis
Calculate the Cloud Coverage Percentage for Each Pixel
Create a Table of Cloud Coverage Percentage for Each Date
Create a Plot of Cloud Coverage Percentage for Each Date
Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period
Review an RGB Scene for a Selected Time Slice
<span id="import">Import Dependencies and Connect to the Data Cube ▴</span>
Step1: <span id="plat_prod">Choose Platforms and Products ▴</span>
List available products for each platform
Step2: Choose products
<p style="color
Step3: <span id="extents">Get the Extents of the Cube ▴</span>
Step4: Visualize the available area
Step5: <span id="define_extents">Define the Extents of the Analysis ▴</span>
<p style="color
Step6: Visualize the selected area
Step7: <span id="calc_cloud_coverage">Calculate the Cloud Coverage Percentage for Each Pixel▴</span>
Step8: <span id="create_cloud_cov_table">Create a Table of Cloud Coverage Percentage for Each Date▴</span>
Step9: <span id="plot_cloud_cov">Create a Plot of Cloud Coverage Percentage for Each Date▴</span>
Step10: <span id="pct_clear_img">Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period▴</span>
Step11: <span id="rgb_time_slice">Review an RGB Scene for a Selected Time Slice▴</span>
Step12: <p style="color | <ASSISTANT_TASK:>
Python Code:
# Enable importing of utilities.
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import numpy as np
import xarray as xr
import pandas as pd
import matplotlib.pyplot as plt
# Load Data Cube Configuration
import datacube
import utils.data_cube_utilities.data_access_api as dc_api
api = dc_api.DataAccessApi()
dc = api.dc
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
Explanation: <a id="top"></a>
Cloud Statistics
<hr>
Notebook Summary
This notebook explores Landsat 7 and Landsat 8 Data Cubes and reports cloud statistics
for selected regions within a cube. This is valuable information for performing analyses.
For example, if there are extensive clouds for a season it may significantly impact the
mosaic product or index values. Another example is that a user may want to find a single
date when there are few clouds to assess land features.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platforms and Products
Get the Extents of the Cube
Define the Extents of the Analysis
Calculate the Cloud Coverage Percentage for Each Pixel
Create a Table of Cloud Coverage Percentage for Each Date
Create a Plot of Cloud Coverage Percentage for Each Date
Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period
Review an RGB Scene for a Selected Time Slice
<span id="import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
# Get available products
products_info = dc.list_products()
# List LANDSAT 7 products
print("LANDSAT 7 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_7"]
# List LANDSAT 8 products
print("LANDSAT 8 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_8"]
Explanation: <span id="plat_prod">Choose Platforms and Products ▴</span>
List available products for each platform
End of explanation
# These are the platforms (satellites) and products (datacube sets)
# used for this demonstration. Uncomment only 1 set.
platform = 'LANDSAT_8'
product = 'ls8_usgs_sr_scene'
collection = 'c1'
level = 'l2'
# platform = 'LANDSAT_8'
# product = 'ls8_l2_c2'
# collection = 'c2'
# level = 'l2'
band_no_data_values = dc.list_measurements().loc[product, 'nodata']
Explanation: Choose products
<p style="color:red";><b>CHANGE INPUTS BELOW
End of explanation
from utils.data_cube_utilities.dc_load import get_product_extents
from utils.data_cube_utilities.dc_time import dt_to_str
full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)
# Print the extents of the data.
print("Latitude Extents:", full_lat)
print("Longitude Extents:", full_lon)
print("Time Extents:", list(map(dt_to_str, (min_max_dates[0], min_max_dates[1]))))
Explanation: <span id="extents">Get the Extents of the Cube ▴</span>
End of explanation
from utils.data_cube_utilities.dc_display_map import display_map
display_map(full_lat, full_lon)
Explanation: Visualize the available area
End of explanation
# Select an analysis region (Lat-Lon) within the extents listed above.
# Select a time period (Min-Max) within the extents listed above (Year-Month-Day)
# This region and time period will be used for the cloud assessment
# Nairobi, Kenya
latitude = (-1.3407, -1.2809)
longitude = (36.7640, 36.9206)
# Mombasa, Kenya
# latitude = (-4.12, -3.975)
# longitude = (39.55, 39.7)
# Mau Forest - Western Kenya
# latitude = (-0.13406, 0.21307)
# longitude = (35.28322, 35.56681)
# Dar es Salaam, Tanzania
# latitude = (-7.0, -6.7)
# longitude = (39.1, 39.4)
# Lake Sulunga, Tanzania
# latitude = (-6.2622, -5.8822)
# longitude = (34.9802, 35.3602)
# Freetown, Sierra Leone
# latitude = (8.3267, 8.5123)
# longitude = (-13.3109, -13.1197 )
# Vietnam
# latitude = (10.9358, 11.0358)
# longitude = (107.1899, 107.2899)
# Ghanas
# latitude = (5.5, 5.7) # Accra
# longitude = (-0.4, 0.0) # Accra
# Time Period
time_extents = ('2016-01-01', '2016-01-31')
Explanation: <span id="define_extents">Define the Extents of the Analysis ▴</span>
<p style="color:red";><b>CHANGE INPUTS BELOW
End of explanation
display_map(latitude,longitude)
Explanation: Visualize the selected area
End of explanation
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_invalid, landsat_qa_clean_mask
def build_cloud_coverage_table_landsat(product,
platform,
collection,
level,
latitude,
longitude,
time = None,
dc = None,
extra_band = 'green',
band_no_data_values = None):
dc = dc if dc is not None else datacube.Datacube(app = "")
load_params = dict(platform=platform,
product=product,
latitude = latitude,
longitude = longitude,
measurements = [extra_band, 'pixel_qa'],
group_by='solar_day')
if time is not None:
load_params["time"] = time
landsat_dataset = dc.load(**load_params)
clean_mask = landsat_qa_clean_mask(landsat_dataset, platform=platform,
collection=collection, level=level) & \
landsat_clean_mask_invalid(landsat_dataset, platform, collection, level)
data_mask = xr.full_like(clean_mask, True)
if band_no_data_values is not None:
for data_var_name in landsat_dataset.data_vars:
band_data_mask = landsat_dataset[data_var_name] != band_no_data_values[data_var_name]
data_mask = data_mask & band_data_mask
clean_data_mask = clean_mask & data_mask
landsat_dataset = landsat_dataset.where(clean_data_mask)
times = list(landsat_dataset.time.values)
scene_slice_list = list(map(lambda t: landsat_dataset.sel(time = str(t)), times))
clean_data_mask_list = [clean_data_mask.sel(time=str(time)).values for time in clean_data_mask.time.values]
# Calculate the percentage of all pixels which are not cloud.
    percentage_list = [mask_t.mean()*100 for mask_t in clean_data_mask_list]
clean_pixel_count_list = list(map(np.sum, clean_data_mask_list))
data = {"times": times,
"clean_percentage": percentage_list,
"clean_count": clean_pixel_count_list }
return landsat_dataset, pd.DataFrame(data=data, columns=["times", "clean_percentage", "clean_count"]), \
clean_mask, data_mask, clean_data_mask
extra_band = 'green'
landsat_dataset, coverage_table, clean_mask, data_mask, clean_data_mask = \
build_cloud_coverage_table_landsat(product = product,
platform = platform,
collection = collection,
level = level,
latitude = latitude,
longitude = longitude,
time = time_extents,
extra_band=extra_band,
band_no_data_values=band_no_data_values)
Explanation: <span id="calc_cloud_coverage">Calculate the Cloud Coverage Percentage for Each Pixel▴</span>
End of explanation
pd.set_option('display.max_rows', len(coverage_table))
coverage_table
Explanation: <span id="create_cloud_cov_table">Create a Table of Cloud Coverage Percentage for Each Date▴</span>
End of explanation
plt.figure(figsize = (15,5))
plt.plot(coverage_table["times"].values, coverage_table["clean_percentage"].values, 'bo', markersize=8)
plt.title("Percentage of Clean (not cloud) Pixels for Each Time Slice")
plt.show()
Explanation: <span id="plot_cloud_cov">Create a Plot of Cloud Coverage Percentage for Each Date▴</span>
End of explanation
# We are really plotting the fraction of times that are not no_data which are clear.
# This is done to account for regions filled with no_data - such as when querying across path/rows.
clear_and_data_per_px = clean_data_mask.sum(dim='time')
data_per_px = data_mask.sum(dim='time')
frac_clear_per_data_per_px = clear_and_data_per_px / data_per_px
num_cbar_ticks = 8 # The number of ticks to use for the colorbar.
quad_mesh = (frac_clear_per_data_per_px).plot(figsize=(12,10),cmap = "RdYlGn", vmin=0, vmax=1)
plt.show()
print("Percent of pixels with data: {:.2%}".format(data_mask.mean().values))
print("Percent of pixels that are clear: {:.2%}".format(clean_mask.mean().values))
print("Percent of pixels that are clear and have data: {:.2%}".format(clean_data_mask.mean().values))
(frac_clear_per_data_per_px == 0).sum() / frac_clear_per_data_per_px.count()
print("Number of pixels which have no non-cloud data:", (frac_clear_per_data_per_px == 0).sum().values)
print("Total number of pixels:", frac_clear_per_data_per_px.count().values)
Explanation: <span id="pct_clear_img">Create an Image of the Percent of Clear Views Per Pixel for the Entire Time Period▴</span>
End of explanation
# Load the data to create an RGB image
landsat_dataset = dc.load(latitude = latitude,
longitude = longitude,
platform = platform,
time = time_extents,
product = product,
measurements = ['red', 'green', 'blue'],
group_by='solar_day')
Explanation: <span id="rgb_time_slice">Review an RGB Scene for a Selected Time Slice▴</span>
End of explanation
from utils.data_cube_utilities.dc_rgb import rgb
# CHANGE HERE >>>>>>>>>>>>>>
time_ind = 0 # The acquisition to select. The first acquisition has index 0.
# Select one of the time slices and create an RGB image.
# Time slices are numbered from 0 to x and shown in the table above
# Review the clean_percentage values above to select scenes with few clouds
# Clouds will be visible in WHITE and cloud-shadows will be visible in BLACK
rgb(landsat_dataset.isel(time=time_ind), width=12)
plt.show()
Explanation: <p style="color:red";><b>CHANGE INPUTS BELOW
End of explanation
<END_TASK> |
15,842 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steady-state space-charge-limited current with traps
This example shows how to simulate effects of a single trap level on current-voltage characteristics of a single carrier device.
Step1: Model and parameters
Electron only device is simulated, without contact barrier. Note that more trap levels can be included by modifying traps= argument below. Each trap level should have unique name.
Step2: Sweep parameters
For simplicity, the case of absent traps is modeled by putting trap level 1 eV above transport level. This makes trap states effectively unoccupied.
Step3: Result | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pylab as plt
import oedes
import numpy as np
oedes.init_notebook() # for displaying progress bars
Explanation: Steady-state space-charge-limited current with traps
This example shows how to simulate effects of a single trap level on current-voltage characteristics of a single carrier device.
End of explanation
L = 200e-9 # device thickness, m
model = oedes.models.std.electrononly(L, traps=['trap'])
params = {
'T': 300, # K
'electrode0.workfunction': 0, # eV
'electrode1.workfunction': 0, # eV
'electron.energy': 0, # eV
'electron.mu': 1e-9, # m2/(Vs)
'electron.N0': 2.4e26, # 1/m^3
'electron.trap.energy': 0, # eV
'electron.trap.trate': 1e-22, # 1/(m^3 s)
'electron.trap.N0': 6.2e22, # 1/m^3
'electrode0.voltage': 0, # V
'electrode1.voltage': 0, # V
'epsilon_r': 3. # 1
}
Explanation: Model and parameters
Electron only device is simulated, without contact barrier. Note that more trap levels can be included by modifying traps= argument below. Each trap level should have unique name.
End of explanation
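As a hedged sketch of the note above (not executed as part of this example), two trap levels would be declared with two unique names in traps=, each level then getting its own 'electron.<name>.*' parameters; the names 'shallow' and 'deep' and their values are illustrative assumptions, not oedes defaults.
# Illustrative sketch only: declaring two trap levels, each with a unique name and its own parameters.
model_2traps = oedes.models.std.electrononly(L, traps=['shallow', 'deep'])
params_2traps = {k: v for k, v in params.items() if not k.startswith('electron.trap.')}
params_2traps.update({
    'electron.shallow.energy': -0.21,  # eV (illustrative)
    'electron.shallow.trate': 1e-22,   # 1/(m^3 s)
    'electron.shallow.N0': 6.2e22,     # 1/m^3
    'electron.deep.energy': -0.45,     # eV (illustrative)
    'electron.deep.trate': 1e-22,      # 1/(m^3 s)
    'electron.deep.N0': 6.2e22,        # 1/m^3
})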
trapenergy_sweep = oedes.sweep('electron.trap.energy',np.asarray([-0.45, -0.33, -0.21, 1.]))
voltage_sweep = oedes.sweep('electrode0.voltage', np.logspace(-3, np.log10(20.), 100))
Explanation: Sweep parameters
For simplicity, the case of absent traps is modeled by putting trap level 1 eV above transport level. This makes trap states effectively unoccupied.
End of explanation
c=oedes.context(model)
for tdepth,ct in c.sweep(params, trapenergy_sweep):
for _ in ct.sweep(ct.params, voltage_sweep):
pass
v,j = ct.teval(voltage_sweep.parameter_name,'J')
oedes.testing.store(j, rtol=1e-3) # for automatic testing
if tdepth < 0:
label = 'no traps'
else:
label = 'trap depth %s eV' % tdepth
plt.plot(v,j,label=label)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('V')
plt.ylabel(r'$\mathrm{A/m^2}$')
plt.legend(loc=0,frameon=False);
Explanation: Result
End of explanation
<END_TASK> |
15,843 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Bayesian approach with emcee - Test case - 3 free parameters
An example of applying the Bayesian approach with 3 free parameters (erosion rate, time exposure and density), using the emcee package.
For more info about the method used, see the notebook Inference_Notes.
This example (a test case) is based on a generic dataset of 10Be concentration vs. depth, which is drawn from a distribution with given "true" parameters.
This notebook has the following external dependencies
Step1: The mathematical (deterministic, forward) model
An implementation of the mathematical model used for predicting profiles of 10Be concentrations is available in the models Python module (see the notebook Models). The 10Be model assumes that the soil density is constant along the depth profile and that the inheritance is the same for the whole sample of 10Be concentration vs. depth.
Step2: The data
The dataset is generated using the following parameter values. eps is the erosion rate, t is the exposure time, rho is the soil density and inh is the inheritance.
Step3: The gendata Python module is used to generate the dataset (see the notebook Datasets).
Step4: Make a plot of the dataset
Step5: The statistical model used for computing the posterior probability density PPD
Here below we define a data model by the tuple m = (eps, t, rho). It correspond to a given location in the 3-d parameter space. the inheritance is assumed known.
Define the parameter names. It is important to use the same order to further define the priors and bounds tuples!
Step6: Create a pd.Series with the true parameter values. It will be used for plotting purpose.
Step7: Define the prior probability distribution for each free parameter. Here the uniform distribution is used, with given bounds (loc and scale arguments of scipy.stats.uniform are the lower bound and the range, respectively)
Step8: Define (min, max) bounds for each free parameter. It should be given by lower and upper quantiles (lower_qtl, upper_qtl) of the prior distribution. Choose the extreme quantiles (0, 1) if the distribution is uniform. It will be used for plotting purpose and also for constrained optimization (see below).
Step9: Plot the prior probability density for each parameter.
Step10: Define a function that returns the (logarithm of the) prior probability density for a given data model m.
Step11: Define a function that returns the log-likelihood. It is a $n$-dimensional Gaussian ($n$ nucleide concentrations sampled along the depth profile) with the mean given by the formard model and the variance given by the error estimated from the measurements of the nucleide concentration of each sample. This Gaussian implies that (1) the error on each measurement is random, (2) the sampled nucleide concentrations are measured independently of each other, (3) the forward model - i.e., the deterministic model that predicts the nucleide concentration profile - represents the real physics and (4) the values of the non-free parameters of the forward model - e.g., nucleide surface production rate, attenuation lengths... - are exactly known.
Step12: Define a function that returns the log-posterior probability density, according to the Bayes's theorem.
Step13: Sampling the posterior probablility density using MCMC
In our case, the from of the PPD may be highly anisotropic ; it may present high (negative or positive) correlations between its parameters (erosion rate, exposure time, soil density, inheritance). Usually, these relationships are even non-linear.
It is therefore important to use a robust algorithm to sample this complex PPD. The Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler implemented in the emcee package will be more efficient in our case than the standard MCMC algorithms such as the Metropolis-Hasting method.
The emcee sampler allows to define multiple, independent walkers. This requires to first set the initial position of each walker in the parameter space. As shown in the emcee documentation, the author suggests initializing the walkers in a tiny Gaussian ball around the maximum likelihood result. We can obtain the maximum likelihood estimate by applying an optimization algorithm such as one of those implemented in the scipy.optimize module. Note that non-linear optimization usually requires to provide an initial guess.
Given our complex, non-linear, and potentially flat form of the PDD in some areas of the parameter space, we prefer to set the initial positions of the walkers as the maximum likelihood estimates resulting from randomly chosing initial guesses in the parameter space according to the prior probability density. Note that we use a constrained optimization algorithm to ensure that the initial positions are within the bounds defined above.
Step14: We show below the initial guesses and the initial positions of the walkers in a scatter plot.
Step15: We can then setup the emcee sampler and run the MCMC for n_steps iterations starting from the initial positions defined above.
Step16: Let's plot the trace of the MCMC iterations. The red lines show the true values.
Step17: Try plotting only the first samples (e.g., sample_plot_range = slice(0, 1000)). We see that thanks to the initial positions of the walkers, the emcee sampler quickly starts exploring the full posterior distribution. The "burn-in" period is small and we can therefore set a small value for nburn below.
Step18: We can visualize the sampled posterior probability density by joint plots of the MCMC samples. The red lines show the true values.
Step19: Given the samples, it is straightforward to characterize the posterior probability density and estimate its moments.
the PPD mean (if the PPD distribution is strictly Gaussian, it also corresponds to the MAP (Maximum A Posteriori) and therefore the most probable model)
Step20: the sample which has the max PPD value (i.e., the most probable sampled model)
Step21: the PPD quantiles (useful for delineating the Bayesian confidence intervals or credible intervals for each free parameter)
Step22: We finally plot the nucleide concentration profiles (blue dots | <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import pandas as pd
from scipy import stats
from scipy import optimize
import emcee
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
clr_plt = sns.color_palette()
Explanation: Bayesian approach with emcee - Test case - 3 free parameters
An example of applying the Bayesian approach with 3 free parameters (erosion rate, time exposure and density), using the emcee package.
For more info about the method used, see the notebook Inference_Notes.
This example (a test case) is based on a generic dataset of 10Be concentration vs. depth, which is drawn from a distribution with given "true" parameters.
This notebook has the following external dependencies:
End of explanation
import models
Explanation: The mathematical (deterministic, forward) model
An implementation of the mathematical model used for predicting profiles of 10Be concentrations is available in the models Python module (see the notebook Models). The 10Be model assumes that the soil density is constant along the depth profile and that the inheritance is the same for the whole sample of 10Be concentration vs. depth.
End of explanation
# the true parameters
eps_true = 5e-4
t_true = 3e5
rho_true = 2.
inh_true = 5e4
# depths and sample size
depth_minmax = [50, 500]
N = 8
# perturbations
err_magnitude = 20.
err_variability = 5.
Explanation: The data
The dataset is generated using the following parameter values. eps is the erosion rate, t is the exposure time, rho is the soil density and inh is the inheritance.
End of explanation
import gendata
profile_data = gendata.generate_dataset(
models.C_10Be,
(eps_true, t_true, rho_true, inh_true),
zlimits=depth_minmax,
n=N,
err=(err_magnitude, err_variability)
)
Explanation: The gendata Python module is used to generate the dataset (see the notebook Datasets).
End of explanation
sns.set_context('notebook')
fig, ax = plt.subplots()
profile_data.plot(
y='depth', x='C', xerr='std',
kind="scatter", ax=ax, rot=45
)
ax.invert_yaxis()
Explanation: Make a plot of the dataset
End of explanation
param_names = 'erosion rate', 'time exposure', 'soil density'
Explanation: The statistical model used for computing the posterior probability density PPD
Here below we define a data model by the tuple m = (eps, t, rho). It corresponds to a given location in the 3-d parameter space. The inheritance is assumed known.
Define the parameter names. It is important to use the same order to further define the priors and bounds tuples!
End of explanation
param_true = pd.Series((eps_true, t_true, rho_true), index=param_names)
Explanation: Create a pd.Series with the true parameter values. It will be used for plotting purpose.
End of explanation
eps_prior = stats.uniform(loc=0., scale=1e-3)
t_prior = stats.uniform(loc=0., scale=8e5)
rho_prior = stats.uniform(loc=1.6, scale=1.)
priors = eps_prior, t_prior, rho_prior
param_priors = pd.Series(priors, index=param_names)
Explanation: Define the prior probability distribution for each free parameter. Here the uniform distribution is used, with given bounds (loc and scale arguments of scipy.stats.uniform are the lower bound and the range, respectively)
End of explanation
def get_bounds(f, lower_qtl=0., upper_qtl=1.):
return f.ppf(lower_qtl), f.ppf(upper_qtl)
eps_bounds = get_bounds(eps_prior, 0, 1)
t_bounds = get_bounds(t_prior, 0, 1)
rho_bounds = get_bounds(rho_prior, 0, 1)
bounds = eps_bounds, t_bounds, rho_bounds
param_bounds = pd.DataFrame(
np.array(bounds), columns=('min', 'max'), index=param_names
)
param_bounds
Explanation: Define (min, max) bounds for each free parameter. It should be given by lower and upper quantiles (lower_qtl, upper_qtl) of the prior distribution. Choose the extreme quantiles (0, 1) if the distribution is uniform. It will be used for plotting purpose and also for constrained optimization (see below).
End of explanation
fig, axes = plt.subplots(1, 3, figsize=(13, 3))
for ax, p, b, name in zip(axes.flatten(),
param_priors.values,
param_bounds.values,
param_names):
xmin, xmax = b
eps = 0.1 * (xmax - xmin)
x = np.linspace(xmin - eps, xmax + eps, 200)
d = p.pdf(x)
ax.plot(x, d)
ax.fill(x, d, alpha=0.4)
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
plt.setp(ax, ylim=(0, None), yticklabels=[],
xlabel=name)
plt.subplots_adjust()
Explanation: Plot the prior probability density for each parameter.
End of explanation
def lnprior(m):
lps = [p.logpdf(v) for (p, v) in zip(priors, m)]
if not np.all(np.isfinite(lps)):
return -np.inf
return np.sum(lps)
Explanation: Define a function that returns the (logarithm of the) prior probability density for a given data model m.
End of explanation
def lnlike(m):
eps, t, rho = m
mean = models.C_10Be(profile_data['depth'].values,
eps, t, rho, inh_true)
var = profile_data['std']**2
lngauss = -0.5 * np.sum(
np.log(2. * np.pi * var) +
(profile_data['C'] - mean)**2 / var
)
return lngauss
Explanation: Define a function that returns the log-likelihood. It is an $n$-dimensional Gaussian ($n$ nucleide concentrations sampled along the depth profile) with the mean given by the forward model and the variance given by the error estimated from the measurements of the nucleide concentration of each sample. This Gaussian implies that (1) the error on each measurement is random, (2) the sampled nucleide concentrations are measured independently of each other, (3) the forward model - i.e., the deterministic model that predicts the nucleide concentration profile - represents the real physics and (4) the values of the non-free parameters of the forward model - e.g., nucleide surface production rate, attenuation lengths... - are exactly known.
End of explanation
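# A quick, optional sanity check (sketch): the log-likelihood should be finite
# at the true parameters and typically much lower for an implausible model
# (here, an exposure time far from the true value).
print("lnlike at the true parameters:", lnlike((eps_true, t_true, rho_true)))
print("lnlike at an implausible model:", lnlike((eps_true, 1e3, rho_true)))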
def lnprob(m):
lp = lnprior(m)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(m)
Explanation: Define a function that returns the log-posterior probability density, according to the Bayes's theorem.
End of explanation
n_params, n_walkers = len(param_names), 100
# randomly choose initial guesses according to the prior
init_guesses = np.array(
[p.rvs(size=n_walkers) for p in priors]
).T
# perform bounded non-linear optimization from each initial guess
op_lnlike = lambda *args: -lnlike(*args)
init_walkers = np.empty_like(init_guesses)
for i, g in enumerate(init_guesses):
res = optimize.minimize(op_lnlike, g,
method='TNC',
bounds=bounds)
init_walkers[i] = res['x']
Explanation: Sampling the posterior probability density using MCMC
In our case, the form of the PPD may be highly anisotropic; it may present high (negative or positive) correlations between its parameters (erosion rate, exposure time, soil density, inheritance). Usually, these relationships are even non-linear.
It is therefore important to use a robust algorithm to sample this complex PPD. The Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler implemented in the emcee package will be more efficient in our case than the standard MCMC algorithms such as the Metropolis-Hastings method.
The emcee sampler allows us to define multiple, independent walkers. This requires first setting the initial position of each walker in the parameter space. As shown in the emcee documentation, the author suggests initializing the walkers in a tiny Gaussian ball around the maximum likelihood result. We can obtain the maximum likelihood estimate by applying an optimization algorithm such as one of those implemented in the scipy.optimize module. Note that non-linear optimization usually requires providing an initial guess.
Given our complex, non-linear, and potentially flat form of the PPD in some areas of the parameter space, we prefer to set the initial positions of the walkers as the maximum likelihood estimates resulting from randomly choosing initial guesses in the parameter space according to the prior probability density. Note that we use a constrained optimization algorithm to ensure that the initial positions are within the bounds defined above.
End of explanation
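# Optional (sketch): a quick per-parameter summary of where the walkers start,
# built from the quantities defined just above.
print(pd.DataFrame(init_walkers, columns=param_names).describe())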
df_init_guesses = pd.DataFrame(init_guesses, columns=param_names)
df_init_walkers = pd.DataFrame(init_walkers, columns=param_names)
def scatter_pos(xcol, ycol, ax):
df_init_guesses.plot(
kind='scatter', x=xcol, y=ycol,
alpha=0.5, ax=ax, color=clr_plt[0], label='init guesses'
)
df_init_walkers.plot(
kind='scatter', x=xcol, y=ycol,
alpha=0.8, ax=ax, color=clr_plt[1], label='init walkers'
)
legend = ax.legend(frameon=True, loc='lower right')
legend.get_frame().set_facecolor('w')
plt.setp(ax, xlim=param_bounds.loc[xcol],
ylim=param_bounds.loc[ycol])
fig, ax = plt.subplots(2, 2, figsize=(12,12))
scatter_pos('erosion rate', 'time exposure', ax[0][0])
scatter_pos('soil density', 'time exposure', ax[0][1])
scatter_pos('erosion rate', 'soil density', ax[1][0])
Explanation: We show below the initial guesses and the initial positions of the walkers in a scatter plot.
End of explanation
sampler = emcee.EnsembleSampler(n_walkers, n_params, lnprob)
n_steps = 500
sampler.run_mcmc(init_walkers, n_steps)
mcmc_samples = pd.DataFrame(sampler.flatchain,
columns=param_names)
Explanation: We can then setup the emcee sampler and run the MCMC for n_steps iterations starting from the initial positions defined above.
End of explanation
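# Optional MCMC health check (sketch): for the affine-invariant ensemble
# sampler, the mean acceptance fraction typically falls around ~0.2-0.5.
print("Mean acceptance fraction: {0:.3f}".format(
    np.mean(sampler.acceptance_fraction)))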
sample_plot_range = slice(None)
axes = mcmc_samples[sample_plot_range].plot(
kind='line', subplots=True,
figsize=(10, 8), color=clr_plt[0]
)
for i, ax in enumerate(axes):
ax.axhline(param_true.iloc[i], color='r')
Explanation: Let's plot the trace of the MCMC iterations. The red lines show the true values.
End of explanation
nburn = 100
mcmc_kept_samples = pd.DataFrame(
sampler.chain[:, nburn:, :].reshape((-1, n_params)),
columns=param_names
)
Explanation: Try plotting only the first samples (e.g., sample_plot_range = slice(0, 1000)). We see that thanks to the initial positions of the walkers, the emcee sampler quickly starts exploring the full posterior distribution. The "burn-in" period is small and we can therefore set a small value for nburn below.
End of explanation
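# Optional (sketch): the integrated autocorrelation time is another way to
# judge burn-in and thinning. Depending on the emcee version this may warn or
# raise if the chain is too short, hence the try/except.
try:
    print("Autocorrelation time per parameter:", sampler.get_autocorr_time())
except Exception as e:
    print("Autocorrelation estimate unavailable:", e)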
def jointplot_density(xcol, ycol):
p = sns.jointplot(
xcol, ycol,
data=mcmc_kept_samples,
xlim=(mcmc_kept_samples[xcol].min(),
mcmc_kept_samples[xcol].max()),
ylim=(mcmc_kept_samples[ycol].min(),
mcmc_kept_samples[ycol].max()),
joint_kws={'alpha': 0.02}
)
p.ax_joint.axhline(param_true.loc[ycol], color='r')
p.ax_joint.axvline(param_true.loc[xcol], color='r')
jointplot_density('erosion rate', 'time exposure')
jointplot_density('soil density', 'time exposure')
jointplot_density('erosion rate', 'soil density')
Explanation: We can visualize the sampled posterior probability density by joint plots of the MCMC samples. The red lines show the true values.
End of explanation
mcmc_kept_samples.mean()
Explanation: Given the samples, it is straightforward to characterize the posterior probability density and estimate its moments.
the PPD mean (if the PPD distribution is strictly Gaussian, it also corresponds to the MAP (Maximum A Posteriori) and therefore the most probable model)
End of explanation
max_ppd = sampler.lnprobability[:, nburn:].reshape((-1)).argmax()
mcmc_kept_samples.iloc[max_ppd]
Explanation: the sample which have the max PPD value (i.e., the most probable sampled model)
End of explanation
percentiles = np.array([2.5, 5, 25, 50, 75, 95, 97.5])
mcmc_kept_samples.quantile(percentiles * 0.01)
Explanation: the PPD quantiles (useful for delineating the Bayesian confidence intervals or credible intervals for each free parameter)
End of explanation
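# Sketch: report each free parameter as its posterior median with a 95%
# credible interval, using the quantiles of the kept samples.
q = mcmc_kept_samples.quantile([0.025, 0.5, 0.975])
for name in param_names:
    lo, med, hi = q[name]
    print("{0}: {1:.3g} (95% CI: {2:.3g} to {3:.3g})".format(name, med, lo, hi))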
fig, ax = plt.subplots()
# plot the profile data with error bars
profile_data.plot(
y='depth', x='C', xerr='std',
kind="scatter", ax=ax, rot=45
)
# plot 100 randomly chosen profiles from the MCMC samples
depths = np.linspace(profile_data['depth'].min(),
profile_data['depth'].max(),
100)
for i in np.random.randint(len(mcmc_kept_samples), size=100):
eps, t, rho = mcmc_kept_samples.iloc[i]
c = models.C_10Be(depths, eps, t, rho, inh_true)
ax.plot(c, depths, color='grey', alpha=0.1)
# plot the true profile
c_true = models.C_10Be(depths, eps_true, t_true,
rho_true, inh_true)
ax.plot(c_true, depths, color='r', label='true model')
ax.invert_yaxis()
Explanation: We finally plot the nucleide concentration profiles (blue dots: data w/ error bars, red line: true profile, grey lines: randomly chosen profiles from MCMC samples).
End of explanation
<END_TASK> |
15,844 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using Dask EntitySets (BETA)
Creating a feature matrix from a very large dataset can be problematic if the underlying pandas dataframes that make up the EntitySet cannot easily fit in memory. To help get around this issue, Featuretools supports creating EntitySet objects from Dask dataframes. A Dask EntitySet can then be passed to featuretools.dfs or featuretools.calculate_feature_matrix to create a feature matrix, which will be returned as a Dask dataframe. In addition to working on larger than memory datasets, this approach also allows users to take advantage of the parallel and distributed processing capabilities offered by Dask.
This guide will provide an overview of how to create a Dask EntitySet and then generate a feature matrix from it. If you are already familiar with creating a feature matrix starting from pandas DataFrames, this process will seem quite familiar, as there are no differences in the process. There are, however, some limitations when using Dask dataframes, and those limitations are reviewed in more detail below.
Creating EntitySets
For this example, we will create a very small pandas DataFrame and then convert this into a Dask DataFrame to use in the remainder of the process. Normally when using Dask, you would just read your data directly into a Dask DataFrame without the intermediate step of using pandas.
Step1: Now that we have our Dask DataFrame, we can start to create the EntitySet. Inferring Woodwork logical types for the columns in a Dask dataframe can be computationally expensive. To avoid this expense, logical type inference can be skipped by supplying a dictionary of logical types using the logical_types parameter when calling es.add_dataframe(). Logical types can be specified as Woodwork LogicalType classes, or their equivalent string representation. For more information refer to the Woodwork Typing in Featuretools guide.
Aside from supplying the logical types, the rest of the process of creating an EntitySet is the same as if we were using pandas DataFrames.
Step2: Notice that when we print our EntitySet, the number of rows for the DataFrame named dask_input_df is returned as a Dask Delayed object. This is because obtaining the length of a Dask DataFrame may require an expensive compute operation to sum up the lengths of all the individual partitions that make up the DataFrame and that operation is not performed by default.
Running DFS
We can pass the EntitySet we created above to featuretools.dfs in order to create a feature matrix. If the EntitySet we pass to dfs is made of Dask DataFrames, the feature matrix we get back will be a Dask DataFrame.
Step3: This feature matrix can be saved to disk or computed and brought into memory, using the appropriate Dask DataFrame methods.
Step4: While this is a simple example to illustrate the process of using Dask DataFrames with Featuretools, this process will also work with an EntitySet containing multiple dataframes, as well as with aggregation primitives.
Limitations
The key functionality of Featuretools is available for use with a Dask EntitySet, and work is ongoing to add the remaining functionality that is available when using a pandas EntitySet. There are, however, some limitations to be aware of when creating a Dask Entityset and then using it to generate a feature matrix. The most significant limitations are reviewed in more detail in this section.
Supported Primitives
When creating a feature matrix from a Dask EntitySet, only certain primitives can be used. Primitives that rely on the order of the entire DataFrame or require an entire column for computation are currently not supported when using a Dask EntitySet. Multivariable and time-dependent aggregation primitives also are not currently supported.
To obtain a list of the primitives that can be used with a Dask EntitySet, you can call featuretools.list_primitives(). This will return a table of all primitives. Any primitive that can be used with a Dask EntitySet will have a value of True in the dask_compatible column.
Step5: Primitive Limitations
At this time, custom primitives created with featuretools.primitives.make_trans_primitive() or featuretools.primitives.make_agg_primitive() cannot be used for running deep feature synthesis on a Dask EntitySet. While it is possible to create custom primitives for use with a Dask EntitySet by extending the proper primitive class, there are several potential problems in doing so, and those issues are beyond the scope of this guide.
DataFrame Limitations
Featuretools stores the DataFrames that make up an EntitySet as Woodwork DataFrames which include additional typing information about the columns that are in the DataFrame. When adding a DataFrame to an EntitySet, Woodwork will attempt to infer the logical types for any columns that do not have a logical type defined. This inference process can be quite expensive for Dask DataFrames. In order to skip type inference and speed up the process of adding a Dask DataFrame to an EntitySet, users can specify the logical type to use for each column in the DataFrame. A list of available logical types can be obtained by running featuretools.list_logical_types(). To learn more about the limitations of a Dask dataframe with Woodwork typing, see the Woodwork guide on Dask dataframes.
By default, Woodwork checks that pandas DataFrames have unique index values. Because performing this same check with Dask would require an expensive compute operation, this check is not performed when adding a Dask DataFrame to an EntitySet. When using Dask DataFrames, users must ensure that the supplied index values are unique.
When using a pandas DataFrames, the ordering of the underlying DataFrame rows is maintained by Featuretools. For a Dask DataFrame, the ordering of the DataFrame rows is not guaranteed, and Featuretools does not attempt to maintain row order. If ordering is important, close attention must be paid to any output to avoid issues.
EntitySet Limitations
When creating a Featuretools EntitySet that will be made of Dask DataFrames, all of the DataFrames used to create the EntitySet must be of the same type, either all Dask DataFrames or all pandas DataFrames. Featuretools does not support creating an EntitySet containing a mix of Dask and pandas DataFrames.
Additionally, EntitySet.add_interesting_values() cannot be used in Dask EntitySets to find interesting values; however, it can be used to set a column's interesting values with the values parameter.
Python Code:
import featuretools as ft
import pandas as pd
import dask.dataframe as dd
id = [0, 1, 2, 3, 4]
values = [12, -35, 14, 103, -51]
df = pd.DataFrame({"id": id, "values": values})
dask_df = dd.from_pandas(df, npartitions=2)
dask_df
Explanation: Using Dask EntitySets (BETA)
Creating a feature matrix from a very large dataset can be problematic if the underlying pandas dataframes that make up the EntitySet cannot easily fit in memory. To help get around this issue, Featuretools supports creating EntitySet objects from Dask dataframes. A Dask EntitySet can then be passed to featuretools.dfs or featuretools.calculate_feature_matrix to create a feature matrix, which will be returned as a Dask dataframe. In addition to working on larger than memory datasets, this approach also allows users to take advantage of the parallel and distributed processing capabilities offered by Dask.
This guide will provide an overview of how to create a Dask EntitySet and then generate a feature matrix from it. If you are already familiar with creating a feature matrix starting from pandas DataFrames, this process will seem quite familiar, as there are no differences in the process. There are, however, some limitations when using Dask dataframes, and those limitations are reviewed in more detail below.
Creating EntitySets
For this example, we will create a very small pandas DataFrame and then convert this into a Dask DataFrame to use in the remainder of the process. Normally when using Dask, you would just read your data directly into a Dask DataFrame without the intermediate step of using pandas.
End of explanation
from woodwork.logical_types import Double, Integer
es = ft.EntitySet(id="dask_es")
es = es.add_dataframe(dataframe_name="dask_input_df",
dataframe=dask_df,
index="id",
logical_types={"id": Integer, "values": Double})
es
Explanation: Now that we have our Dask DataFrame, we can start to create the EntitySet. Inferring Woodwork logical types for the columns in a Dask dataframe can be computationally expensive. To avoid this expense, logical type inference can be skipped by supplying a dictionary of logical types using the logical_types parameter when calling es.add_dataframe(). Logical types can be specified as Woodwork LogicalType classes, or their equivalent string representation. For more information refer to the Woodwork Typing in Featuretools guide.
Aside from supplying the logical types, the rest of the process of creating an EntitySet is the same as if we were using pandas DataFrames.
End of explanation
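# Optional (sketch): confirm the Woodwork logical types that were assigned
# (rather than inferred) for each column of the Dask-backed dataframe.
for col_name, col_schema in es['dask_input_df'].ww.columns.items():
    print(col_name, "->", col_schema.logical_type)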
feature_matrix, features = ft.dfs(entityset=es,
target_dataframe_name="dask_input_df",
trans_primitives=["negate"],
max_depth=1)
feature_matrix
Explanation: Notice that when we print our EntitySet, the number of rows for the DataFrame named dask_input_df is returned as a Dask Delayed object. This is because obtaining the length of a Dask DataFrame may require an expensive compute operation to sum up the lengths of all the individual partitions that make up the DataFrame and that operation is not performed by default.
Running DFS
We can pass the EntitySet we created above to featuretools.dfs in order to create a feature matrix. If the EntitySet we pass to dfs is made of Dask DataFrames, the feature matrix we get back will be a Dask DataFrame.
End of explanation
fm_computed = feature_matrix.compute()
fm_computed
Explanation: This feature matrix can be saved to disk or computed and brought into memory, using the appropriate Dask DataFrame methods.
End of explanation
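# Alternatively (sketch), a larger-than-memory result would usually be written
# straight to disk without computing it; the path below is just an example and
# this assumes a parquet engine (pyarrow or fastparquet) is installed.
# feature_matrix.to_parquet("dask_feature_matrix/")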
primitives_df = ft.list_primitives()
dask_compatible_df = primitives_df[primitives_df["dask_compatible"] == True]
dask_compatible_df.head()
dask_compatible_df.tail()
Explanation: While this is a simple example to illustrate the process of using Dask DataFrames with Featuretools, this process will also work with an EntitySet containing multiple dataframes, as well as with aggregation primitives.
Limitations
The key functionality of Featuretools is available for use with a Dask EntitySet, and work is ongoing to add the remaining functionality that is available when using a pandas EntitySet. There are, however, some limitations to be aware of when creating a Dask Entityset and then using it to generate a feature matrix. The most significant limitations are reviewed in more detail in this section.
Supported Primitives
When creating a feature matrix from a Dask EntitySet, only certain primitives can be used. Primitives that rely on the order of the entire DataFrame or require an entire column for computation are currently not supported when using a Dask EntitySet. Multivariable and time-dependent aggregation primitives also are not currently supported.
To obtain a list of the primitives that can be used with a Dask EntitySet, you can call featuretools.list_primitives(). This will return a table of all primitives. Any primitive that can be used with a Dask EntitySet will have a value of True in the dask_compatible column.
End of explanation
values_dict = {'values': [12, 103]}
es.add_interesting_values(dataframe_name='dask_input_df', values=values_dict)
es['dask_input_df'].ww.columns['values'].metadata
Explanation: Primitive Limitations
At this time, custom primitives created with featuretools.primitives.make_trans_primitive() or featuretools.primitives.make_agg_primitive() cannot be used for running deep feature synthesis on a Dask EntitySet. While it is possible to create custom primitives for use with a Dask EntitySet by extending the proper primitive class, there are several potential problems in doing so, and those issues are beyond the scope of this guide.
DataFrame Limitations
Featuretools stores the DataFrames that make up an EntitySet as Woodwork DataFrames which include additional typing information about the columns that are in the DataFrame. When adding a DataFrame to an EntitySet, Woodwork will attempt to infer the logical types for any columns that do not have a logical type defined. This inference process can be quite expensive for Dask DataFrames. In order to skip type inference and speed up the process of adding a Dask DataFrame to an EntitySet, users can specify the logical type to use for each column in the DataFrame. A list of available logical types can be obtained by running featuretools.list_logical_types(). To learn more about the limitations of a Dask dataframe with Woodwork typing, see the Woodwork guide on Dask dataframes.
By default, Woodwork checks that pandas DataFrames have unique index values. Because performing this same check with Dask would require an expensive compute operation, this check is not performed when adding a Dask DataFrame to an EntitySet. When using Dask DataFrames, users must ensure that the supplied index values are unique.
When using a pandas DataFrames, the ordering of the underlying DataFrame rows is maintained by Featuretools. For a Dask DataFrame, the ordering of the DataFrame rows is not guaranteed, and Featuretools does not attempt to maintain row order. If ordering is important, close attention must be paid to any output to avoid issues.
EntitySet Limitations
When creating a Featuretools EntitySet that will be made of Dask DataFrames, all of the DataFrames used to create the EntitySet must be of the same type, either all Dask DataFrames or all pandas DataFrames. Featuretools does not support creating an EntitySet containing a mix of Dask and pandas DataFrames.
Additionally, EntitySet.add_interesting_values() cannot be used in Dask EntitySets to find interesting values; however, it can be used to set a column's interesting values with the values parameter.
End of explanation
<END_TASK> |
15,845 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of the land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
<END_TASK> |
15,846 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Compute the power spectral density of raw data
This script shows how to compute the power spectral density (PSD)
of measurements on a raw dataset. It also shows the effect of applying SSP
to the data to reduce ECG and EOG artifacts.
Step1: Load data
We'll load a sample MEG dataset, along with SSP projections that will
allow us to reduce EOG and ECG artifacts. For more information about
reducing artifacts, see the preprocessing section in documentation.
Step2: Plot the raw PSD
First we'll visualize the raw PSD of our data. We'll do this on all of the
channels first. Note that there are several parameters to the
Step3: Plot a cleaned PSD
Next we'll focus the visualization on a subset of channels.
This can be useful for identifying particularly noisy channels or
investigating how the power spectrum changes across channels.
We'll visualize how this PSD changes after applying some standard
filtering techniques. We'll first apply the SSP projections, which is
accomplished with the proj=True kwarg. We'll then perform a notch filter
to remove particular frequency bands.
Step4: Alternative functions for PSDs
There are also several functions in MNE that create a PSD using a Raw
object. These are in the | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, read_proj, read_selection
from mne.datasets import sample
from mne.time_frequency import psd_multitaper
print(__doc__)
Explanation: Compute the power spectral density of raw data
This script shows how to compute the power spectral density (PSD)
of measurements on a raw dataset. It also shows the effect of applying SSP
to the data to reduce ECG and EOG artifacts.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog-proj.fif'
tmin, tmax = 0, 60 # use the first 60s of data
# Setup for reading the raw data (to save memory, crop before loading)
raw = io.read_raw_fif(raw_fname).crop(tmin, tmax).load_data()
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Add SSP projection vectors to reduce EOG and ECG artifacts
projs = read_proj(proj_fname)
raw.add_proj(projs, remove_existing=True)
fmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
Explanation: Load data
We'll load a sample MEG dataset, along with SSP projections that will
allow us to reduce EOG and ECG artifacts. For more information about
reducing artifacts, see the preprocessing section in documentation.
End of explanation
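# Optional (sketch): list the SSP projectors we just attached; each one has a
# description and an "active" flag showing whether it has been applied yet.
for p in raw.info['projs']:
    print(p['desc'], '| active:', p['active'])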
raw.plot_psd(area_mode='range', tmax=10.0, show=False, average=True)
Explanation: Plot the raw PSD
First we'll visualize the raw PSD of our data. We'll do this on all of the
channels first. Note that there are several parameters to the
:meth:mne.io.Raw.plot_psd method, some of which will be explained below.
End of explanation
# Pick MEG magnetometers in the Left-temporal region
selection = read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads', selection=selection)
# Let's just look at the first few channels for demonstration purposes
picks = picks[:4]
plt.figure()
ax = plt.axes()
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=False, ax=ax, color=(0, 0, 1), picks=picks,
show=False, average=True)
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(0, 1, 0), picks=picks,
show=False, average=True)
# And now do the same with SSP + notch filtering
# Pick all channels for notch since the SSP projection mixes channels together
raw.notch_filter(np.arange(60, 241, 60), n_jobs=1, fir_design='firwin')
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(1, 0, 0), picks=picks,
show=False, average=True)
ax.set_title('Four left-temporal magnetometers')
plt.legend(ax.lines[::3], ['Without SSP', 'With SSP', 'SSP + Notch'])
Explanation: Plot a cleaned PSD
Next we'll focus the visualization on a subset of channels.
This can be useful for identifying particularly noisy channels or
investigating how the power spectrum changes across channels.
We'll visualize how this PSD changes after applying some standard
filtering techniques. We'll first apply the SSP projections, which is
accomplished with the proj=True kwarg. We'll then perform a notch filter
to remove particular frequency bands.
End of explanation
f, ax = plt.subplots()
psds, freqs = psd_multitaper(raw, low_bias=True, tmin=tmin, tmax=tmax,
fmin=fmin, fmax=fmax, proj=True, picks=picks,
n_jobs=1)
psds = 10 * np.log10(psds)
psds_mean = psds.mean(0)
psds_std = psds.std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD', xlabel='Frequency',
ylabel='Power Spectral Density (dB)')
plt.show()
Explanation: Alternative functions for PSDs
There are also several functions in MNE that create a PSD using a Raw
object. These are in the :mod:mne.time_frequency module and begin with
psd_*. For example, we'll use a multitaper method to compute the PSD
below.
End of explanation
<END_TASK> |
15,847 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Developing a model for actuator line turbulence injection
Step6: From these plots, it looks like a reasonable model would be a piecewise linear regression. Lower values of $C_d$ will match better, of course.
Step7: It looks like the turbulence quantities basically "saturate" once we hit static stall, so we can try fitting a line with zero slope (taking the average value since they're evenly spaced?) | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pxl.styleplot import set_sns
set_sns()
import os
from scipy.interpolate import interp1d
import scipy.stats
dataset_name = "NACA0021_2.0e+05.csv"
dataset_url = "https://raw.githubusercontent.com/petebachant/NACAFoil-OpenFOAM/master/processed/NACA0021_2.0e%2B05.csv"
local_fpath = "data/" + dataset_name
def download():
    """Download data and save locally."""
df = pd.read_csv(dataset_url)
df.to_csv(local_fpath, index=False)
if not os.path.isdir("data"):
os.mkdir("data")
if not os.path.isfile(local_fpath):
download()
def lookup(df, alpha_deg, quantity="cl"):
    """Lookup specified quantity at given angle of attack using linear interpolation."""
alpha_deg = np.asarray(alpha_deg)
f = interp1d(df.alpha_deg, df[quantity])
return float(f(alpha_deg))
def find_alpha_ss(df, threshold=0.02):
    """Find static stall angle in degrees. Threshold is the change in $C_d$ per degree
    of angle of attack where static stall occurs."""
d_cd_d_alpha = np.diff(df.cd)/np.diff(df.alpha_deg)
n = np.where(d_cd_d_alpha > threshold)[0]
alpha_ss = df.alpha_deg.iloc[n]
alpha_ss = alpha_ss.iloc[0]
return alpha_ss
def load():
return pd.read_csv(local_fpath)
labels = {"cd": "$C_d$",
"alpha_deg": r"$\alpha$ (deg.)",
"k": "$k$",
"epsilon": r"$\epsilon$"}
x = "cd"
marker = "o"
df = load()
print("Static stall angle:", find_alpha_ss(df))
fig, ax = plt.subplots(ncols=2, figsize=(7.5, 3.25))
ax[0].plot(df[x], df.k, marker=marker)
ax[1].plot(df[x], df.epsilon, marker=marker)
ax[0].set_ylabel("$k$")
ax[1].set_ylabel("$\epsilon$")
for a in ax:
a.set_xlabel(labels[x])
fig.tight_layout()
plt.show()
Explanation: Developing a model for actuator line turbulence injection
End of explanation
# Use scipy.stats
def fit(df, quantity="k", threshold=0.02):
    """Calculate linear fits for a quantity."""
cd_thresh = lookup(df, find_alpha_ss(df, threshold=threshold), quantity="cd")
data = {"cd_thresh": cd_thresh}
for highlow in ["low", "high"]:
if highlow == "low":
dfi = df[df.cd <= cd_thresh]
elif highlow == "high":
dfi = df[df.cd > cd_thresh]
slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(dfi.cd, dfi[quantity])
data["slope_" + quantity + "_" + highlow] = slope
data["intercept_" + quantity + "_" + highlow] = intercept
data["std_err_" + quantity + "_" + highlow] = std_err
data["r_value_" + quantity + "_" + highlow] = r_value
data["cd_fit_" + highlow] = np.linspace(0, dfi.cd.max() + 0.05, num=100)
data[quantity + "_fit_" + highlow] = data["cd_fit_" + highlow]*slope + intercept
if intercept < 0:
sign = "-"
else:
sign = "+"
data[quantity + "_" + highlow + "_eq"] = r"${:.3f}C_d {} {:.4f}$".format(slope, sign, np.abs(intercept))
return data
def fit_all(df):
data = {}
for q in ["k", "epsilon"]:
data.update(fit(df, q))
return data
fits = fit_all(df)
for i in ["slope", "intercept", "r_value"]:
for j in ["k", "epsilon"]:
for k in ["low", "high"]:
key = "_".join([i, j, k])
print(key + ":", fits[key])
# Reference values for the 0012 case (from an earlier run), kept for comparison:
# slope_k_low: 0.593388327002
# slope_k_high: 0.0143189026664
# slope_epsilon_low: 0.764339209867
# slope_epsilon_high: 0.0136409303959
# intercept_k_low: -0.00473891507231
# intercept_k_high: 0.0775546672942
# intercept_epsilon_low: -0.00151541577433
# intercept_epsilon_high: 0.0966371905465
fig, ax = plt.subplots(ncols=2, figsize=(7.5, 3.25))
for a, q in zip(ax, ["k", "epsilon"]):
a.plot(df.cd, df[q], marker=marker, label="")
a.plot(fits["cd_fit_low"], fits[q + "_fit_low"], linestyle="--", label=fits[q + "_low_eq"])
# a.plot(fits["cd_fit_high"], fits[q + "_fit_high"], linestyle="--", label=fits[q + "_high_eq"])
# plt.vlines(lookup(df, find_alpha_ss(df, threshold=0.03), quantity="cd"), -0.02, 0.1)
a.set_xlabel(labels["cd"])
a.set_ylabel(labels[q])
a.legend(loc="lower right")
fig.tight_layout()
plt.show()
plt.plot(df.alpha_deg, df.cd, marker="o", label=labels["cd"])
plt.plot(df.alpha_deg, df.k, marker="s", label=labels["k"])
plt.plot(df.alpha_deg, df.epsilon, marker="^", label=labels["epsilon"])
plt.legend(loc="upper left")
plt.xlabel(labels["alpha_deg"])
plt.tight_layout()
plt.show()
Explanation: From these plots, it looks like a reasonable model would be a piecewise linear regression. Lower values of $C_d$ will match better, of course.
End of explanation
print("k saturation point:", df.k[df.alpha_deg > find_alpha_ss(df, threshold=0.02)].mean())
print("epsilon saturation point:", df.epsilon[df.alpha_deg > find_alpha_ss(df, threshold=0.02)].mean())
Explanation: It looks like the turbulence quantities basically "saturate" once we hit static stall, so we can try fitting a line with zero slope (taking the average value since they're evenly spaced?)
End of explanation
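# Sketch of the piecewise model suggested above: linear in C_d up to the
# static-stall drag level, then held at the saturated (mean post-stall) value.
# The helper below is illustrative and reuses quantities computed earlier.
def saturated_fit(cd, quantity="k"):
    cd = np.asarray(cd, dtype=float)
    sat = df[quantity][df.alpha_deg > find_alpha_ss(df, threshold=0.02)].mean()
    linear = fits["slope_" + quantity + "_low"]*cd + fits["intercept_" + quantity + "_low"]
    return np.where(cd <= fits["cd_thresh"], linear, sat)
print("k model at cd=0.05:", saturated_fit(0.05))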
<END_TASK> |
15,848 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Class 01
Big Data Ingesting
Step1: The next step will be to copy the data file that we will be using for this tutorial into the same folder as these notes. We will be looking at a couple of different types of data sets. We'll start with a simple data set that appears to be a functional set of data where one output column depends on the input columns of the data. In this case, we're looking at a set of patient data where there are a handful of input variables that may feed into the likelihood that the patient will develop type 2 diabetes. The output column is a quantitative measure of disease progression one year after baseline measurements. (http
Step2: Now that we've loaded the data in, the first thing to do is to take a look at the raw data. We can look at the first 5 rows (the head of the data set) by doing the following.
Step3: Before we move forward, note that there is a strange value in the first row under 'GLU'
Step4: So we see the first row is gone. That's what we wanted. However, this doesn't really tell us much by itself. It is better to start investigating how the output variable ('Target' in this case) depends on the inputs. We'll visualize the data one at a time to look at this. We'll make a scatter plot where we look at the Target as a function of the Age column. The first entry provides the 'x' values where the second provides the 'y' values. The final input tells the plotting software to plot the data points as dots, not connected lines. We'll almost always use this feature.
Step5: This doesn't tell us much. It looks like there isn't a large dependence on age - otherwise we would have seen something more specific than a large blob of data. Let's try other inputs. We'll plot a bunch of them in a row.
Jupyter Hint
Step6: It looks like there are some of these, like BMI, that as the BMI goes up, so does the Target.
Import Classification Data
There is another type of data set where we have any number of input variables, but the output is no longer a continuous number, but rather it is a class. By that we mean that it is one of a finite number of possibilities. For example, in this next data set, we are looking at the characteristics of three different iris flowers. The measurements apply to one of the three types
Step7: As you can see, the 'target' column is no longer numerical, but a text entry that is one of the three possible iris varieties. We also see that the default column headings are a bit long and will get tiring to type out when we want to reference them. Let's rename the columns first.
Step8: Now we want to visualize the data. We don't know what to expect, so let's just pick a couple of variables and see what the data look like.
Step9: So we see that there are entries at a number of different points, but it would be really nice to be able to identify which point corresponds to which variety. We will use another Python library to do this. We'll also set the default style to 'white' which looks better.
Step10: The seaborn library provides a number of different plotting options. One of them is lmplot. It is designed to provide a linear model fit (which we don't want right now), so we'll set the fit_reg option to False so that it doesn't try to fit them.
Note that we need two additional parameters here
Step11: Now we can see that the cluster off to the left all belongs to the Setosa variety. It would be really nice to try plotting the other variables as well. We could do that manually or use a nice shortcut in seaborn called pairplot. This plots the hue column against all possible pairs of the other data columns.
Step12: We see that there are some of these plots that show there might be a way to distinguish the three different varieties. We'll look at how to do that later on, but this gives us a start.
Import Image Data
The last type of data we are going to look at are image data. This type of data provides information about each pixel (or element) in an image. We'll start by working with gray-scale images where each pixel could be a value anywhere between 0 (black) and 255 (white). We'll read in the data then look at how to create the image. This data set are handwritten digits from 0 to 9 that have been digitized. We will eventually try to teach the computer to read the handwritten digits.
Step13: This data set has 65 columns. The first 64 correspond to the grayscale value for each of the pixels in an 8 by 8 image. The last column (the 'target') indicates what digit the image is supposed to be. We'll pick one row to start with (row 61 in this case). We'll use some in-line commenting to explain each step here.
Python Code:
import pandas as pd
Explanation: Class 01
Big Data Ingesting: CSVs, Data frames, and Plots
Welcome to PHY178/CSC171. We will be using the Python language to import data, run machine learning, visualize the results, and communicate those results.
Much of the data that we will use this semester is stored in a CSV file. This stand for Comma-separated Values. The data files are stored in rows- one row per line, with the column values separated by commas. Take a quick look at the data in Class01_diabetes_data.csv by clicking on it in the "Files" tab. You can see that the entries all bunch up together since they are separated by the comma delimeter, not by space.
Where to get data
We will spend quite a bit of time looking for public data as we get going in this class. Here are a couple of places to look for data sets to work with:
* The UCI repository: https://archive.ics.uci.edu/ml/datasets.html
* Kaggle Public Datasets: https://www.kaggle.com/datasets
* Ceasar's repository: https://github.com/caesar0301/awesome-public-datasets
Explore a few of these and try downloading one of the files. For example, the data in the UCI repository can be downloaded from the "Data Folder" links. You have to right-click the file, then save it to the local computer. Their files aren't labeled as "CSV" files (the file extension is .data), but they are CSV files.
How to put it on the cloud
Once you have a data file, you need to upload it to the cloud so that we can import it and plot it. The easiest way to do this is to click on the "Files" link in the toolbar. Click on the "Create" button and then drag the file into the upload box. Put the file in the same folder as the Class01 notebook and you'll be able to load it later on.
Import Regression Data
The first thing we want to do is to import data into our notebook so that we can examine it, evaluate it, and use machine learning to learn from it. We will be using a Python library that makes all of that much easier.
Jupyter Hint: Run the command in the next window to import that Pandas library. You evaluate cells in the notebook by highlighting them (by clicking on them), then pressing Shift-Enter to execute the cell.
End of explanation
diabetes = pd.read_csv('Class01_diabetes_data.csv')
Explanation: The next step will be to copy the data file that we will be using for this tutorial into the same folder as these notes. We will be looking at a couple of different types of data sets. We'll start with a simple data set that appears to be a functional set of data where one output column depends on the input columns of the data. In this case, we're looking at a set of patient data where there are a handful of input variables that may feed into the likelihood that the patient will develop type 2 diabetes. The output column is a quantitative measure of disease progression one year after baseline measurements. (http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html)
End of explanation
diabetes.head()
Explanation: Now that we've loaded the data in, the first thing to do is to take a look at the raw data. We can look at the first 5 rows (the head of the data set) by doing the following.
End of explanation
diabetes.dropna(inplace=True)
diabetes.head()
Explanation: Before we move forward, note that there is a strange value in the first row under 'GLU': NaN. This means 'not a number' and indicates there was a missing value or other problem with the data. Before we move foward, we want to drop any row that has missing values in it. There is a simple pandas command that will do that: dropna(inplace=True). The argument to this command: inplace=True tells the computer to drop the rows in our current dataset, not make a new copy.
End of explanation
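As a quick check that the cleaning step behaved as expected, the short sketch below (using only the diabetes DataFrame and file already loaded above) counts any remaining missing values and reports how many rows were dropped.

# Every column should now report zero missing values
print(diabetes.isnull().sum())

# Compare against the raw file to see how many rows dropna() removed
raw = pd.read_csv('Class01_diabetes_data.csv')
print('Rows dropped: {}'.format(len(raw) - len(diabetes)))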
diabetes.plot(x='Age',y='Target',kind='scatter')
Explanation: So we see the first row is gone. That's what we wanted. However, this doesn't really tell us much by itself. It is better to start investigating how the output variable ('Target' in this case) depends on the inputs. We'll visualize the data one at a time to look at this. We'll make a scatter plot where we look at the Target as a function of the Age column. The first entry provides the 'x' values where the second provides the 'y' values. The final input tells the plotting software to plot the data points as dots, not connected lines. We'll almost always use this feature.
End of explanation
diabetes.plot(x='Sex',y='Target',kind='scatter')
diabetes.plot(x='BMI',y='Target',kind='scatter')
diabetes.plot(x='BP',y='Target',kind='scatter')
diabetes.plot(x='TC',y='Target',kind='scatter')
diabetes.plot(x='LDL',y='Target',kind='scatter')
diabetes.plot(x='HDL',y='Target',kind='scatter')
diabetes.plot(x='TCH',y='Target',kind='scatter')
diabetes.plot(x='LTG',y='Target',kind='scatter')
diabetes.plot(x='GLU',y='Target',kind='scatter')
Explanation: This doesn't tell us much. It looks like there isn't a large dependence on age - otherwise we would have seen something more specific than a large blob of data. Let's try other inputs. We'll plot a bunch of them in a row.
Jupyter Hint: Clicking in the white space next to the output cell will expand and contract the output contents. This is helpful when you have lots of output.
End of explanation
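As an aside, the repeated plot calls above can also be written as a loop; the sketch below assumes the same column names used in the plots above and simply produces the same figures in a more compact way.

# Loop over the input columns and make one Target-vs-column scatter plot for each
input_columns = ['Age', 'Sex', 'BMI', 'BP', 'TC', 'LDL', 'HDL', 'TCH', 'LTG', 'GLU']
for col in input_columns:
    diabetes.plot(x=col, y='Target', kind='scatter', title=col)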
irisDF = pd.read_csv('Class01_iris_data.csv')
irisDF.head()
Explanation: It looks like for some of these inputs, like BMI, as the BMI goes up, so does the Target.
Import Classification Data
There is another type of data set where we have any number of input variables, but the output is no longer a continuous number, but rather it is a class. By that we mean that it is one of a finite number of possibilities. For example, in this next data set, we are looking at the characteristics of three different iris flowers. The measurements apply to one of the three types:
* Setosa
* Versicolour
* Virginica
Let's take a look at this data set and see what it takes to visualize it. First load the data in and inspect the first few rows.
End of explanation
irisDF.columns=['sepalLen','sepalWid','petalLen','petalWid','target']
irisDF.head()
Explanation: As you can see, the 'target' column is no longer numerical, but a text entry that is one of the three possible iris varieties. We also see that the default column headings are a bit long and will get tiring to type out when we want to reference them. Let's rename the columns first.
End of explanation
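Before plotting, it can help to confirm which classes are actually present; this small sketch simply lists the distinct values in the renamed target column and how many rows belong to each.

# List the iris varieties and count how many rows there are of each
print(irisDF['target'].unique())
print(irisDF['target'].value_counts())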
irisDF.plot(x='sepalLen',y='sepalWid',kind='scatter')
Explanation: Now we want to visualize the data. We don't know what to expect, so let's just pick a couple of variables and see what the data look like.
End of explanation
import seaborn as sns
sns.set_style('white')
Explanation: So we see that there are entries at a number of different points, but it would be really nice to be able to identify which point corresponds to which variety. We will use another python library to do this. We'll also set the default style to 'white' which looks better.
End of explanation
sns.lmplot(x='sepalLen', y='sepalWid', data=irisDF, hue='target', fit_reg=False)
Explanation: The seaborn library provides a number of different plotting options. One of them is lmplot. It is designed to provide a linear model fit (which we don't want right now), so we'll set the fit_reg option to False so that it doesn't try to fit them.
Note that we need two additional parameters here: the first is to tell seaborn to use the irisDF data. That means it will look in that data set for the x and y columns we provide. The second is the hue option. This tells seaborn what column to use to determine the color (or hue) of the points. In this case, it will notice that there are three different options in that column and color them appropriately.
End of explanation
sns.pairplot(irisDF, hue="target")
Explanation: Now we can see that the cluster off to the left all belongs to the Setosa variety. It would be really nice to try plotting the other variables as well. We could do that manually or use a nice shortcut in seaborn called pairplot. This plots the hue column against all possible pairs of the other data columns.
End of explanation
digitDF = pd.read_csv('Class01_digits_data.csv')
digitDF.head()
Explanation: We see that there are some of these plots that show there might be a way to distinguish the three different varieties. We'll look at how to do that later on, but this gives us a start.
Import Image Data
The last type of data we are going to look at are image data. This type of data provides information about each pixel (or element) in an image. We'll start by working with gray-scale images where each pixel could be a value anywhere between 0 (black) and 255 (white). We'll read in the data then look at how to create the image. This data set are handwritten digits from 0 to 9 that have been digitized. We will eventually try to teach the computer to read the handwritten digits.
End of explanation
testnum = 61
#
# First, get the first 64 columns which correspond to the image data
#
testimage = digitDF.loc[testnum][0:64]
#
# Then reshape this from a 1 by 64 array into a matrix that is 8 by 8.
#
testimage = testimage.values.reshape((8,8))
#
# We'll print out what the image is supposed to be. Note the format of the print statement.
# The '{}' means 'insert the argument from the format here'.
# The .format means 'pass these values into the string.
#
print('Expected Digit: {}'.format(digitDF.loc[testnum][64]))
#
# Finally, we need one more library to plot the images.
#
import matplotlib.pyplot as plt
#
# We tell Python to plot a gray scale image, then to show our reshaped data as an image.
#
plt.gray()
plt.matshow(testimage)
Explanation: This data set has 65 columns. The first 64 correspond to the grayscale value for each of the pixels in an 8 by 8 image. The last column (the 'target') indicates what digit the image is supposed to be. We'll pick one row to start with (row 61 in this case). We'll use some in-line commenting to explain each step here.
End of explanation
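To get a feel for the whole digits set rather than a single row, the sketch below (which assumes the digit label is stored in the last column, as described above) counts the examples of each digit and displays the first few images side by side.

# Count how many examples of each digit the data set contains
label_column = digitDF.columns[-1]
print(digitDF[label_column].value_counts())

# Display the first four digits in a row of small panels
fig, axes = plt.subplots(1, 4)
for i, ax in enumerate(axes):
    image = digitDF.iloc[i, 0:64].values.reshape((8, 8))
    ax.matshow(image)
    ax.set_title('Digit: {}'.format(digitDF.iloc[i, 64]))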
<END_TASK> |
15,849 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial 07 - Non linear Elliptic problem
Keywords
Step1: 3. Affine Decomposition
For this problem the affine decomposition is straightforward
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the NonlinearElliptic class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | <ASSISTANT_TASK:>
Python Code:
from dolfin import *
from rbnics import *
Explanation: Tutorial 07 - Non linear Elliptic problem
Keywords: EIM, POD-Galerkin
1. Introduction
In this tutorial, we consider a non linear elliptic problem in a two-dimensional spatial domain $\Omega=(0,1)^2$. We impose a homogeneous Dirichlet condition on the boundary $\partial\Omega$. The source term is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = 100\sin(2\pi x_0)\cos(2\pi x_1) \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
This problem is characterized by two parameters. The first parameter $\mu_0$ controls the strength of the sink term and the second parameter $\mu_1$ the strength of the nonlinearity. The range of the two parameters is the following:
$$
\mu_0,\mu_1\in[0.01,10.0]
$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
on the parameter domain
$$
\mathbb{P}=[0.01,10]^2.
$$
In order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method. In order to preserve the affine decomposition assumption, the empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the solution in the domain $\Omega$.
The strong formulation of the parametrized problem is given by:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})$ such that</center>
$$ -\nabla^2u(\boldsymbol{\mu})+\frac{\mu_0}{\mu_1}(\exp{\mu_1u(\boldsymbol{\mu})}-1)=g(\boldsymbol{x}; \boldsymbol{\mu})$$
<br>
The corresponding weak formulation reads:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)+c\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \{v\in H^1(\Omega) : v|_{\partial\Omega}=0\}
$$
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u, v;\boldsymbol{\mu})=\int_{\Omega} \nabla u\cdot \nabla v \ d\boldsymbol{x},$$
the parametrized bilinear form $c(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v;\boldsymbol{\mu})=\mu_0\int_{\Omega} \frac{1}{\mu_1}\big(\exp{\mu_1u} - 1\big)v \ d\boldsymbol{x},$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v; \boldsymbol{\mu})= \int_{\Omega}g(\boldsymbol{x}; \boldsymbol{\mu})v \ d\boldsymbol{x}.$$
The output of interest
$$s(\boldsymbol{\mu}) = \int_{\Omega} u(\boldsymbol{\mu}) \ d\boldsymbol{x}$$
is computed for each $\boldsymbol{\mu}$.
End of explanation
@EIM("online")
@ExactParametrizedFunctions("offline")
class NonlinearElliptic(NonlinearEllipticProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearEllipticProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
self.u = self._solution
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Store the forcing term expression
self.f = Expression("sin(2*pi*x[0])*sin(2*pi*x[1])", element=self.V.ufl_element())
# Customize nonlinear solver parameters
self._nonlinear_solver_parameters.update({
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
})
# Return custom problem name
def name(self):
return "NonlinearEllipticEIM"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = 1.
return (theta_a0,)
elif term == "c":
theta_c0 = mu[0]
return (theta_c0,)
elif term == "f":
theta_f0 = 100.
return (theta_f0,)
elif term == "s":
theta_s0 = 1.0
return (theta_s0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
du = self.du
a0 = inner(grad(du), grad(v)) * dx
return (a0,)
elif term == "c":
u = self.u
mu = self.mu
c0 = (exp(mu[1] * u) - 1) / mu[1] * v * dx
return (c0,)
elif term == "dc": # preferred over derivative() computation which does not cancel out trivial mu[1] factors
du = self.du
u = self.u
mu = self.mu
dc0 = exp(mu[1] * u) * du * v * dx
return (dc0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "s":
s0 = v * dx
return (s0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1)]
return (bc0,)
elif term == "inner_product":
du = self.du
x0 = inner(grad(du), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearEllipticProblem)
def CustomizeReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
class ReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearElliptic_Base.__init__(self, truth_problem, **kwargs)
self._nonlinear_solver_parameters.update({
"report": True,
"line_search": "wolfe"
})
return ReducedNonlinearElliptic
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward:
$$a(u,v;\boldsymbol{\mu})=\underbrace{1}_{\Theta^{a}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\nabla u \cdot \nabla v \ d\boldsymbol{x}}_{a_0(u,v)},$$
$$c(u,v;\boldsymbol{\mu})=\underbrace{\mu_0}_{\Theta^{c}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\frac{1}{\mu_1}\big(\exp(\mu_1 u) - 1\big)v \ d\boldsymbol{x}}_{c_0(u,v)},$$
$$f(v; \boldsymbol{\mu}) = \underbrace{100}_{\Theta^{f}_0(\boldsymbol{\mu})} \underbrace{\int_{\Omega}\sin(2\pi x_0)\cos(2\pi x_1)v \ d\boldsymbol{x}}_{f_0(v)}.$$
We will implement the numerical discretization of the problem in the class
class NonlinearElliptic(NonlinearEllipticProblem):
by specifying the coefficients $\Theta^{a}(\boldsymbol{\mu})$, $\Theta^{c}_(\boldsymbol{\mu})$ and $\Theta^{f}(\boldsymbol{\mu})$ in the method
def compute_theta(self, term):
and the bilinear forms $a_(u, v)$, $c(u, v)$ and linear forms $f_(v)$ in
def assemble_operator(self, term):
End of explanation
mesh = Mesh("data/square.xml")
subdomains = MeshFunction("size_t", mesh, "data/square_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/square_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = NonlinearElliptic(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.01, 10.0), (0.01, 10.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the NonlinearElliptic class
End of explanation
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20, EIM=21)
reduction_method.set_tolerance(1e-8, EIM=1e-4)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
reduction_method.initialize_training_set(50, EIM=60)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (0.3, 9.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6. Perform an online solve
End of explanation
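If the scalar output of interest $s(\boldsymbol{\mu})$ is wanted as well, the reduced problem can evaluate it after the online solve; the lines below are only a sketch and assume the compute_output() method provided by recent RBniCS releases.

# Evaluate the output of interest for the current online parameter (assumes RBniCS exposes compute_output)
reduced_output = reduced_problem.compute_output()
print(reduced_output)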
reduction_method.initialize_testing_set(50, EIM=60)
reduction_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation
<END_TASK> |
15,850 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Galaxies that are missing from Simard+2011
Summary
* A total of 44 galaxies are not in galfit sample
* 31/44 are not in the SDSS catalog, so these would not have been targeted by Simard+2011
* this is not true because simard drew from phot catalog. Need to check all.
* 1 arcmin cutouts show
Step1: Galaxies not in SDSS phot catalog
Step2: Galaxies in SDSS but no B/T fit
Step3: Download SDSS Images
Step4: NSAID 69538 (244.060699, 34.258434)
http
Step5: NSAID 143514 (202.163284, 11.387049)
(too bright)
NSAID 163615 (202.697357, 11.200765)
(too bright)
NSAID 146832 (243.526657, 34.870651)
BINNED1 SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 146875 (244.288803, 34.878895)
DEBLENDED_AT_EDGE BINNED1 NOTCHECKED INTERP CHILD EDGE
r = 13.92
(too bright)
NSAID 165409 (220.235657, 3.527517)
DEBLEND_NOPEAK INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP CHILD
r = 18.82
(deblended and too faint)
NSAID 166699 (241.500229, 22.641125)
BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 15.01, ext_r = 0.18
(not sure)
NSAID 146638 (241.287476, 17.729904)
STATIONARY BINNED1 SATURATED INTERP COSMIC_RAY CHILD
r = 13.57, ext_r = .13
(too bright, saturated)
NSAID 146659 (241.416641, 18.055758)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 14.52, ext_r = 0.14
(not sure)
NSAID 146664 (241.435760, 17.715572)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 14.65, ext_r = 0.13
(not sure)
not in simard
NSAID 140139 (175.954529, 19.968401)
DEBLEND_DEGENERATE PSF_FLUX_INTERP DEBLENDED_AT_EDGE BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
r = 13.78, ext_r = 0.06
(too bright)
NSAID 140160 (176.071716, 20.223295)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.90, ext_r = 0.06
(too bright)
NSAID 140174 (176.204865, 19.795046)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.18, ext_r = 0.07
(too bright)
NSAID 140175 (176.195999, 20.125084)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.38, ext_r = 0.07
not in simard
NSAID 140187 (176.228104, 20.017143)
STATIONARY BINNED1 INTERP CHILD
r = 16.19, ext_r = 0.08
(not sure)
not in simard
NSAID 146094 (230.452133, 8.410197)
STATIONARY BINNED1 CHILD
r = 15.86, ext_r = 0.10
(not sure)
IN SIMARD!!!
NSAID 146121 (230.750824, 8.465475)
MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 15.80, ext_r = 0.09
NSAID 146127 (230.785812, 8.334576)
PSF_FLUX_INTERP INTERP_CENTER STATIONARY BINNED1 INTERP NOPETRO NODEBLEND CHILD BLENDED
r = 17.20, ext_r = 0.09
(blended?)
NSAID 146130 (230.800995, 8.549866)
BINNED1 INTERP COSMIC_RAY CHILD
r = 15.36, ext_r = 0.09
(not sure, maybe blended?)
NSAID 145965 (228.749756, 6.804669)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO CHILD
r = 16.70, ext_r = 0.10
(no petro)
NSAID 145984 (229.076614, 6.803605)
BINNED1 INTERP COSMIC_RAY CHILD
r = 15.72, ext_r = 0.1
(not sure)
NSAID 145998 (229.185364, 7.021626)
NSAID 145999 (229.187805, 7.055664)
NSAID 146012 (229.295181, 6.941795)
NSAID 146041 (229.713806, 6.435888)
NSAID 166042 (228.910904, 8.302397)
NSAID 166044 (228.936951, 6.958703)
NSAID 166083 (229.217957, 6.539137)
NSAID 142797 (195.073654, 27.955275)
NSAID 142819 (195.169479, 28.519848)
NSAID 142833 (195.214752, 28.042875)
NSAID 162838 (195.280670, 28.121592)
Oh No!
seems like galaxies that are in simard are not in my catalog | <ASSISTANT_TASK:>
Python Code:
%run ~/Dropbox/pythonCode/LCSanalyzeblue.py
t = s.galfitflag & s.lirflag & s.sizeflag & ~s.agnflag & s.sbflag
galfitnogim = t & ~s.gim2dflag
sum(galfitnogim)
Explanation: Galaxies that are missing from Simard+2011
Summary
* A total of 44 galaxies are not in galfit sample
* 31/44 are not in the SDSS catalog, so these would not have been targeted by Simard+2011
* this is not true because simard drew from phot catalog. Need to check all.
* 1 arcmin cutouts show: 5 have a bright star overlapping or nearby the galaxy, 2
have a close companion.
* 69538 has problem with NSA coords
* 68342 - not in DR7 for some reason
By galaxy:
NSAID 70630 (202.306824, 11.275839)
STATIONARY BAD_MOVING_FIT BINNED1 INTERP COSMIC_RAY CHILD
r = 13.87 (too bright)
NSAID 70685 (202.269455, 12.238585)
DEBLENDED_AT_EDGE STATIONARY MOVED BINNED1 DEBLENDED_AS_PSF INTERP CHILD,
affected by point source that is offset from center of galaxy. might a foreground star.
(blended)
NSAID 43712 (244.129181, 35.708172)
BINNED1 INTERP COSMIC_RAY CHILD
r = 13.66 (too bright)
NSAID 69538 (244.060699, 34.258434)
NSA has problem with coords, chose nearby pt source rather than galaxy
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 18.83 (too faint)
NSAID 18158 (219.578888, 3.639615)
PSF_FLUX_INTERP INTERP_CENTER BAD_MOVING_FIT BINNED1 NOTCHECKED SATURATED INTERP COSMIC_RAY CHILD
r = 15.31, ext_r = 0.11
(not sure)
NSAID 68283 (242.047577, 24.507439)
not in dr7?
NSAID 68286 (241.749313, 24.160772)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO NODEBLEND CHILD BLENDED PrimTarget
r = 17.66, ext_r = 0.2 (too faint)
NSAID 68299 (240.918945, 24.602676)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 15.4, ext_r = 0.2
(not sure) why this is not in simard
NSAID 68342 (241.297867, 24.960102)
does not come up under dr7 search. get nearby object instead
NSAID 113068 (175.995667, 20.077011)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 13.74, ext_r = 0.06 (too bright)
NSAID 72631 (230.999481, 8.508963)
MAYBE_CR BAD_MOVING_FIT MOVED BINNED1 INTERP CHILD
Type probably not 3
r = 16.98, ext_r = 0.1
NSAID 103927 (194.490204, 27.443319)
STATIONARY BINNED1 INTERP CHILD
r = 17.64, ext_r = 0.02 (maybe petro r is too faint?)
(too faint?)
NSAID 103966 (194.025421, 27.677467)
DEBLEND_DEGENERATE BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
r = 15.13, ext_r = 0.02
(not sure) why this isn't in simard
End of explanation
s.s.ISDSS[galfitnogim]
print sum(s.s.ISDSS[galfitnogim] == -1)
Explanation: Galaxies not in SDSS phot catalog
End of explanation
galfitsdssnogim = galfitnogim & (s.s.ISDSS != -1)
sum(galfitsdssnogim)
s.s.NSAID[galfitsdssnogim]
Explanation: Galaxies in SDSS but no B/T fit
End of explanation
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table
try:
# Python 3.x
from urllib.parse import urlencode
from urllib.request import urlretrieve
except ImportError:
# Python 2.x
from urllib import urlencode
from urllib import urlretrieve
import IPython.display
r = 22.5 - 2.5*log10(s.s.NMGY[:,4])
flag = galfitnogim & (r >= 14.) & (r <= 18.)
print sum(flag)
ra = s.s.RA[flag]
dec = s.s.DEC[flag]
ids = s.s.NSAID[flag]
coords = SkyCoord(ra*u.deg, dec*u.deg, frame='icrs')
testcoord = coords[0]
impix = 100
imsize = 1*u.arcmin
cutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'
for i in range(len(coords.ra)):
query_string = urlencode(dict(ra=coords[i].ra.deg,
dec=coords[i].dec.deg,
width=impix, height=impix,
scale=imsize.to(u.arcsec).value/impix))
url = cutoutbaseurl + '?' + query_string
# this downloads the image to your disk
urlretrieve(url, 'images/'+str(ids[i])+'_SDSS_cutout.jpg')
print 'NSAID %i (%10.6f, %10.6f)'%(ids[i],ra[i],dec[i])
t = IPython.display.Image('images/'+str(ids[i])+'_SDSS_cutout.jpg')
IPython.display.display(t)
for i in range(10,len(ids)):
print '* NSAID %i (%10.6f, %10.6f)'%(ids[i],ra[i],dec[i])
print 'http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=%.5f&dec=%.5f'%(ra[i],dec[i])
Explanation: Download SDSS Images
End of explanation
for i in range(len(coords.ra)):
query_string = urlencode(dict(ra=coords[i].ra.deg,
dec=coords[i].dec.deg,
width=impix, height=impix,
scale=imsize.to(u.arcsec).value/impix))
url = cutoutbaseurl + '?' + query_string
# this downloads the image to your disk
urlretrieve(url, 'images/'+str(nsaids[i])+'_SDSS_cutout.jpg')
print i, nsaids[i],coords[i].ra,coords[i].dec
print 'NSAID %i (%10.6f, %10.6f)'%(nsaids[i],coords[i].ra.deg,coords[i].dec)
t = IPython.display.Image('images/'+str(nsaids[i])+'_SDSS_cutout.jpg')
IPython.display.display(t)
for i in range(len(coords.ra)):
print 'NSAID %i (%10.6f, %10.6f)'%(ids[i],ra[i],dec[i])
ids = where(galfitnogim & (s.s.ISDSS == -1))
print ids
Explanation: NSAID 69538 (244.060699, 34.258434)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=244.06070&dec=34.25843
too faint according to DR7 catalog
NSAID 18158 (219.578888, 3.639615)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=219.57889&dec=3.63961
PSF_FLUX_INTERP INTERP_CENTER BAD_MOVING_FIT BINNED1 NOTCHECKED SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 165409 (220.235657, 3.527517)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=220.23566&dec=3.52752
DEBLEND_NOPEAK INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP CHILD
(blended)
NSAID 68283 (242.047577, 24.507439)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=242.04758&dec=24.50744
not in DR7
NSAID 68286 (241.749313, 24.160772)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=241.74931&dec=24.16077
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO NODEBLEND CHILD BLENDED
NOT IN SIMARD
(too faint maybe?)
NSAID 68299 (240.918945, 24.602676)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=240.91895&dec=24.60268
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
(not sure)
NOT IN SIMARD
NSAID 68342 (241.297867, 24.960102)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=241.29787&dec=24.96010
TOO_FEW_GOOD_DETECTIONS PSF_FLUX_INTERP INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP NOPETRO CHILD
BAD COORDS in DR7? Image on website above does not match with galaxy.
NSAID 166124 (230.213974, 8.623065)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=230.21397&dec=8.62306
BINNED1 INTERP CHILD
NOT IN SIMARD CAT
(not sure)
NSAID 146012 (229.295181, 6.941795)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=229.29518&dec=6.94179
BINNED1 NOTCHECKED SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 166042 (228.910904, 8.302397)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=228.91090&dec=8.30240
DEBLEND_DEGENERATE PSF_FLUX_INTERP BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
NOT IN SIMARD
(not sure)
NSAID 142819 (195.169479, 28.519848)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=195.16948&dec=28.51985
DEBLENDED_AT_EDGE BAD_MOVING_FIT BINNED1 DEBLENDED_AS_PSF NOTCHECKED INTERP NODEBLEND CHILD BLENDED EDGE
(too faint, blended)
End of explanation
lcs = fits.getdata('/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all_size.fits')
gim = fits.getdata('/Users/rfinn/research/SimardSDSS2011/table1.fits')
from astropy.coordinates import SkyCoord
from astropy import units as u
%matplotlib inline
lcat = SkyCoord(lcs.RA*u.degree,lcs.DEC*u.degree,frame='icrs')
gcat = SkyCoord(gim._RAJ2000*u.degree,gim._DEJ2000*u.degree,frame='icrs')
index,dist2d,dist3d = lcat.match_to_catalog_sky(gcat)
# quick look at the angular separation of each match
plt.figure()
plt.plot(dist2d.arcsec, 'k.')
plt.ylabel('match separation (arcsec)')
# only keep matches with RA and Dec within 3 arcsec
matchflag = dist2d.degree < 3./3600
matchedarray1=np.zeros(len(lcat),dtype=gim.dtype)
matchedarray1[matchflag] = gim[index[matchflag]]
print 'percent of LCS galaxies matched = %.1f'%(sum(matchflag)*1./len(matchflag)*100.)
# get rid of names that start with __
# these cause trouble in the analysis program
t = []
for a in matchedarray1.dtype.names:
t.append(a)
for i in range(len(t)):
if t[i].startswith('__'):
t[i] = t[i][2:]
t = tuple(t)
#print t
matchedarray1.dtype.names = t
outfile = '/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all.gim2d.tab1.fits'
fits.writeto(outfile,matchedarray1,overwrite=True)
diff = (lcs.B_T_r - matchedarray1['B_T_r'])
bad_matches = (abs(diff) > .01) & matchflag
print 'number of bad matches = ',sum(bad_matches)
s.s.NSAID[bad_matches]
plt.figure()
plt.plot(lcs.RA[bad_matches],lcs.DEC[bad_matches],'ko')
print lcs.CLUSTER[bad_matches]
print sum(s.galfitflag[bad_matches])
print sum(diff < 0.)
outfile = '/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all.gim2d.tab1.fits'
gdat = fits.getdata(outfile)
gdat.B_T_r
Explanation: NSAID 143514 (202.163284, 11.387049)
(too bright)
NSAID 163615 (202.697357, 11.200765)
(too bright)
NSAID 146832 (243.526657, 34.870651)
BINNED1 SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 146875 (244.288803, 34.878895)
DEBLENDED_AT_EDGE BINNED1 NOTCHECKED INTERP CHILD EDGE
r = 13.92
(too bright)
NSAID 165409 (220.235657, 3.527517)
DEBLEND_NOPEAK INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP CHILD
r = 18.82
(deblended and too faint)
NSAID 166699 (241.500229, 22.641125)
BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 15.01, ext_r = 0.18
(not sure)
NSAID 146638 (241.287476, 17.729904)
STATIONARY BINNED1 SATURATED INTERP COSMIC_RAY CHILD
r = 13.57, ext_r = .13
(too bright, saturated)
NSAID 146659 (241.416641, 18.055758)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 14.52, ext_r = 0.14
(not sure)
NSAID 146664 (241.435760, 17.715572)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 14.65, ext_r = 0.13
(not sure)
not in simard
NSAID 140139 (175.954529, 19.968401)
DEBLEND_DEGENERATE PSF_FLUX_INTERP DEBLENDED_AT_EDGE BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
r = 13.78, ext_r = 0.06
(too bright)
NSAID 140160 (176.071716, 20.223295)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.90, ext_r = 0.06
(too bright)
NSAID 140174 (176.204865, 19.795046)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.18, ext_r = 0.07
(too bright)
NSAID 140175 (176.195999, 20.125084)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.38, ext_r = 0.07
not in simard
NSAID 140187 (176.228104, 20.017143)
STATIONARY BINNED1 INTERP CHILD
r = 16.19, ext_r = 0.08
(not sure)
not in simard
NSAID 146094 (230.452133, 8.410197)
STATIONARY BINNED1 CHILD
r = 15.86, ext_r = 0.10
(not sure)
IN SIMARD!!!
NSAID 146121 (230.750824, 8.465475)
MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 15.80, ext_r = 0.09
NSAID 146127 (230.785812, 8.334576)
PSF_FLUX_INTERP INTERP_CENTER STATIONARY BINNED1 INTERP NOPETRO NODEBLEND CHILD BLENDED
r = 17.20, ext_r = 0.09
(blended?)
NSAID 146130 (230.800995, 8.549866)
BINNED1 INTERP COSMIC_RAY CHILD
r = 15.36, ext_r = 0.09
(not sure, maybe blended?)
NSAID 145965 (228.749756, 6.804669)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO CHILD
r = 16.70, ext_r = 0.10
(no petro)
NSAID 145984 (229.076614, 6.803605)
BINNED1 INTERP COSMIC_RAY CHILD
r = 15.72, ext_r = 0.1
(not sure)
NSAID 145998 (229.185364, 7.021626)
NSAID 145999 (229.187805, 7.055664)
NSAID 146012 (229.295181, 6.941795)
NSAID 146041 (229.713806, 6.435888)
NSAID 166042 (228.910904, 8.302397)
NSAID 166044 (228.936951, 6.958703)
NSAID 166083 (229.217957, 6.539137)
NSAID 142797 (195.073654, 27.955275)
NSAID 142819 (195.169479, 28.519848)
NSAID 142833 (195.214752, 28.042875)
NSAID 162838 (195.280670, 28.121592)
Oh No!
seems like galaxies that are in simard are not in my catalog :(
Going to read in my catalog
read in simard catalog
match them
and then see what's going on
End of explanation
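As a quick sanity check on the cross-match above, a short sketch like this (using only the matchflag array and the RA/DEC columns already used above) counts the LCS galaxies left without a Simard counterpart and shows where they fall on the sky.

# Galaxies with no Simard counterpart within the 3 arcsec tolerance
print('unmatched LCS galaxies = %i' % sum(~matchflag))
plt.figure()
plt.plot(lcs.RA[matchflag], lcs.DEC[matchflag], 'k.', alpha=0.2, label='matched')
plt.plot(lcs.RA[~matchflag], lcs.DEC[~matchflag], 'r.', label='no Simard match')
plt.legend()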
<END_TASK> |
15,851 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label
Step1: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
Step2: Using Sector is equivalent to Fundamentals.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
Step3: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like
Step4: Let's put each of our classifiers into a pipeline and run it to see what they look like. | <ASSISTANT_TASK:>
Python Code:
from quantopian.pipeline.data import Fundamentals
# Since the underlying data of Fundamentals.exchange_id
# is of type string, .latest returns a Classifier
exchange = Fundamentals.exchange_id.latest
Explanation: Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label:
F(asset, timestamp) -> category
An example of a classifier producing a string output is the exchange ID of a security. To create this classifier, we'll have to import Fundamentals.exchange_id and use the latest attribute to instantiate our classifier:
End of explanation
from quantopian.pipeline.classifiers.fundamentals import Sector
morningstar_sector = Sector()
Explanation: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
End of explanation
nyse_filter = exchange.eq('NYS')
Explanation: Using Sector is equivalent to Fundamentals.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
End of explanation
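For reference, the other Filter-producing Classifier methods mentioned above work the same way; the lines below are a sketch of isnull and startswith applied to the same exchange classifier (startswith only applies to string-typed classifiers such as this one).

# Securities whose latest exchange_id is missing
missing_exchange = exchange.isnull()

# Securities whose latest exchange_id begins with 'NYS'
nyse_like = exchange.startswith('NYS')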
from quantopian.pipeline.factors import AverageDollarVolume

dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
Explanation: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like:
End of explanation
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
from quantopian.research import run_pipeline

def make_pipeline():
exchange = Fundamentals.exchange_id.latest
nyse_filter = exchange.eq('NYS')
morningstar_sector = Sector()
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
return Pipeline(
columns={
'exchange': exchange,
'sector_code': morningstar_sector,
'dollar_volume_decile': dollar_volume_decile
},
screen=(nyse_filter & top_decile)
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print 'Number of securities that passed the filter: %d' % len(result)
result.head(5)
Explanation: Let's put each of our classifiers into a pipeline and run it to see what they look like.
End of explanation
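Since run_pipeline returns an ordinary pandas DataFrame, the screened universe can be summarised further; a small sketch assuming the columns defined in make_pipeline above:

# How the NYSE top-decile universe breaks down by Morningstar sector code
print(result['sector_code'].value_counts())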
<END_TASK> |
15,852 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="https
Step1: Az ismรฉtlรฉsi szรกm elรฉg magas.
Vรฉletlenszรกmokat hasznรกlunk a 2D-s pontok generรกlรกsรกhoz. n pontot generรกlunk a (0.451 x - y + 2 = 0) egyens kรถzelรฉben. Tovรกbbi k pontot generรกlunk "zajkรฉnt".
Step2: Jelenรญtsรผk meg a generรกlt pontokat รฉs az egyenest.
Step3: Futtasa tรถbbszรถr a fenti kรณdblokkot รฉs vegye รฉszre az egyes egyenletรฉnek vรกltozรกsรกt.
Vรฉgรผl jelenรญtsรผk meg a legjobban illeszkedล egyenest. | <ASSISTANT_TASK:>
Python Code:
# Python packages used
import numpy as np # for array operations
from matplotlib import pyplot as plt # for graphic output
from math import sqrt
# parameters
tolerance = 2.5 # max distance from the plane to accept point
rep = 1000 # number of repetition
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/ransac_line.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
RANSAC 2D line
In this lesson we demonstrate the RANSAC (RANdom SAmple Consensus) method on a simple example.
RANSAC is a robust method for finding geometric primitives in a point cloud. To find a line in a planar point set, the following steps are carried out:
select two points at random from the point set
write the equation of the line passing through the two points
find the points lying close to the line (within a given tolerance)
if the number of nearby points is larger than the maximum so far, keep this as the best solution so far
repeat from the first step until the given number of repetitions is reached
This algorithm is not deterministic, i.e. running it several times may give different results. However, if the number of repetitions is large enough, the solutions will be nearly identical.
The algorithm depends on two constant values: the maximal distance from the line and the number of repetitions.
End of explanation
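The inlier test used later relies on the normalized homogeneous line equation: for a line a*x + b*y + c = 0 with a^2 + b^2 = 1, the dot product of [x, y, 1] with [a, b, c] is the signed distance of the point from the line. The sketch below uses made-up numbers (not the data generated in the next cell) just to illustrate the test.

# Signed distance of a point from a normalized line, using homogeneous coordinates
line = np.array([0.6, -0.8, 2.0])           # 0.6x - 0.8y + 2 = 0, already normalized (0.6**2 + 0.8**2 = 1)
point = np.array([3.0, 1.0, 1.0])           # the point (3, 1) in homogeneous form
distance = np.dot(point, line)
print(f'signed distance = {distance:.2f}')  # points with |distance| < tolerance count as inliers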
n = 100 # number of inliers
k = 200 # number of outliers
range = 100.0 # range of x, y coordinates
l = [0.451, -1.0, 2.0] # line equation ax + by + c = 0
x = np.zeros(n+k)
y = np.zeros(n+k)
# points near to the line
x[:n] = np.random.rand(n) * range
y[:n] = -l[0] / l[1] * x[:n] - l[2] / l[1] + (np.random.rand(n) * 2 * tolerance - tolerance)
# outlier points (noise)
x[n:] = np.random.rand(k) * range
y[n:] = np.random.rand(k) * range
points = np.c_[x, y, np.full(n+k, 1.0)] # put together inliers and outliers
Explanation: Az ismรฉtlรฉsi szรกm elรฉg magas.
Vรฉletlenszรกmokat hasznรกlunk a 2D-s pontok generรกlรกsรกhoz. n pontot generรกlunk a (0.451 x - y + 2 = 0) egyens kรถzelรฉben. Tovรกbbi k pontot generรกlunk "zajkรฉnt".
End of explanation
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot()
ax.scatter(x, y)
ax.plot([0,100], [-l[2] / l[1], -l[0] / l[1] * 100 - l[2] / l[1]], 'r', label='original line')
_ = ax.set_title('Points and the line')
best_n = 0 # number of points on the best fit line so far
best_i = 0 # iteration index of best fit line so far
best_inliers = np.array([]) # indices of inliers of the best fit line so far
for i in np.arange(rep):
# select two random points
p = [] # list of random indices for points
while len(p) != 2:
p = list(set(np.random.randint(n+k, size=2))) # remove repeated random integers
p1 = points[p] # randomly selected points
x1 = p1[:,0] # x coordinates
y1 = p1[:,1] # y coordinates
# line equation from the two points using homogenouos coordinates
l1 = np.array([y1[0] - y1[1], x1[1] - x1[0], x1[0] * y1[1] - x1[1] * y1[0]])
l1 = l1 / sqrt(l1[0]**2 + l1[1]**2) # normalize
# select close points
inliers = points[np.abs(np.dot(points, l1)) < tolerance]
if inliers.shape[0] > best_n:
# better solution found
best_n = inliers.shape[0]
best_i = i
best_inliers = inliers.copy()
best_line = l1.copy()
print(f'Best solution after {best_i} iterations, number of points on the line: {best_n}, line: {best_line}')
Explanation: Jelenรญtsรผk meg a generรกlt pontokat รฉs az egyenest.
End of explanation
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot()
ax.scatter(x, y)
ax.scatter(best_inliers[:,0], best_inliers[:,1], c='g')
ax.plot([0,100], [-l[2] / l[1], -l[0] / l[1] * 100 - l[2] / l[1]], 'g', label='original line')
ax.plot([0,100], [-best_line[2] / best_line[1], -best_line[0] / best_line[1] * 100 - best_line[2] / best_line[1]], 'r', label='best fit line')
ax.legend()
_ = ax.set_title('RANSAC line')
Explanation: Run the code block above several times and notice how the line's equation changes.
Finally, display the best-fitting line.
End of explanation
<END_TASK> |
15,853 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described.
<END_TASK>
<USER_TASK:>
Description:
Input are two strings a and b consisting only of 1s and 0s.
Perform binary XOR on these inputs and return result also as a string.
This is how the function will work:
string_xor('010', '110')
'100'
| <ASSISTANT_TASK:>
Python Code:
from typing import List
def string_xor(a: str, b: str) -> str:
def xor(i, j):
if i == j:
return '0'
else:
return '1'
return ''.join(xor(x, y) for x, y in zip(a, b))
<END_TASK> |
15,854 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This is a package to simply create tables to be displayed in IPython. The tables are rendered as both HTML and LaTeX, so they work both in the browser and if you convert the notebook to LaTeX.
The simplest case is a plain grid
Step1: You can add a header row like this
Step2: Table also accepts dicts (or any mapping) with keys as column headers and values as column contents. The order of columns is undefined unless the mapping is an OrderedDict.
Step3: The number of column values (rows) need not be equal
Step4: You can build a table incrementally using Table.append_row(). If you need it, rows also have an append_cell() method.
Step5: You can style cells with the bg_colour and text_colour parameters. This only works in HTML for the moment; if you convert the notebook to LaTeX, the colours will be ignored. | <ASSISTANT_TASK:>
Python Code:
Table((4, 1, 8),
(9, 7, 3),
(5, 2, 6))
Explanation: This is a package to simply create tables to be displayed in IPython. The tables are rendered as both HTML and LaTeX, so they work both in the browser and if you convert the notebook to LaTeX.
The simplest case is a plain grid:
End of explanation
Table(TableHeaderRow('a','b','c'),
(1, 2, 3),
(2, 4, 6),
)
Explanation: You can add a header row like this:
End of explanation
Table({'a': (1, 2),
'b': (2, 4),
'c': (3, 6)})
Explanation: Table also accepts dicts (or any mapping) with keys as column headers and values as column contents. The order of columns is undefined unless the mapping is an OrderedDict.
End of explanation
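Since the column order of a plain dict is undefined here, an OrderedDict can be used when the order matters; this sketch relies only on the mapping behaviour shown above.

from collections import OrderedDict

# Columns come out in insertion order: a, b, c
Table(OrderedDict([('a', (1, 2)),
                   ('b', (2, 4)),
                   ('c', (3, 6))]))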
Table({'a': (1, 2),
'b': (2,),
'c': (3, 6)})
Explanation: The number of column values (rows) need not be equal:
End of explanation
# Computing values
t = Table(TableHeaderRow('number', 'square', 'cube'))
for x in range(1, 11):
t.append_row((x, x**2, x**3))
t
Explanation: You can build a table incrementally using Table.append_row(). If you need it, rows also have an append_cell() method.
End of explanation
# Styling determined by code
t = Table(TableHeaderRow('divisions', 'result'))
num = 55
for x in range(7):
if num < 1:
resultcell = TableCell(num, bg_colour='DarkBlue', text_colour='white')
else:
resultcell = TableCell(num)
t.append_row((x, resultcell))
num /= 3
t
Explanation: You can style cells with the bg_colour and text_colour parameters. This only works in HTML for the moment; if you convert the notebook to LaTeX, the colours will be ignored.
End of explanation
<END_TASK> |
15,855 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size (batch_size). Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step4: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
Step6: Write out the graph for TensorBoard
Step7: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Step8: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. | <ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    '''
    Split character data into training and validation sets, inputs and targets for each set.

    Arguments
    ---------
    chars: character array
    batch_size: Size of examples in each of batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set

    Returns train_x, train_y, val_x, val_y
    '''
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size (batch_size). Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN putputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
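To make the sliding window concrete, here is a small sketch (again only numpy and the get_batch generator above are assumed) that steps through a toy array:
demo = np.arange(40).reshape((4, 10))          # pretend: 4 sequences, 10 steps each
for window, in get_batch([demo], 5):
    print(window.shape)                        # prints (4, 5) twice: two consecutive windows of 5 steps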
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
Explanation: Write out the graph for TensorBoard
End of explanation
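To actually inspect the graph, TensorBoard can then be launched against that log directory from a terminal (assuming TensorBoard was installed alongside TensorFlow), for example:
tensorboard --logdir=./logs
and the address it prints opened in a browser.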
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
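As a small illustration of that top-N filtering (a sketch that only assumes numpy and the pick_top_n function above), sampling repeatedly from a toy distribution shows that only the largest-probability indices ever come back:
toy_preds = np.array([[0.02, 0.5, 0.3, 0.1, 0.05, 0.03]])
print([pick_top_n(toy_preds, 6, top_n=2) for _ in range(10)])  # only indices 1 and 2 appear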
<END_TASK> |
15,856 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-cc', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-CC
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
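# e.g., with hypothetical values: DOC.set_author("Jane Doe", "jane.doe@example.org")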
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
<END_TASK> |
15,857 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
BigQuery command-line tool
The BigQuery command-line tool is installed as part of the Cloud SDK and can be used to interact with BigQuery. When you use CLI commands in a notebook, the command must be prepended with a !.
View available commands
To view the available commands for the BigQuery command-line tool, use the help command.
Step1: Create a new dataset
A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset. You need to create at least one dataset before loading data into BigQuery.
First, name your new dataset
Step2: The following command creates a new dataset in the US using the ID defined above.
NOTE
Step3: The response should look like the following
Step4: The response should look like the following
Step5: Load data from Cloud Storage to a table
The following example demonstrates how to load a CSV file from Cloud Storage into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Introduction to loading data from Cloud Storage in the BigQuery documentation.
Step6: Run a query
The BigQuery command-line tool has a query command for running queries, but it is recommended to use the magic command for this purpose.
Cleaning Up
The following code deletes the dataset created for this tutorial, including all tables in the dataset. | <ASSISTANT_TASK:>
Python Code:
!bq help
Explanation: BigQuery command-line tool
The BigQuery command-line tool is installed as part of the Cloud SDK and can be used to interact with BigQuery. When you use CLI commands in a notebook, the command must be prepended with a !.
View available commands
To view the available commands for the BigQuery command-line tool, use the help command.
End of explanation
dataset_id = "your_new_dataset"
Explanation: Create a new dataset
A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset. You need to create at least one dataset before loading data into BigQuery.
First, name your new dataset:
End of explanation
!bq --location=US mk --dataset $dataset_id
Explanation: The following command creates a new dataset in the US using the ID defined above.
NOTE: In the examples in this notebook, the dataset_id variable is referenced in the commands using both {} and $. To avoid creating and using variables, replace these interpolated variables with literal values and remove the {} and $ characters.
End of explanation
!bq ls
Explanation: The response should look like the following:
Dataset 'your-project-id:your_new_dataset' successfully created.
List datasets
The following command lists all datasets in your default project.
End of explanation
!bq \
--location=US load \
--autodetect \
--skip_leading_rows=1 \
--source_format=CSV \
{dataset_id}.us_states_local_file \
'resources/us-states.csv'
Explanation: The response should look like the following:
```
datasetId
your_new_dataset
```
Load data from a local file to a table
The following example demonstrates how to load a local CSV file into a new or existing table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Loading Data into BigQuery from a local data source in the BigQuery documentation.
End of explanation
!bq \
--location=US load \
--autodetect \
--skip_leading_rows=1 \
--source_format=CSV \
{dataset_id}.us_states_gcs \
'gs://cloud-samples-data/bigquery/us-states/us-states.csv'
Explanation: Load data from Cloud Storage to a table
The following example demonstrates how to load a CSV file from Cloud Storage into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Introduction to loading data from Cloud Storage in the BigQuery documentation.
End of explanation
!bq rm -r -f --dataset $dataset_id
Explanation: Run a query
The BigQuery command-line tool has a query command for running queries, but it is recommended to use the magic command for this purpose.
Cleaning Up
The following code deletes the dataset created for this tutorial, including all tables in the dataset.
End of explanation
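If you do want to run a query with the command-line tool rather than the magic command, a standard-SQL query string can be passed straight to bq query; for example (a sketch against one of BigQuery's public sample tables):
!bq query --use_legacy_sql=false \
    'SELECT word, SUM(word_count) AS total FROM `bigquery-public-data.samples.shakespeare` GROUP BY word ORDER BY total DESC LIMIT 3'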
<END_TASK> |
15,858 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Relation 1
The form of the damping is given as
Step1: Alternativity one can use
Step2: Muddassar and Gieseler's simplified formula for the environmental damping is
Step3: Relation 2
In Gieseler's Thermal Nonlinearities paper he has the following equation for $\Gamma_0$
$$ \Gamma_0 = \dfrac{64a^2}{3m\bar{v}}P $$
https
Step4: Relation 3
In Chang et al. paper "Cavity opto-mechanics using an optically levitated nanosphere"
They have $\Gamma_0 = \dfrac{\gamma_g}{2} = \dfrac{8}{\pi}\dfrac{P}{\bar{v}r\rho}$
Where
- $\rho$ is the density of the nanoparticle
- $P$ is the pressure of the gas
- $\bar{v}$ is the mean speed of the gas particles
- $r$ is the radius of the nanoparticle
Step5: Also relation 3 (different derivation by Millen et al.)
James Millen derives the following form of the damping due to impinging particles
Step6: This agrees exactly with Chang's result
Step7: Relation 3+ (more damping due to considering emerging particles)
James Millen derives the following form of the damping due to emerging particles
Step8: Plot of all 3 relations and measured data | <ASSISTANT_TASK:>
Python Code:
# constants
k_B = Boltzmann
eta_air = 18.27e-6 # Pa*s (J.T.R. Watson (1995))
d_gas = 0.372e-9 # m, effective diameter of the gas molecules (Sone (2007))
rho_SiO2 = 1800 # kg/m^3, density of the silica nanoparticles
T0 = 300
R = 50e-9 # m
def mfp(P_gas):
mfp_val = k_B*T0/(2**0.5*pi*d_gas**2*P_gas)
return mfp_val
Explanation: Relation 1
The form of the damping is given as:
$$ \Gamma_0 = \dfrac{6 \pi \eta_{air} r}{m} \dfrac{0.619}{0.619 + K_n} (1+ c_k)$$
(Li et al. 2011 - https://arxiv.org/pdf/1101.1283.pdf)
Where:
$\eta_{air}$ is the viscosity of air
$r$ is the radius of the silica nanoparticles
$m$ is the mass of the silica nanoparticles
$K_n$ is the Knudsen number $\dfrac{s}{r}$ where $s$ is the mean free path of the air particles
$c_k$ is a small positive function of $K_n$ which takes the form $(0.31K_n)/(0.785+1.152K_n+K_n^2)$
The mean free path is dependent upon the pressure of the system. The mathematical
form of the mean free path is dependent upon whether the particles under study are considered to be colliding hard spheres or "soft" spheres following a Lennard-Jones potential. In this case, assuming the gas particles to be hard spheres yields the following form,
$$s = \dfrac{k_B T_0}{ \sqrt{2} \pi d_{gas}^2 P_{gas}} $$
(Muddassar - Thesis - Cooling and Squeezing in Levitated Optomechanics 2016)
Where:
$d_{gas}$ is the diameter of the gas particles
$T_0$ is the temperature of the gas
$P_{gas}$ is the pressure of the gas
End of explanation
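As a quick illustrative check (an addition, not part of the original analysis), the Knudsen number for a 50 nm particle at 3 mbar follows directly from the definitions above:
# Illustrative only: Knudsen number K_n = s / r at 3 mbar (300 Pa)
K_n = mfp(300) / R
print(K_n)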
m_gas = 4.81e-26
def mfp_2(P_gas):
mfp_val = eta_air/P_gas * (pi*k_B*T0/(2*m_gas))**0.5
return mfp_val
s = mfp(300) # 3mbar = 300 Pascals
print(s)
s2 = mfp_2(300) # 3mbar = 300 Pascals
print(s2)
def Gamma_env(radius, Pressure_mbar):
mass = rho_SiO2 * 4/3*pi*radius**3
Pressure_pascals = 100*Pressure_mbar
s = mfp(Pressure_pascals)
K_n = s/radius
c_K = 0.31*K_n/(0.785 + 1.152*K_n + K_n**2)
Gamma_0 = 6*pi*eta_air*radius/mass * 0.619/(0.619 + K_n) * (1+c_K)
return Gamma_0
Gamma_env(R, 3)
Explanation: Alternatively one can use:
$$ s = \dfrac{\eta_{air}}{P_{gas}} \sqrt{\dfrac{\pi k_B T_0}{2m}} $$
this produces the same result as the previous form
https://en.wikipedia.org/wiki/Mean_free_path
Where
- $\eta_{air}$ is the viscosity of air
- $m$ is the molecualar mass of air
- $T_0$ is the temperature of the gas
- $P_{gas}$ is the pressure of the gas
The molar mass of air is $28.97\,\mathrm{g/mol}$ and the number of molecules in a mole is Avogadro's number, $6.0221409 \times 10^{23}$; dividing the two gives the molecular mass of air as approximately $4.81 \times 10^{-26}\,\mathrm{kg}$.
End of explanation
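A small sketch (illustrative only) reproducing the molecular mass quoted above:
molar_mass_air = 28.97e-3 # kg/mol
N_A = 6.0221409e23 # molecules per mole
print(molar_mass_air / N_A) # ~4.81e-26 kg, matching m_gas used above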
def Gamma_env_simple(radius, Pressure_mbar):
Pressure_pascals = 100*Pressure_mbar
#Gamma_0 = 0.619*9*pi*eta_air*d_gas**2*Pressure_pascals/(2**0.5*rho_SiO2*k_B*T0*radius)
Gamma_0 = 0.619*9*pi*eta_air*d_gas**2*Pressure_pascals/(2**0.5*rho_SiO2*k_B*T0*radius)
return Gamma_0
Gamma_env_simple(R, 3)
Explanation: Muddassar and Gieseler's simplified formula for the environmental damping is:
$$ \Gamma_0 = 0.619 \dfrac{9 \pi}{\sqrt{2}} \dfrac{\eta_{air}d_{gas}^2}{\rho_{SiO_2} k_B T_0} \dfrac{P_{gas}}{r}$$
This produces the same result as the full unsimplified form for all pressures in the range of interest.
Where:
$\eta_{air}$ is the viscosity of air
$d_{gas}$ is the diameter of the gas particles
$\rho_{SiO_2}$ is the density of the silica nanoparticles
$r$ is the radius of the silica nanoparticles
$T_0$ is the temperature of the gas
$P_{gas}$ is the pressure of the gas
End of explanation
def Gamma_alternative(radius, Pressure_mbar):
Pressure = 100*Pressure_mbar
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
mass= rho_SiO2*4/3*pi*radius**3
Gamma0 = 64*radius**2*Pressure/(3*mass*ave_velocity)
return Gamma0
Gamma_alternative(R, 3)
Explanation: Relation 2
In Gieseler's Thermal Nonlinearities paper he has the following equation for $\Gamma_0$
$$ \Gamma_0 = \dfrac{64a^2}{3m\bar{v}}P $$
https://www.nature.com/nphys/journal/v9/n12/full/nphys2798.html
This appears to be incorrect as it is exactly double that which you get with Chang's formula and James Millen's formula
Where:
- $a$ is the radius of the particle
- $m$ is the mass of the particle
- $\bar{v}$ is the average velocity of the gas particles
Where we can use the following formula for $\bar{v}$
$$ \bar{v} = \sqrt{\dfrac{8k_B T_0}{\pi \mu}} $$
Where:
- $T_0$ is the temperature of the gas
- $\mu$ is the mass of the air molecules
End of explanation
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
ave_velocity
def Gamma_chang(radius, Pressure_mbar):
Pressure = 100*Pressure_mbar
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
Gamma0 = 8*Pressure/(pi*ave_velocity*radius*rho_SiO2)/2
return 2*Gamma0
Gamma_chang(R, 3)
Explanation: Relation 3
In Chang et al. paper "Cavity opto-mechanics using an optically levitated nanosphere"
They have $\Gamma_0 = \dfrac{\gamma_g}{2} = \dfrac{8}{\pi}\dfrac{P}{\bar{v}r\rho}$
Where
- $\rho$ is the density of the nanoparticle
- $P$ is the pressure of the gas
- $\bar{v}$ is the mean speed of the gas particles
- $r$ is the radius of the nanoparticle
End of explanation
def Gamma_Millen_imp(radius, Pressure_mbar):
Pressure = 100*Pressure_mbar
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
mass = rho_SiO2*4/3*pi*radius**3
N = Pressure/(k_B*T0)
Gamma0 = 4*pi*m_gas*N*radius**2*ave_velocity/(3*mass)
return Gamma0
Gamma_Millen_imp(R, 3)
Explanation: Also relation 3 (different derivation by Millen et al.)
James Millen derives the following form of the damping due to impinging particles:
$$ \Gamma^{imp} = \dfrac{4\pi}{3}\dfrac{mNr^2 \bar{v}_{T^{imp}}}{M} $$
https://arxiv.org/abs/1309.3990 -
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.123602
However in their earlier paper http://iopscience.iop.org/article/10.1088/1367-2630/15/1/015001/meta they get double this, which is what Gieseler gets in his thermal non-linearities paper.
Where:
- $m$ is the molecular mass of the gas
- $N$ is the particle density of the gas
- $r$ is the radius of the nanoparticle
- $M$ is the mass of the nanoparticle
- $\bar{v}_{T^{imp}}$ is the mean thermal velocity $\sqrt{\dfrac{8 k_B T^{imp}}{\pi m}}$
Using the ideal gas equation $P = R\rho T$ and $N= \dfrac{\rho}{m}$ with $R=\dfrac{k_B}{m}$ we get $N = \dfrac{P}{k_BT}$
End of explanation
Gamma_chang(R, 3)
Explanation: This agrees exactly with Chang's result
End of explanation
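A quick numerical check of the factor-of-two discrepancy noted above (illustrative only):
print(Gamma_alternative(R, 3) / Gamma_chang(R, 3)) # expect ~2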
def Gamma_Millen_em(radius, Pressure_mbar, T_em):
Pressure = 100*Pressure_mbar
h_prime = m_gas/(k_B*T_em)
mass = rho_SiO2*4/3*pi*radius**3
N = Pressure/(k_B*T_em)
Gamma0 = (m_gas*N*radius**2*pi**(3/2))/(3*np.sqrt(h_prime)*mass)
return Gamma0
def calc_surface_temp_Millen(T_em, T_imp=300):
accomodation_coef = 0.777 # accomodation coefficient of silica (from Nanoscale temp measurement paper)
T_surf = T_imp + (T_em + T_imp)/accomodation_coef
return T_surf
Explanation: Relation 3+ (more damping due to considering emerging particles)
James Millen derives the following form of the damping due to emerging particles:
$\Gamma^{em} = \dfrac{mNr^2\pi^{\frac{3}{2}}}{3\sqrt{h'}M}$
https://arxiv.org/abs/1309.3990 -
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.123602
Where:
- $m$ is the molecular mass of the gas
- $N$ is the particle density of the gas
- $r$ is the radius of the nanoparticle
- $h'$ is $\dfrac{m}{2k_B T_0}$ where $T_0$ is the temperature of the gas
- $M$ is the mass of the nanoparticle
Using the ideal gas equation $P = R\rho T$ and $N= \dfrac{\rho}{m}$ with $R=\dfrac{k_B}{m}$ we get $N = \dfrac{P}{k_BT}$
He also says that this leads to $\Gamma^{em} = \dfrac{\pi}{8}\sqrt{\dfrac{T^{em}}{T^{imp}}}\,\Gamma^{imp}$
From this you get the total effective damping rate is
$$ \Gamma_0 = \Gamma^{em} + \Gamma^{imp} = \dfrac{\pi}{8}\sqrt{\dfrac{T^{em}}{T^{imp}}}\Gamma^{imp} + \Gamma^{imp} $$
Therefore damping rate is higher if you consider this
End of explanation
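As a rough illustration of combining both contributions (the emerging-gas temperature below is an arbitrary assumed value, not a measurement):
T_em_assumed = 350 # K, assumption for illustration only
Gamma_imp = Gamma_Millen_imp(R, 3)
Gamma_total = Gamma_imp * (1 + (np.pi / 8) * np.sqrt(T_em_assumed / T0))
print(Gamma_total)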
P_exp = np.load("Pressure_mbar.npy")
Gamma_exp = np.load("Gamma_radians.npy")
P_G_Dict = dict(zip(P_exp, Gamma_exp))
r = np.linspace(5e-9, 1000e-9, 1000)
P = 3.6 # mbar
alpha=0.5
plt.figure(figsize=[10, 10])
plt.loglog(r, Gamma_env_simple(r, P), 'k', label="Rashid/Gieseler Full form", alpha=alpha)
#plt.semilogy(r, Gamma_env_simple(r, P), 'grey', label="Rashid/Gieseler simplfied form", alpha=alpha)
plt.loglog(r, Gamma_alternative(r, P), label="Gieseler Thermal Non-linearities form", alpha=alpha)
plt.loglog(r, Gamma_chang(r, P), label="Chang form", alpha=alpha)
plt.loglog(r, Gamma_Millen_imp(r, P), label="Millen (imp) form", alpha=alpha)
plt.xlabel("radius (nm)")
plt.ylabel("ฮ (radians/s)")
plt.legend(loc='best')
plt.show()
r = 50e-9
P = np.linspace(1e-2, 1000, 1000)
plt.figure(figsize=[10, 10])
plt.loglog(P, Gamma_env_simple(r, P), 'k', label="Rashid/Gieseler Full form", alpha=alpha)
#plt.loglog(P, Gamma_env_simple(r, P), 'grey', label="Rashid/Gieseler simplfied form", alpha=alpha)
plt.loglog(P, Gamma_alternative(r, P), label="Gieseler Thermal Non-linearities form", alpha=alpha)
plt.loglog(P, Gamma_chang(r, P), label="Chang form", alpha=alpha)
plt.loglog(P, Gamma_Millen_imp(r, P), label="Millen (imp) form", alpha=alpha)
plt.loglog(P_exp, Gamma_exp, label="Experiment", alpha=alpha)
plt.xlabel("P (mbar)")
plt.ylabel("ฮ (radians/s)")
plt.legend(loc='best')
plt.show()
Explanation: Plot of all 3 relations and measured data
End of explanation
<END_TASK> |
15,859 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Key Requirements for the iRF scikit-learn implementation
The following is a documentation of the main requirements for the iRF implementation
Typical Setup
Step1: Step 1
Step2: Check that we can store objects of different types in a dictionary
This includes storing the random forest object under one key and also a simple numerial value in another key
Step3: Yay - seems to work just fine!
Step 2
Step4: Step 2.2 Display Feature Importances Graphically (just for interest)
Step5: Step 3
Step6: Get the second Decision tree to use for testing
Step7: Write down an efficient Binary Tree Traversal Function
Step9: Create the single function to output the required values
We have the following inputs
Step10: Check that the following leaf node depth is correct
CHECK
Step12: Design the single function to get the key tree information | <ASSISTANT_TASK:>
Python Code:
# Setup
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.datasets import load_iris
from sklearn import tree
import numpy as np
# Define a function to draw the decision trees in IPython
# Adapted from: http://scikit-learn.org/stable/modules/tree.html
from IPython.display import display, Image
import pydotplus
# Custom util functions
from utils import utils
RANDOM_STATE_SPLIT = 1001
RANDOM_STATE_CLASSIFIER = 1039
Explanation: Key Requirements for the iRF scikit-learn implementation
The following is a documentation of the main requirements for the iRF implementation
Typical Setup
End of explanation
# Load the iris data
iris = load_iris()
# Create the train-test datasets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = RANDOM_STATE_SPLIT)
# Just fit a simple random forest classifier with 2 decision trees
rf = RandomForestClassifier(n_estimators = 2, random_state = RANDOM_STATE_CLASSIFIER)
rf.fit(X = X_train, y = y_train)
# Now plot the trees individually
#for idx, dtree in enumerate(rf.estimators_):
# print(idx)
# utils.draw_tree(inp_tree = dtree)
Explanation: Step 1: Fit the Initial Random Forest
Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
End of explanation
a = 1
test = {} # create the dictionary to store the objects
test['first'] = a
test['rf_obj'] = rf
print(test['first'])
print(test['rf_obj'].feature_importances_)
Explanation: Check that we can store objects of different types in a dictionary
This includes storing the random forest object under one key and also a simple numerial value in another key
End of explanation
importances = rf.feature_importances_
std = np.std([dtree.feature_importances_ for dtree in rf.estimators_]
, axis=0)
indices = np.argsort(importances)[::-1]
# Check that the feature importances are standardized to 1
print(sum(importances))
Explanation: Yay - seems to work just fine!
Step 2: Get the Gini Importance of Weights
For the first random forest we just need to get the Gini Importance of Weights
Step 2.1 Get them numerically - most important
End of explanation
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
Explanation: Step 2.2 Display Feature Importances Graphically (just for interest)
End of explanation
feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
target_vals = list(np.sort(np.unique(y_train)))
target_names = ["y" + str(i) for i in target_vals]
print(feature_names)
print(target_names)
Explanation: Step 3: For each Tree get core leaf node features
For each decision tree in the classifier, get:
The list of leaf nodes
Depth of the leaf node
Leaf node predicted class i.e. {0, 1}
Probability of predicting class in leaf node
Number of observations in the leaf node i.e. weight of node
Name the Features
End of explanation
estimator = rf.estimators_[1]
from sklearn.tree import _tree
estimator.tree_.children_left[0]
estimator.tree_.children_right[0]
Explanation: Get the second Decision tree to use for testing
End of explanation
# Now plot the trees individually
utils.draw_tree(inp_tree = estimator)
Explanation: Write down an efficient Binary Tree Traversal Function
End of explanation
# Setup the key variables
threshold = estimator.tree_.threshold
max_node_depth = estimator.tree_.max_depth
max_node_depth
print("Max node depth in tree", max_node_depth, sep = ":\n")
n_nodes = estimator.tree_.node_count
print("number of nodes in tree", n_nodes, sep = ":\n")
# Define the number of features
num_features = X_train.shape[1]
# Get the node features from the decision tree classifier attribute
# It is hard to tell which features this came from i.e. indices are zero,
# positive and negative - we want only non-negative indices for the
# corresponding feature columns
node_features = estimator.tree_.feature
# Get indices for all the features used - 0 indexed and ranging
# to the total number of possible features in the training data
all_features_idx = np.array(range(num_features))
node_features_idx = np.array(range(num_features))[node_features]
# Count the unique number of features used
num_features_used = (np.unique(node_features_idx)).shape[0]
print("number of node features", num_features_used, sep = ":\n")
print("all features indices", all_features_idx, sep = ":\n")
print("node features", node_features, sep = ":\n")
print("node feature indices", node_features_idx, sep = ":\n")
def allTreePaths(dtree, root_node_id = 0):
Get all the individual tree paths from root node
to the leaves
# Use these lists to parse the tree structure
children_left = dtree.tree_.children_left
children_right = dtree.tree_.children_right
if root_node_id is None:
paths = []
if root_node_id == _tree.TREE_LEAF:
raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)
# if left/right is None we'll get empty list anyway
if children_left[root_node_id] != _tree.TREE_LEAF:
paths = [np.append(root_node_id, l)
for l in allTreePaths(dtree, children_left[root_node_id]) +
allTreePaths(dtree, children_right[root_node_id])]
else:
paths = [root_node_id]
return paths
all_leaf_node_paths = allTreePaths(rf.estimators_[1], root_node_id = 0)
all_leaf_node_paths
leaf_nodes = [path[-1] for path in all_leaf_node_paths]
leaf_nodes
features_used = []
Explanation: Create the single function to output the required values
We have the following inputs:
* Decision Tree Classifier from the Random Forest Classifier
* Root node id = 0 (should be the default and only value passed in here)
We have the following outputs:
* Leaf node paths in order
* Max node depth
* Leaf node predicted class {0, 1}
* Total leaf node samples
* Leaf node class sample sizes
* Leaf node class sample proportions
* Unordered boolean features
End of explanation
leaf_nodes_depths = [np.size(y) - 1 for y in all_leaf_node_paths]
leaf_nodes_depths
n_node_samples = estimator.tree_.n_node_samples
num_samples = [n_node_samples[y].astype(int) for y in leaf_nodes]
print(n_node_samples)
print(len(n_node_samples))
num_samples
print(num_samples)
print(sum(num_samples))
print(sum(n_node_samples))
X_train.shape
value = estimator.tree_.value
values = [value[node_id].astype(int) for node_id in leaf_nodes]
print(values)
# This should match the number of rows in the training feature set
print(sum(values).sum())
values
feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
np.asarray(feature_names)
print(type(feature_names))
print(feature_names[0])
print(feature_names[-2])
#feature = estimator.tree_.feature
#z = [feature[y].astype(int) for y in x]
#z
#[feature_names[i] for i in z]
max_dpth = estimator.tree_.max_depth
max_dpth
max_n_class = estimator.tree_.max_n_classes
max_n_class
predict = estimator.tree_.predict
predict
all_leaf_nodes = [path[-1] for path in all_leaf_node_paths]
#[predict(node_id) for node_id in np.asarray(all_leaf_nodes)]
print(all_leaf_nodes)
print(all_leaf_nodes[0])
print(value[all_leaf_nodes[0]])
print(all_features_idx[np.argmax(value[all_leaf_nodes[0]])])
print(node_features_idx)
#predict(class_names[np.argmax(value[all_leaf_nodes[0]])])
#print("nodes", np.asarray(a = nodes, dtype = "int64"), sep = ":\n")
# print("node_depth", node_depth, sep = ":\n")
# print("leaf_node", is_leaves, sep = ":\n")
# print("feature_names", used_feature_names, sep = ":\n")
# print("feature", feature, sep = ":\n")
Explanation: Check that the following leaf node depth is correct
CHECK: That the depth is the correct value and has not been incremented by 1 by accident
The root node must have depth 0, so we need to deduct 1 from the length of the path
CHECK: whether we can implement this directly in our getTreePaths function
End of explanation
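A small sanity check along those lines (illustrative only):
# Each recorded depth should equal its path length minus one, since the root has depth 0
assert all(len(path) - 1 == depth
           for path, depth in zip(all_leaf_node_paths, leaf_nodes_depths))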
def getTreeData(dtree, root_node_id = 0):
This returns all of the required summary results from an
individual decision tree
max_node_depth = dtree.tree_.max_depth
n_nodes = dtree.tree_.node_count
value = dtree.tree_.value
predict = dtree.tree_.predict
# Get the total number of features in the training data
tot_num_features = X_train.shape[1]
# Get indices for all the features used - 0 indexed and ranging
# to the total number of possible features in the training data
all_features_idx = np.array(range(tot_num_features), dtype = 'int64')
# Get the raw node feature indices from the decision tree classifier attribute
# It is hard to tell which features this came from i.e. indices are zero,
# positive and negative - we want only non-negative indices for the
# corresponding feature columns for consistency in reference
node_features_raw_idx = dtree.tree_.feature
# Get the refined non-negative feature indices for each node
# Start with a range over the total number of features and
# subset the relevant indices from the raw indices array
    node_features_idx = np.array(range(tot_num_features))[node_features_raw_idx]
# Count the unique number of features used
num_features_used = (np.unique(node_features_idx)).shape[0]
# Get all of the paths used in the tree
all_leaf_node_paths = allTreePaths(dtree = dtree, root_node_id = root_node_id)
# Get list of leaf nodes
# In all paths it is the final node value
all_leaf_nodes = [path[-1] for path in all_leaf_node_paths]
# Final number of training samples predicted in each class at each leaf node
    all_leaf_node_values = [value[node_id].astype(int) for node_id in all_leaf_nodes]
# Total number of training samples predicted in each class at each leaf node
tot_leaf_node_values = [np.sum(leaf_node_values) for leaf_node_values in all_leaf_node_values]
# All leaf node depths
# The depth is 0 indexed i.e. root node has depth 0
leaf_nodes_depths = [np.size(path) - 1 for path in all_leaf_node_paths]
# Predicted Classes
# Check that we correctly account for ties in determining the class here
all_leaf_node_classes = [all_features_idx[np.argmax(value)] for value in all_leaf_node_values]
# Get all of the features used along the leaf node paths i.e. features used to split a node
# CHECK: Why does the leaf node have a feature associated with it? Investigate further
# Removed the final leaf node value so that this feature does not get included currently
all_leaf_paths_features = [node_features_idx[path[:-1]] for path in all_leaf_node_paths]
# Get the unique list of features along a path
# NOTE: This removes the original ordering of the features along the path
# The original ordering could be preserved using a special function but will increase runtime
all_uniq_leaf_paths_features = [np.unique(feature_path) for feature_path in all_leaf_paths_features]
print("number of node features", num_features_used, sep = ":\n")
print("node feature indices", node_features_idx, sep = ":\n")
print("Max node depth in tree", max_node_depth, sep = ":\n")
print("number of nodes in tree", n_nodes, sep = ":\n")
print("node features", node_features, sep = ":\n")
print("all leaf node paths", all_leaf_node_paths, sep = ":\n")
print("all leaf node indices", all_leaf_nodes, sep = ":\n")
print("all leaf node depths", leaf_nodes_depths, sep = ":\n")
print("all leaf node predicted values", all_leaf_node_values, sep = ":\n")
print("total leaf node predicted values", tot_leaf_node_values, sep = ":\n")
print("all leaf node predicted classes", all_leaf_node_classes, sep = ":\n")
print("all features in leaf node paths", all_leaf_paths_features, sep = ":\n")
print("all unique features in leaf node paths", all_uniq_leaf_paths_features, sep = ":\n")
tree_data = {"num_features_used" : num_features_used,
"node_features_idx" : node_features_idx,
"max_node_depth" : max_node_depth,
"n_nodes" : n_nodes,
"all_leaf_node_paths" : all_leaf_node_paths,
"all_leaf_nodes" : all_leaf_nodes,
"leaf_nodes_depths" : leaf_nodes_depths,
"all_leaf_node_values" : all_leaf_node_values,
"tot_leaf_node_values" : tot_leaf_node_values,
"all_leaf_node_classes" : all_leaf_node_classes,
"all_leaf_paths_features" : all_leaf_paths_features,
"all_uniq_leaf_paths_features" : all_uniq_leaf_paths_features}
return tree_data
tree_dat1 = getTreeData(dtree = estimator, root_node_id = 0)
tree_dat1
print(sum(tree_dat1['tot_leaf_node_values']))
Explanation: Design the single function to get the key tree information
End of explanation
<END_TASK> |
15,860 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: ์๋ผ๋ด๊ธฐ ์ข
ํฉ ๊ฐ์ด๋
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: ๋ชจ๋ธ ์ ์ํ๊ธฐ
์ ์ฒด ๋ชจ๋ธ ์๋ผ๋ด๊ธฐ(์์ฐจ ๋ฐ ํจ์ํ)
๋ชจ๋ธ ์ ํ์ฑ์ ํฅ์์ ์ํ ํ
Step3: ์ผ๋ถ ๋ ์ด์ด ์๋ผ๋ด๊ธฐ(์์ฐจ ๋ฐ ํจ์ํ)
๋ชจ๋ธ์ ์๋ผ๋ด๋ฉด ์ ํ์ฑ์ ๋ถ์ ์ ์ธ ์ํฅ์ ๋ฏธ์น ์ ์์ต๋๋ค. ๋ชจ๋ธ์ ๋ ์ด์ด๋ฅผ ์ ํ์ ์ผ๋ก ์๋ผ๋ด์ด ์ ํ์ฑ, ์๋ ๋ฐ ๋ชจ๋ธ ํฌ๊ธฐ ๊ฐ์ ๊ท ํ์ ํ์ํ ์ ์์ต๋๋ค.
๋ชจ๋ธ ์ ํ์ฑ์ ํฅ์์ ์ํ ํ
Step4: ์ด ์์์๋ ๋ ์ด์ด ์ ํ์ ์ฌ์ฉํ์ฌ ์๋ผ๋ผ ๋ ์ด์ด๋ฅผ ๊ฒฐ์ ํ์ง๋ง, ํน์ ๋ ์ด์ด๋ฅผ ์๋ผ๋ด๋ ๊ฐ์ฅ ์ฌ์ด ๋ฐฉ๋ฒ์ name ์์ฑ์ ์ค์ ํ๊ณ clone_function์์ ํด๋น ๋ด์ฉ์ ์ฐพ๋ ๊ฒ์
๋๋ค.
Step5: ์ฝ๊ธฐ ๋ ์ฝ์ง๋ง ์ ์ฌ์ ์ผ๋ก ๋ชจ๋ธ ์ ํ์ฑ์ด ๋ฎ์
์๋ผ๋ด๊ธฐ๋ฅผ ์ฌ์ฉํ ๋ฏธ์ธ ์กฐ์ ๊ณผ ํธํ๋์ง ์์ผ๋ฏ๋ก ๋ฏธ์ธ ์กฐ์ ์ ์ง์ํ๋ ์์ ์๋ณด๋ค ์ ํ์ฑ์ด ๋จ์ด์ง ์ ์์ต๋๋ค.
์ด๊ธฐ ๋ชจ๋ธ์ ์ ์ํ๋ ๋์ prune_low_magnitude๋ฅผ ์ ์ฉํ ์ ์์ง๋ง, ์ดํ์ ๊ฐ์ค์น๋ฅผ ๋ก๋ํ๋ฉด ์๋ ์์์ ๋์ํ์ง ์์ต๋๋ค.
ํจ์ํ ์
Step6: ์์ฐจ ์
Step7: ์ฌ์ฉ์ ์ ์ Keras ๋ ์ด์ด๋ฅผ ์๋ผ๋ด๊ฑฐ๋ ์๋ผ๋ผ ๋ ์ด์ด์ ์ผ๋ถ๋ฅผ ์์ ํฉ๋๋ค.
์ผ๋ฐ์ ์ธ ์ค์
Step8: ๋ชจ๋ธ ํ๋ จํ๊ธฐ
Model.fit
ํ๋ จ ์ค์ tfmot.sparsity.keras.UpdatePruningStep ์ฝ๋ฐฑ์ ํธ์ถํฉ๋๋ค.
ํ๋ จ ๋๋ฒ๊น
์ tfmot.sparsity.keras.PruningSummaries ์ฝ๋ฐฑ์ ์ฌ์ฉํฉ๋๋ค.
Step9: Colab์ด ์๋ ์ฌ์ฉ์์ ๊ฒฝ์ฐ, TensorBoard.dev์์ ์ด ์ฝ๋ ๋ธ๋ก์ ์ด์ ์คํ์ ๊ฒฐ๊ณผ๋ฅผ ๋ณผ ์ ์์ต๋๋ค.
์ฌ์ฉ์ ์ ์ ํ๋ จ ๋ฃจํ
ํ๋ จ ์ค์ tfmot.sparsity.keras.UpdatePruningStep ์ฝ๋ฐฑ์ ํธ์ถํฉ๋๋ค.
To help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.
Step10: Colab์ด ์๋ ์ฌ์ฉ์์ ๊ฒฝ์ฐ, TensorBoard.dev์์ ์ด ์ฝ๋ ๋ธ๋ก์ ์ด์ ์คํ์ ๊ฒฐ๊ณผ๋ฅผ ๋ณผ ์ ์์ต๋๋ค.
์๋ผ๋ธ ๋ชจ๋ธ์ ์ ํ์ฑ ํฅ์ํ๊ธฐ
๋จผ์ , tfmot.sparsity.keras.prune_low_magnitude API ๋ฌธ์๋ฅผ ๋ณด๊ณ ์๋ผ๋ด๊ธฐ ์ผ์ ์ด ๋ฌด์์ธ์ง, ๊ทธ๋ฆฌ๊ณ ๊ฐ ์๋ผ๋ด๊ธฐ ์ผ์ ์ ํ์ ์ํ์ ์ดํดํฉ๋๋ค.
ํ
Step11: ์์ ์ฝ๋๊ฐ ์ผ๋ฐ์ ์ผ๋ก ์ ์ฉ๋ฉ๋๋ค. ์๋ ์ฝ๋๋ HDF5 ๋ชจ๋ธ ํ์(HDF5 ๊ฐ์ค์น ๋ฐ ๊ธฐํ ํ์์ด ์๋)์๋ง ํ์ํฉ๋๋ค.
Step12: ์๋ผ๋ธ ๋ชจ๋ธ ๋ฐฐํฌํ๊ธฐ
ํฌ๊ธฐ ์์ถ์ผ๋ก ๋ชจ๋ธ ๋ด๋ณด๋ด๊ธฐ
์ผ๋ฐ์ ์ธ ์ค์
Step13: ํ๋์จ์ด๋ณ ์ต์ ํ
์ฌ๋ฌ ๋ฐฑ์๋์์ ์๋ผ๋ด๊ธฐ๋ฅผ ์ฌ์ฉํ์ฌ ์ง์ฐ ์๊ฐ์ ๊ฐ์ ํ๋ฉด, ๋ธ๋ก ํฌ์์ฑ์ ์ฌ์ฉํ์ฌ ํน์ ํ๋์จ์ด์ ์ง์ฐ ์๊ฐ์ ๊ฐ์ ํ ์ ์์ต๋๋ค.
๋ธ๋ก ํฌ๊ธฐ๋ฅผ ๋๋ฆฌ๋ฉด ๋์ ๋ชจ๋ธ์ ์ ํ์ฑ์ ๋ํด ๋ฌ์ฑํ ์ ์๋ ์ต๋ ํฌ์์ฑ์ด ๊ฐ์ํฉ๋๋ค. ๊ทธ๋ผ์๋ ๋ถ๊ตฌํ๊ณ , ์ง์ฐ ์๊ฐ์ ์ฌ์ ํ ๊ฐ์ ๋ ์ ์์ต๋๋ค.
๋ธ๋ก ํฌ์์ฑ์ ์ง์๋๋ ํญ๋ชฉ์ ๋ํ ์์ธํ ๋ด์ฉ์ tfmot.sparsity.keras.prune_low_magnitude API ๋ฌธ์๋ฅผ ์ฐธ์กฐํ์ธ์. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
%load_ext tensorboard
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model = setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def get_gzipped_model_size(model):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, keras_file = tempfile.mkstemp('.h5')
model.save(keras_file, include_optimizer=False)
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(keras_file)
return os.path.getsize(zipped_file)
setup_model()
pretrained_weights = setup_pretrained_weights()
Explanation: Pruning comprehensive guide
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Welcome to the comprehensive guide for Keras weight pruning.
This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the API docs.
To see the benefits of pruning and what's supported, see the overview.
For a single end-to-end example, see the pruning example.
The following use cases are covered:
Define and train a pruned model.
Sequential and Functional.
Keras model.fit and custom training loops.
Checkpoint and deserialize a pruned model.
Deploy a pruned model and see compression benefits.
For configuration of the pruning algorithm, refer to the tfmot.sparsity.keras.prune_low_magnitude API docs.
Setup
For finding the APIs you need and understanding purposes, you can run but skip reading this section.
End of explanation
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended.
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
model_for_pruning.summary()
Explanation: Define model
Prune whole model (Sequential and Functional)
Tips for better model accuracy:
Try "Prune some layers" to skip pruning the layers that reduce accuracy the most.
It's generally better to fine-tune with pruning as opposed to training from scratch.
To make the whole model train with pruning, apply tfmot.sparsity.keras.prune_low_magnitude to the model.
End of explanation
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `prune_low_magnitude` to make only the
# Dense layers train with pruning.
def apply_pruning_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.sparsity.keras.prune_low_magnitude(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense`
# to the layers of the model.
model_for_pruning = tf.keras.models.clone_model(
base_model,
clone_function=apply_pruning_to_dense,
)
model_for_pruning.summary()
Explanation: Prune some layers (Sequential and Functional)
Pruning a model can have a negative effect on accuracy. You can selectively prune layers of a model to explore the trade-off between accuracy, speed, and model size.
Tips for better model accuracy:
It's generally better to fine-tune with pruning as opposed to training from scratch.
Try pruning the later layers instead of the first layers.
Avoid pruning critical layers (e.g. attention mechanism).
More:
The tfmot.sparsity.keras.prune_low_magnitude API docs provide details on how to vary the pruning configuration per layer.
In the example below, prune only the Dense layers.
End of explanation
print(base_model.layers[0].name)
Explanation: While this example used the type of the layer to decide what to prune, the easiest way to prune a particular layer is to set its name property and look for that name in the clone_function.
End of explanation
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
i = tf.keras.Input(shape=(20,))
x = tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
model_for_pruning = tf.keras.Model(inputs=i, outputs=o)
model_for_pruning.summary()
Explanation: More readable but potentially lower model accuracy
This is not compatible with fine-tuning with pruning, which is why it may be less accurate than the examples above, which support fine-tuning.
While prune_low_magnitude can be applied while defining the initial model, loading the weights afterwards does not work in the examples below.
Functional example
End of explanation
# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
Explanation: Sequential example
End of explanation
class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):
def get_prunable_weights(self):
# Prune bias also, though that usually harms model accuracy too much.
return [self.kernel, self.bias]
# Use `prune_low_magnitude` to make the `MyDenseLayer` layer train with pruning.
model_for_pruning = tf.keras.Sequential([
tfmot.sparsity.keras.prune_low_magnitude(MyDenseLayer(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
model_for_pruning.summary()
Explanation: Prune custom Keras layer or modify parts of layer to prune
Common mistake: pruning the bias usually harms model accuracy too much.
tfmot.sparsity.keras.PrunableLayer serves two use cases:
Prune a custom Keras layer.
Modify parts of a built-in Keras layer to prune.
For an example, the API defaults to only pruning the kernel of the Dense layer. The example below also prunes the bias.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
log_dir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
# Log sparsity and other metrics in Tensorboard.
tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir)
]
model_for_pruning.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model_for_pruning.fit(
x_train,
y_train,
callbacks=callbacks,
epochs=2,
)
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
Explanation: Train model
Model.fit
Call the tfmot.sparsity.keras.UpdatePruningStep callback during training.
To help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Boilerplate
loss = tf.keras.losses.categorical_crossentropy
optimizer = tf.keras.optimizers.Adam()
log_dir = tempfile.mkdtemp()
unused_arg = -1
epochs = 2
batches = 1 # example is hardcoded so that the number of batches cannot change.
# Non-boilerplate.
model_for_pruning.optimizer = optimizer
step_callback = tfmot.sparsity.keras.UpdatePruningStep()
step_callback.set_model(model_for_pruning)
log_callback = tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir) # Log sparsity and other metrics in Tensorboard.
log_callback.set_model(model_for_pruning)
step_callback.on_train_begin() # run pruning callback
for _ in range(epochs):
log_callback.on_epoch_begin(epoch=unused_arg) # run pruning callback
for _ in range(batches):
step_callback.on_train_batch_begin(batch=unused_arg) # run pruning callback
with tf.GradientTape() as tape:
logits = model_for_pruning(x_train, training=True)
loss_value = loss(y_train, logits)
grads = tape.gradient(loss_value, model_for_pruning.trainable_variables)
optimizer.apply_gradients(zip(grads, model_for_pruning.trainable_variables))
step_callback.on_epoch_end(batch=unused_arg) # run pruning callback
#docs_infra: no_execute
%tensorboard --logdir={log_dir}
Explanation: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Custom training loop
Call the tfmot.sparsity.keras.UpdatePruningStep callback during training.
To help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
_, keras_model_file = tempfile.mkstemp('.h5')
# Checkpoint: saving the optimizer is necessary (include_optimizer=True is the default).
model_for_pruning.save(keras_model_file, include_optimizer=True)
Explanation: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Improve pruned model accuracy
First, look at the tfmot.sparsity.keras.prune_low_magnitude API docs to understand what a pruning schedule is and the math of each type of pruning schedule.
Tips:
Have a learning rate that's not too high or too low when the model is pruning. Consider the pruning schedule to be a hyperparameter.
As a quick test, try experimenting with pruning a model to the final sparsity at the beginning of training by setting begin_step to 0 with a tfmot.sparsity.keras.ConstantSparsity schedule. You might get lucky with good results.
Do not prune very frequently, to give the model time to recover. The pruning schedule provides a decent default frequency.
For general ideas to improve model accuracy, look for tips for your use case(s) under "Define model".
Checkpoint and deserialize
You must preserve the optimizer step during checkpointing. This means that while you can use Keras HDF5 models for checkpointing, you cannot use Keras HDF5 weights.
End of explanation
# Deserialize model.
with tfmot.sparsity.keras.prune_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
Explanation: The above applies generally. The code below is only needed for the HDF5 model format (not HDF5 weights or other formats).
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)
# Typically you train the model here.
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
print("final model")
model_for_export.summary()
print("\n")
print("Size of gzipped pruned model without stripping: %.2f bytes" % (get_gzipped_model_size(model_for_pruning)))
print("Size of gzipped pruned model with stripping: %.2f bytes" % (get_gzipped_model_size(model_for_export)))
Explanation: Deploy pruned model
Export model with size compression
Common mistake: both strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of pruning.
End of explanation
base_model = setup_model()
# For using intrinsics on a CPU with 128-bit registers, together with 8-bit
# quantized weights, a 1x16 block size is nice because the block perfectly
# fits into the register.
pruning_params = {'block_size': [1, 16]}
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)
model_for_pruning.summary()
Explanation: Hardware-specific optimizations
Once different backends enable pruning to improve latency, using block sparsity can improve latency for certain hardware.
Increasing the block size will decrease the peak sparsity that's achievable for a target model accuracy. Despite this, latency can still improve.
For details on what's supported for block sparsity, see the tfmot.sparsity.keras.prune_low_magnitude API docs.
End of explanation
<END_TASK> |
15,861 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
#print(type(text))
vocab=set(text)
vocab_to_int= {word:integer for integer, word in enumerate(vocab)}
int_to_vocab= {integer:word for integer, word in enumerate(vocab)}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
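A quick illustrative call on a toy word list (not part of the project pipeline):
toy_vocab_to_int, toy_int_to_vocab = create_lookup_tables(['moe', 'homer', 'moe', 'bart'])
print(toy_vocab_to_int)
print(toy_int_to_vocab)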
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
"\n": '||Return||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
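A minimal sketch of how the lookup could be applied to raw text (illustrative only; the helper module passed token_lookup does this during preprocessing):
sample_text = "Hello, Moe! How are you?"
for symbol, token in token_lookup().items():
    sample_text = sample_text.replace(symbol, ' {} '.format(token))
print(sample_text.lower().split())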
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (input, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2])
init_state = rnn_cell.zero_state(batch_size, tf.float32)
init_state = tf.identity(init_state, name= "initial_state")
return (rnn_cell, init_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
return tf.contrib.layers.embed_sequence(input_data, vocab_size, embed_dim)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
inputs = get_embed(input_data, vocab_size, embed_dim)
rnn_output, rnn_state = build_rnn(cell, inputs)
logits = tf.contrib.layers.fully_connected(rnn_output, vocab_size, activation_fn=None)
return (logits, rnn_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
#print(n_batches)
# Drop the last few characters to make only full batches
x_data = np.array(int_text[: n_batches * batch_size * seq_length])
y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
# The last target of the last batch should be the first input of the first batch.
y_data = np.concatenate([ y_data[:-1], [x_data[0]] ])
#print(x_data)
#print(y_data)
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
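A quick shape check against the worked example above (illustrative only):
example_batches = get_batches(list(range(1, 21)), 3, 2)
print(example_batches.shape) # expected: (3, 2, 3, 2)
print(example_batches[0][0]) # first batch of inputs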
# Number of Epochs
num_epochs = 300
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 150
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probabilities = loaded_graph.get_tensor_by_name('probs:0')
return inputs, initial_state, final_state, probabilities
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
int_word = np.random.choice(len(int_to_vocab), p=probabilities)
return int_to_vocab[int_word]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
<END_TASK> |
15,862 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Caradonna Tung Hover Rotor
This case aims to reproduce the results from an experiment undertaken by F. X. Caradonna and C. Tung in 1981 and published in NASA TM 81232. The rotor airfoil section is a NACA0012 pitched about 25% chord. For this simulation the hub has been ignored.
This test case uses a moving (rotating) frame of reference to model the rotor.
See the NASA Technical Report below, page 34.
Step1: Define Data Location
For remote data the interaction will use ssh to securely interact with the data<br/>
This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler<br/>
Note
Step2: Validation and regression
Step3: Initialise Environment
Step4: Data Connection
This starts paraview server on remote host and connects
Step5: Get control dictionary
Step6: Get status file
Step7: Define test conditions
Step8: Cp Profile
Step10: Convergence
Step11: Check validation and regressionยถ
Step12: Cleaning up | <ASSISTANT_TASK:>
Python Code:
from IPython.display import FileLink, display
display(FileLink('data/NASA_TM_81232.pdf'))
Explanation: Caradonna Tung Hover Rotor
This case aims to reproduce the results from an experiment undertaken by F. X. Caradonna and C. Tung in 1981 and published in NASA TM 81232. The rotor airfoil section is a NACA0012 pitched about 25% chord. For this simulation the hub has been ignored.
This test case uses a moving (rotating) frame of reference to model the rotor.
See the NASA Technical Report below, page 34.
End of explanation
remote_data = True
remote_server_auto = True
case_name = 'caratung-ar-6p0-pitch-8p0'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CARATUNG'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'
if not remote_server_auto:
paraview_cmd=None
if not remote_data:
data_host='localhost'
paraview_cmd=None
Explanation: Define Data Location
For remote data the interaction will use ssh to securely interact with the data<br/>
This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler<br/>
Note: The default paraview server connection will use port 11111
End of explanation
# Validation for Caradonna Tung Rotor (Mach at Tip - 0.877) from NASA TM 81232, page 34
validate = True
regression = True
# Make movie option currently not working - TODO
make_movie = False
if (validate):
valid = True
validation_tol = 0.0100
valid_lower_cl_0p50 = 0.2298-validation_tol
valid_upper_cl_0p50 = 0.2298+validation_tol
valid_lower_cl_0p68 = 0.2842-validation_tol
valid_upper_cl_0p68 = 0.2842+validation_tol
valid_lower_cl_0p80 = 0.2736-validation_tol
valid_upper_cl_0p80 = 0.2736+validation_tol
valid_lower_cl_0p89 = 0.2989-validation_tol
valid_upper_cl_0p89 = 0.2989+validation_tol
valid_lower_cl_0p96 = 0.3175-validation_tol
valid_upper_cl_0p96 = 0.3175+validation_tol
print 'VALIDATING CARADONNA TUNG CASE'
if (regression):
print 'REGRESSION CARADONNA TUNG CASE'
Explanation: Validation and regression
End of explanation
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
Explanation: Initialise Environment
End of explanation
from zutil.post import pvserver_connect
if remote_data:
pvserver_connect(data_host=data_host,data_dir=data_dir,paraview_cmd=paraview_cmd)
Explanation: Data Connection
This starts paraview server on remote host and connects
End of explanation
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
# print parameters
Explanation: Get control dictionary
End of explanation
from zutil.post import get_status_dict
status=get_status_dict(case_name,data_host=data_host,data_dir=data_dir)
num_procs = str(status['num processor'])
Explanation: Get status file
End of explanation
from IPython.display import HTML
HTML(print_html_parameters(parameters))
aspect_ratio = 6.0
Pitch = 8.0
from zutil.post import for_each
from zutil import rotate_vector
from zutil.post import get_csv_data
def plot_cp_profile(ax,file_root,span_loc,ax2):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall.UpdatePipeline()
point_data = CellDatatoPointData(Input=wall)
point_data.PassCellData = 0
point_data.UpdatePipeline()
merged = MergeBlocks(Input=point_data)
merged.UpdatePipeline()
wall_slice = Slice(Input=merged, SliceType="Plane" )
wall_slice.SliceType.Normal = [0.0,1.0,0.0]
wall_slice.SliceType.Origin = [0, span_loc*aspect_ratio, 0]
wall_slice.UpdatePipeline()
sorted_line = PlotOnSortedLines(Input=wall_slice)
sorted_line.UpdatePipeline()
slice_client = servermanager.Fetch(sorted_line)
for_each(slice_client,func=plot_array,axis=ax,span_loc=span_loc,axis2=ax2)
def plot_array(data_array,pts_array,**kwargs):
ax = kwargs['axis']
span_loc = kwargs['span_loc']
ax2 = kwargs['axis2']
data = []
pos = []
pos_y = []
count = 0
cp_array = data_array.GetPointData()['cp']
for p in pts_array.GetPoints()[:,0]:
cp = float(cp_array[count])
# transform to local Cp
cp = cp/(span_loc)**2
data.append(cp)
pt_x = pts_array.GetPoints()[count,0]
pt_z = pts_array.GetPoints()[count,2]
# rotate by -8 deg
pt_rot = rotate_vector([pt_x,0.0,pt_z],-8.0,0.0)
pt = pt_rot[0] + 0.25
pos.append(pt)
pos_y.append(pt_rot[2])
count+=1
ax.plot(pos, data , color='g',linestyle='-',marker='None',label='zCFD')
ax2.plot(pos, pos_y , color='grey',linestyle='-',marker='None',label='profile')
def plot_experiment(ax, filename):
header = True
remote = False
# Note - this returns a pandas dataframe object
df = get_csv_data(filename,True,False)
x = []
y = []
for ind in range(0,len(df.index)-1):
x.append(df[list(df.columns.values)[0]][ind])
y.append(-df[list(df.columns.values)[1]][ind])
ax.scatter(x, y, color='grey', label='Experiment')
Explanation: Define test conditions
End of explanation
from zutil.post import get_case_root, cp_profile_wall_from_file_span
from zutil.post import ProgressBar
from collections import OrderedDict
factor = 0.0
pbar = ProgressBar()
plot_list = OrderedDict([(0.50,{'exp_data_file': 'data/cp-0p50.txt', 'cp_axis':[0.0,1.0,1.2,-1.0]}),
(0.68,{'exp_data_file': 'data/cp-0p68.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
(0.80,{'exp_data_file': 'data/cp-0p80.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
(0.89,{'exp_data_file': 'data/cp-0p89.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
(0.96,{'exp_data_file': 'data/cp-0p96.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]})])
fig = pl.figure(figsize=(25, 30),dpi=100, facecolor='w', edgecolor='k')
fig.suptitle('Caradonna Tung Hover Rotor (' + r'$\mathbf{M_{TIP}}$' + ' = 0.877)',
fontsize=28, fontweight='normal', color = '#5D5858')
pnum=1
cl = {}
for plot in plot_list:
pbar+=5
span_loc = plot + factor
ax = fig.add_subplot(3,2,pnum)
ax.set_title('$\mathbf{C_P}$' + ' at ' + '$\mathbf{r/R}$' + ' = ' + str(span_loc) + '\n',
fontsize=24, fontweight='normal', color = '#E48B25')
ax.grid(True)
ax.set_xlabel('$\mathbf{x/c}$', fontsize=24, fontweight='bold', color = '#5D5858')
ax.set_ylabel('$\mathbf{C_p}$', fontsize=24, fontweight='bold', color = '#5D5858')
ax.axis(plot_list[plot]['cp_axis'])
ax2 = ax.twinx()
ax2.set_ylabel('$\mathbf{z/c}$', fontsize=24, fontweight='bold', color = '#5D5858')
ax2.axis([0,1,-0.5,0.5])
plot_cp_profile(ax,get_case_root(case_name,num_procs),span_loc,ax2)
normal = [0.0, 1.0, 0.0]
origin = [0.0, span_loc*aspect_ratio, 0.0]
# Check this - alpha passed via kwargs to post.py
# THESE NUMBERS ARE COMPLETELY WRONG - CHECK
forces = cp_profile_wall_from_file_span(get_case_root(case_name,num_procs), normal, origin, alpha=Pitch)
cd = forces['friction force'][0] + forces['pressure force'][0]
cs = forces['friction force'][1] + forces['pressure force'][1]
cl[plot] = forces['friction force'][2] + forces['pressure force'][2]
print cd, cs, cl[plot]
plot_experiment(ax,plot_list[plot]['exp_data_file'])
ax.legend(loc='upper right', shadow=True)
legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)
legend.get_frame().set_facecolor('white')
ax.tick_params(axis='x', pad=16)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax2.yaxis.get_major_ticks():
tick.label2.set_fontsize(18)
tick.label2.set_fontweight('normal')
tick.label2.set_color('#E48B25')
pnum=pnum+1
fig.subplots_adjust(hspace=0.3)
fig.subplots_adjust(wspace=0.4)
fig.savefig("images/Caradonna_Tung_CP_profile.png")
pbar.complete()
show()
from IPython.display import FileLink, display
display(FileLink('images/Caradonna_Tung_CP_profile.png'))
Explanation: Cp Profile
End of explanation
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()
if make_movie:
from zutil.post import get_case_root
from zutil.post import ProgressBar
pb = ProgressBar()
vtu = PVDReader( FileName=[get_case_root(case_name,num_procs)+'.pvd'] )
vtu.UpdatePipeline()
pb += 20
merged = CleantoGrid(Input=vtu)
merged.UpdatePipeline()
pb += 20
point_data = CellDatatoPointData(Input=merged)
point_data.PassCellData = 0
point_data.PieceInvariant = 1
point_data.UpdatePipeline()
pb.complete()
if make_movie:
# from paraview.vtk.dataset_adapter import DataSet
from vtk.numpy_interface.dataset_adapter import DataSet
stream = StreamTracer(Input=point_data)
stream.SeedType = "Point Source"
stream.SeedType.Center = [49673.0, 58826.0, 1120.0]
stream.SeedType.Radius = 1
stream.SeedType.NumberOfPoints = 1
stream.Vectors = ['POINTS', 'V']
stream.MaximumStreamlineLength = 135800.00000000035
# IntegrationDirection can be FORWARD, BACKWARD, or BOTH
stream.IntegrationDirection = 'BACKWARD'
stream.UpdatePipeline()
stream_client = servermanager.Fetch(stream)
upstream_data = DataSet(stream_client)
stream.IntegrationDirection = 'FORWARD'
stream.UpdatePipeline()
stream_client = servermanager.Fetch(stream)
downstream_data = DataSet(stream_client)
if make_movie:
def vtk_show(renderer, w=100, h=100):
Takes vtkRenderer instance and returns an IPython Image with the rendering.
from vtk import vtkRenderWindow,vtkWindowToImageFilter,vtkPNGWriter
renderWindow = vtkRenderWindow()
renderWindow.SetOffScreenRendering(1)
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(w, h)
renderWindow.Render()
windowToImageFilter = vtkWindowToImageFilter()
windowToImageFilter.SetInput(renderWindow)
windowToImageFilter.Update()
writer = vtkPNGWriter()
writer.SetWriteToMemory(1)
writer.SetInputConnection(windowToImageFilter.GetOutputPort())
writer.Write()
data = str(buffer(writer.GetResult()))
from IPython.display import Image
return Image(data)
if make_movie:
#print stream_data.GetPoint(0)
from zutil.post import ProgressBar
pb = ProgressBar()
wall = PVDReader( FileName=[get_case_root(case_name,num_procs)+'_wall.pvd'] )
wall.UpdatePipeline()
merged = CleantoGrid(Input=wall)
merged.UpdatePipeline()
point_data = CellDatatoPointData(Input=merged)
point_data.PassCellData = 0
point_data.PieceInvariant = 1
point_data.UpdatePipeline()
total_pts = 100# stream_data.GetNumberOfPoints()
scene = GetAnimationScene()
scene.EndTime = total_pts
scene.PlayMode = 'Snap To TimeSteps'
scene.AnimationTime = 0
a1_yplus_PVLookupTable = GetLookupTableForArray( "yplus", 1, RGBPoints=[96.69050598144531, 0.23, 0.299, 0.754, 24391.206581115723, 0.865, 0.865, 0.865, 48685.72265625, 0.706, 0.016, 0.15], VectorMode='Magnitude', NanColor=[0.25, 0.0, 0.0], ColorSpace='Diverging', ScalarRangeInitialized=1.0 )
a1_yplus_PiecewiseFunction = CreatePiecewiseFunction( Points=[96.69050598144531, 0.0, 0.5, 0.0, 48685.72265625, 1.0, 0.5, 0.0] )
drepr = Show() # GetDisplayProperties( Contour1 )
drepr.EdgeColor = [0.0, 0.0, 0.5000076295109483]
drepr.SelectionPointFieldDataArrayName = 'yplus'
#DataRepresentation4.SelectionCellFieldDataArrayName = 'eddy'
drepr.ColorArrayName = ('POINT_DATA', 'yplus')
drepr.LookupTable = a1_yplus_PVLookupTable
drepr.ScaleFactor = 0.08385616838932038
drepr.Interpolation = 'Flat'
drepr.ScalarOpacityFunction = a1_yplus_PiecewiseFunction
view = GetRenderView()
if not view:
# When using the ParaView UI, the View will be present, not otherwise.
view = CreateRenderView()
scene.ViewModules = [view]
view.CameraViewUp = [0.0, 0.0, 1.0]
view.CameraPosition = list(upstream_data.GetPoint(0))
view.CameraFocalPoint = list(upstream_data.GetPoint(1))
view.CameraParallelScale = 0.499418869125992
view.CenterOfRotation = [49673.0, 58826.0, 1120.0]
view.CenterAxesVisibility = 0
view.ViewSize = [3840,2160]
view.LightSwitch=0
view.UseLight = 1
#RenderView2.SetOffScreenRendering(1)
#Render()
pb+=20
camera = view.GetActiveCamera()
key_frames = []
for p in range(total_pts):
pt = stream_data.GetPoint(p)
#print pt
frame = CameraKeyFrame()
frame.Position = list(pt)
frame.ViewUp = [0.0, 0.0, 1.0]
frame.FocalPoint = camera.GetFocalPoint()
frame.KeyTime = p/total_pts
key_frames.append(frame)
pb+=20
cue = GetCameraTrack()
cue.Mode = 'Interpolate Camera'
cue.AnimatedProxy = view
cue.KeyFrames = key_frames
TimeAnimationCue4 = GetTimeTrack()
scene.Cues = [cue]
for t in range(total_pts-1):
print 'Generating: ' + str(t)
pt = stream_data.GetPoint(t)
view.CameraPosition = list(pt)
view.CameraFocalPoint = list(stream_data.GetPoint(t+1))
#vtk_show(view.GetRenderer())
Render()
#scene.AnimationTime = t
WriteImage('movies/caradonna_'+str(t)+'.png')
pb.complete()
Explanation: Convergence
End of explanation
if (validate):
def validate_data(name, value, valid_lower, valid_upper):
if ((value < valid_lower) or (value > valid_upper)):
print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
return False
else:
return True
valid = validate_data('C_L[0.50]', cl[0.50], valid_lower_cl_0p50, valid_upper_cl_0p50) and valid
valid = validate_data('C_L[0.68]', cl[0.68], valid_lower_cl_0p68, valid_upper_cl_0p68) and valid
valid = validate_data('C_L[0.80]', cl[0.80], valid_lower_cl_0p80, valid_upper_cl_0p80) and valid
valid = validate_data('C_L[0.89]', cl[0.89], valid_lower_cl_0p89, valid_upper_cl_0p89) and valid
valid = validate_data('C_L[0.96]', cl[0.96], valid_lower_cl_0p96, valid_upper_cl_0p96) and valid
if (valid):
print 'VALIDATION = PASS :-)'
else:
print 'VALIDATION = FAIL :-('
if (regression):
import pandas as pd
pd.options.display.float_format = '{:,.6f}'.format
print 'REGRESSION DATA'
regress = {'version' : ['v0.0', 'v0.1' , 'CURRENT'],
'C_L[0.50]' : [2.217000, 2.217000, cl[0.50]],
'C_L[0.68]' : [0.497464, 0.498132, cl[0.68]],
'C_L[0.80]' : [0.024460, 0.024495, cl[0.80]],
'C_L[0.89]' : [0.014094, 0.014099, cl[0.89]],
'C_L[0.96]' : [0.010366, 0.010396, cl[0.96]]}
regression_table = pd.DataFrame(regress, columns=['version','C_L[0.50]','C_L[0.68]',
'C_L[0.80]','C_L[0.89]','C_L[0.96]'])
print regression_table
Explanation: Check validation and regression
End of explanation
if remote_data:
#print 'Disconnecting from remote paraview server connection'
Disconnect()
Explanation: Cleaning up
End of explanation
<END_TASK> |
15,863 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Think Bayes
Step1: The Dice problem
Suppose I have a box of dice that contains a 4-sided die, a 6-sided
die, an 8-sided die, a 12-sided die, and a 20-sided die.
Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?
The Dice class inherits Update and provides Likelihood
Step2: Here's what the update looks like
Step3: And here's what it looks like after more data
Step4: The train problem
The Train problem has the same likelihood as the Dice problem.
Step5: But there are many more hypotheses
Step6: Here's what the posterior looks like
Step7: And here's how we can compute the posterior mean
Step8: Or we can just use the method
Step10: Sensitivity to the prior
Here's a function that solves the train problem for different priors and data
Step11: Let's run it with the same dataset and several uniform priors
Step12: The results are quite sensitive to the prior, even with several observations.
Power law prior
Now let's try it with a power law prior.
Step13: Here's what a power law prior looks like, compared to a uniform prior
Step14: Now let's see what the posteriors look like after observing one train.
Step15: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
Step16: Credible intervals
To compute credible intervals, we can use the Percentile method on the posterior.
Step17: If you have to compute more than a few percentiles, it is more efficient to compute a CDF.
Also, a CDF can be a better way to visualize distributions.
Step18: Cdf also provides Percentile
Step19: Exercises
Exercise
Step20: Exercise | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
% matplotlib inline
import thinkplot
from thinkbayes2 import Hist, Pmf, Suite, Cdf
Explanation: Think Bayes: Chapter 3
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class Dice(Suite):
def Likelihood(self, data, hypo):
if hypo < data:
return 0
else:
return 1/hypo
Explanation: The Dice problem
Suppose I have a box of dice that contains a 4-sided die, a 6-sided
die, an 8-sided die, a 12-sided die, and a 20-sided die.
Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?
The Dice class inherits Update and provides Likelihood
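As a quick worked check of what the update will compute (plain arithmetic, assuming the uniform prior over the five dice): after seeing a 6 the unnormalized posteriors are proportional to the likelihoods 0, 1/6, 1/8, 1/12 and 1/20, so the normalized posteriors are 20/51 (about 0.392) for the 6-sided die, 15/51 (about 0.294) for the 8-sided die, 10/51 (about 0.196) for the 12-sided die and 6/51 (about 0.118) for the 20-sided die.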
End of explanation
suite = Dice([4, 6, 8, 12, 20])
suite.Update(6)
suite.Print()
Explanation: Here's what the update looks like:
End of explanation
for roll in [6, 8, 7, 7, 5, 4]:
suite.Update(roll)
suite.Print()
Explanation: And here's what it looks like after more data:
End of explanation
class Train(Suite):
def Likelihood(self, data, hypo):
if hypo < data:
return 0
else:
return 1/hypo
Explanation: The train problem
The Train problem has the same likelihood as the Dice problem.
End of explanation
hypos = xrange(1, 1001)
suite = Train(hypos)
suite.Update(60)
Explanation: But there are many more hypotheses
End of explanation
thinkplot.Pdf(suite)
Explanation: Here's what the posterior looks like
End of explanation
def Mean(suite):
total = 0
for hypo, prob in suite.Items():
total += hypo * prob
return total
Mean(suite)
Explanation: And here's how we can compute the posterior mean
End of explanation
suite.Mean()
Explanation: Or we can just use the method
End of explanation
def MakePosterior(high, dataset, constructor=Train):
Solves the train problem.
high: int maximum number of trains
dataset: sequence of observed train numbers
constructor: function used to construct the Train object
returns: Train object representing the posterior suite
hypos = range(1, high+1)
suite = constructor(hypos)
for data in dataset:
suite.Update(data)
return suite
Explanation: Sensitivity to the prior
Here's a function that solves the train problem for different priors and data
End of explanation
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
suite = MakePosterior(high, dataset)
print(high, suite.Mean())
Explanation: Let's run it with the same dataset and several uniform priors
End of explanation
class Train2(Train):
def __init__(self, hypos, alpha=1.0):
Pmf.__init__(self)
for hypo in hypos:
self[hypo] = hypo**(-alpha)
self.Normalize()
Explanation: The results are quite sensitive to the prior, even with several observations.
Power law prior
Now let's try it with a power law prior.
End of explanation
high = 100
hypos = range(1, high+1)
suite1 = Train(hypos)
suite2 = Train2(hypos)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
Explanation: Here's what a power law prior looks like, compared to a uniform prior
End of explanation
dataset = [60]
high = 1000
thinkplot.PrePlot(num=2)
constructors = [Train, Train2]
labels = ['uniform', 'power law']
for constructor, label in zip(constructors, labels):
suite = MakePosterior(high, dataset, constructor)
suite.label = label
thinkplot.Pmf(suite)
thinkplot.Config(xlabel='Number of trains',
ylabel='Probability')
Explanation: Now let's see what the posteriors look like after observing one train.
End of explanation
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
suite = MakePosterior(high, dataset, Train2)
print(high, suite.Mean())
Explanation: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
End of explanation
hypos = xrange(1, 1001)
suite = Train(hypos)
suite.Update(60)
suite.Percentile(5), suite.Percentile(95)
Explanation: Credible intervals
To compute credible intervals, we can use the Percentile method on the posterior.
End of explanation
cdf = Cdf(suite)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Number of trains',
ylabel='Cumulative Probability',
legend=False)
Explanation: If you have to compute more than a few percentiles, it is more efficient to compute a CDF.
Also, a CDF can be a better way to visualize distributions.
End of explanation
cdf.Percentile(5), cdf.Percentile(95)
Explanation: Cdf also provides Percentile
End of explanation
# Solution goes here
Explanation: Exercises
Exercise: To write a likelihood function for the locomotive problem, we had
to answer this question: "If the railroad has N locomotives, what
is the probability that we see number 60?"
The answer depends on what sampling process we use when we observe the
locomotive. In this chapter, I resolved the ambiguity by specifying
that there is only one train-operating company (or only one that we
care about).
But suppose instead that there are many companies with different
numbers of trains. And suppose that you are equally likely to see any
train operated by any company.
In that case, the likelihood function is different because you
are more likely to see a train operated by a large company.
As an exercise, implement the likelihood function for this variation
of the locomotive problem, and compare the results.
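One possible way to set this up (a sketch of one reading of the variation, not necessarily the official solution): a company with N trains operates N of the trains you might encounter, so the probability of observing that company at all is proportional to N, while the probability that the observed train is number 60 given N is 1/N. The two factors cancel, leaving a flat likelihood for every N at least as large as the observed number.
```
class Train3(Train):
    # Sketch: the observer is equally likely to see any train from any company.
    # P(observe this company) ~ N and P(number == data | N) = 1/N cancel,
    # so the likelihood is constant for every hypothesis consistent with the data.
    def Likelihood(self, data, hypo):
        if hypo < data:
            return 0
        return 1.0
```
With a uniform prior this leaves more posterior weight on large companies than the original 1/hypo likelihood does, so the posterior mean comes out higher.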
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: Suppose I capture and tag 10 rock hyraxes. Some time later, I capture another 10 hyraxes and find that two of them are already tagged. How many hyraxes are there in this environment?
As always with problems like this, we have to make some modeling assumptions.
1) For simplicity, you can assume that the environment is reasonably isolated, so the number of hyraxes does not change between observations.
2) And you can assume that each hyrax is equally likely to be captured during each phase of the experiment, regardless of whether it has been tagged. In reality, it is possible that tagged animals would avoid traps in the future, or possible that the same behavior that got them caught the first time makes them more likely to be caught again. But let's start simple.
I suggest the following notation:
N: total population of hyraxes
K: number of hyraxes tagged in the first round
n: number of hyraxes caught in the second round
k: number of hyraxes in the second round that had been tagged
So N is the hypothesis and (K, n, k) make up the data. The probability of the data, given the hypothesis, is the probability of finding k tagged hyraxes out of n if (in the population) K out of N are tagged.
If you are familiar with the hypergeometric distribution, you can use the hypergeometric PMF to compute the likelihood function. Otherwise, you can figure it out using combinatorics.
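A possible sketch of that likelihood using scipy's hypergeometric PMF (this assumes scipy is importable; the class name Hyrax and the prior range are illustrative choices, not part of the book):
```
from scipy import stats

class Hyrax(Suite):
    # Sketch: hypo is the population size N, data is the tuple (K, n, k).
    def Likelihood(self, data, hypo):
        K, n, k = data
        if hypo < K + n - k:   # the population must contain every animal ever seen
            return 0
        # P(k tagged in a sample of n | N total, K tagged) = hypergeom.pmf(k, M=N, n=K, N=n)
        return stats.hypergeom.pmf(k, hypo, K, n)

suite = Hyrax(range(1, 1001))
suite.Update((10, 10, 2))
print(suite.Mean(), suite.Percentile(5), suite.Percentile(95))
```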
End of explanation
<END_TASK> |
15,864 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Fitting (from the h2_phi data measured on a different setup we predict the fourth value of h1_phi for our setup)
Step1: Roughly a line, roughly straight
Step3:
Step4: <img src="http | <ASSISTANT_TASK:>
Python Code:
plt.plot(h1_phi - h2_phi[:3], h2_phi[:3])
plt.show()
Explanation: Fitting (from the h2_phi data measured on a different setup we predict the fourth value of h1_phi for our setup)
End of explanation
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(h2_phi[:3].reshape(3, 1), (h1_phi - h2_phi[:3]).reshape(3, 1))
h1_phi_4_predicted_delta = LR.predict([[h2_phi[3]]])
h1_phi_4_predicted = h2_phi[3] + h1_phi_4_predicted_delta
h1_phi_4_predicted # fitted the difference to the optimal curve and extrapolated the fourth value
h1_phi_new = np.array(list(h1_phi) + [h1_phi_4_predicted])
h1_phi_new
Explanation: Roughly a line, roughly straight
End of explanation
len(ne_phi), len(ne_lambda)
len(hg_phi), len(hg_lambda)
X = np.hstack([ne_phi, hg_phi])
y = np.hstack([ne_lambda, hg_lambda])
f = interp1d(X, y, kind='quadratic')
plt.figure(figsize=(15, 15))
plt.plot(ne_phi, ne_lambda)
plt.scatter(ne_phi, ne_lambda)
plt.plot(hg_phi, hg_lambda)
plt.scatter(hg_phi, hg_lambda)
#plt.plot(h1_phi, h1_lambda)
#plt.scatter(h1_phi, h1_lambda)
grid = np.linspace(X.min(), X.max(), 1000)
plt.plot(grid, f(grid), color="red")
plt.grid()
plt.show()
plt.figure(figsize=(15, 15))
plt.scatter(ne_phi, ne_lambda, marker="+", s=200, label="Neon", color="black")
plt.scatter(hg_phi, hg_lambda, marker="x", s=200, label="Mercury", color="black")
plt.scatter(h1_phi_new, f(h1_phi_new), marker=4, s=150, label="Hydrogen", color="black")
plt.scatter(h1_phi_new, f(h1_phi_new+2), marker=4, s=150, label="Hydrogen", color="black")
plt.scatter(h1_phi_new, f(h1_phi_new-2), marker=4, s=150, label="Hydrogen", color="black")
grid = np.linspace(X.min(), X.max(), 1000)
plt.plot(grid, f(grid), color="red", label="Interpolation curve", lw=2)
plt.xlabel("Angles", fontsize=15)
plt.ylabel("$\lambda$", fontsize=15)
plt.legend()
xticklocs = np.linspace(X.min(), X.max(), 20)
yticklocs = np.linspace(y.min(), y.max(), 20)
plt.xticks(xticklocs)
plt.yticks(yticklocs)
plt.grid(lw=2)
plt.savefig(fname="2.2_Hydrogen_Spectre.pdf", dpi=900, format="pdf", papertype="a4")
plt.savefig(fname="2.2_Hydrogen_Spectre.png", format="png")
plt.show()
h_lambda_predicted_lo = f(h1_phi_new - delta_phi) # interval for the error estimate
h_lambda_predicted_hi = f(h1_phi_new + delta_phi)
h_lambda_predicted = (h_lambda_predicted_lo + h_lambda_predicted_hi) / 2
for i in range(4):
error = (h_lambda_predicted_hi[i] - h_lambda_predicted_lo[i]) / 2
print("h_lambda_{} = ".format(i+3), labwork.sciRoundD(h_lambda_predicted[i],
error,
"Angstrom"
))
labwork.sciPrintD(h_lambda_predicted[i],
error,
"h_lambda_{} = ".format(i+3))
h_lambda_predicted, (h_lambda_predicted_hi - h_lambda_predicted_lo) / 2
Explanation:
End of explanation
n = 2
R = np.array([((n*m)**2) / (h_lambda_predicted[m-3] * (m*m - n*n))for m in range(3, 7)])
R # theory: R = 109 677.6 cm^(-1)
R.mean() # the wavelengths were in angstroms -> multiply by 10^10 to get m^(-1)
R.mean() * 10 ** 10 # this is in m^(-1)
R_predicted = R.mean() * 10 ** 8 # this is in cm^(-1)
R_predicted
R * 10 ** 8 # theory: R = 109 677.6 cm^(-1)
labwork.sciPrintR(R_predicted , R.std(ddof=1) / (4)**0.5 / R.mean(), "R = ") # with its uncertainty
print("R = ", labwork.sciRoundR(R_predicted , R.std(ddof=1) / (4)**0.5 / R.mean(), "cm^(-1) "))
R_theory = 109677.6
(R_theory - R_predicted) / R_theory # relative deviation from theory
Explanation: <img src="http://mypresentation.ru/documents/637d3b7041847466abfc8a4779050bdc/img17.jpg">
$$\frac{1}{\lambda} = R Z^2 (\frac{1}{n^2} - \frac{1}{m^2})$$
$$R = \frac{(nm)^2}{\lambda Z^2 (m^2 - n^2)}$$
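As a quick numeric cross-check of this formula (taking the textbook H-alpha wavelength of about 6563 Angstrom as an assumed input, not a value measured in this work):
```
# Sketch: n=2, m=3 (H-alpha), wavelength 6563 Angstrom = 6563e-8 cm.
n, m = 2, 3
lambda_cm = 6563e-8
R_check = (n * m) ** 2 / (lambda_cm * (m * m - n * n))
print(R_check)  # about 1.097e5 cm^-1, close to the tabulated 109 677.6 cm^-1
```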
End of explanation
<END_TASK> |
15,865 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
House Prices Estimator
Note
Step1: First problem
The training and test datasets have almost the same size.
Step2: Selecting only numeric columns (by now)
Step3: Find if there's null values
Step4: Normalizing
Step5: Using Box-Cox
Step6: Splitting dataset in train and test (getting batches)
Step7: Selecting good features...
Step8: KFold
Step9: Anomaly Detection
Step10: Models
Multilayer Perceptron
Step11: Stacked model
Step12: Evaluation
It has to be used the root mean squared error, RMSE.
Step13: Get Predictions
Good results without data_gauss | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
#load the files
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
data = pd.concat([train, test])
#size of training dataset
train_samples = train.shape[0]
#print some of them
data.head()
# remove the Id feature
data.drop(['Id'],1, inplace=True);
data.info()
Explanation: House Prices Estimator
Note: It's a competition from Kaggle.com and the input data was retrieved from there.
Details
Goal
It is your job to predict the sales price for each house. For each Id in the test set, you must predict the value of the SalePrice variable.
Metric
Submissions are evaluated on Root-Mean-Squared-Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sales price. (Taking logs means that errors in predicting expensive houses and cheap houses will affect the result equally.)
Submission File Format
The file should contain a header and have the following format:
Id,SalePrice
1461,169000.1
1462,187724.1233
1463,175221
etc.
TODO
Use another algorithm to predict the house price
More feature engineering
Add more comments, thoughts, conclusions, ...
Come up with new ideas..
Data Analysis
End of explanation
print("Size training: {}".format(train.shape[0]))
print("Size testing: {}".format(test.shape[0]))
Explanation: First problem
The training and test datasets have almost the same size.
End of explanation
datanum = data.select_dtypes([np.number])
datanum.describe()
data.select_dtypes(exclude=[np.number]).head()
Explanation: Selecting only numeric columns (by now)
End of explanation
datanum.columns[datanum.isnull().any()].tolist()
#number of row without NaN
print(datanum.shape[0] - datanum.dropna().shape[0])
#list of columns with NaN
datanum.columns[datanum.isnull().any()].tolist()
#Filling with the mean
datanum_no_nan = datanum.fillna(datanum.mean())
#check
datanum_no_nan.columns[datanum_no_nan.isnull().any()].tolist()
Explanation: Find if there's null values
End of explanation
import matplotlib.pyplot as plt
datanum_no_nan.drop(['SalePrice'], axis=1).head(15).plot()
plt.show()
#Squeeze the data to [0,1]
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler()
columns = datanum_no_nan.columns
columns = columns.drop('SalePrice')
print("Features: {}".format(columns))
data_norm = datanum_no_nan
data_norm[columns] = scaler.fit_transform(datanum_no_nan[columns])
print("Train shape: {}".format(data_norm.shape))
data_norm.drop(['SalePrice'], axis=1).head(15).plot()
plt.show()
data_norm.describe().T
#plotting distributions of numeric features
data_norm.hist(bins=50, figsize=(22,16))
plt.show()
Explanation: Normalizing
End of explanation
data_norm['1stFlrSF'].hist()
plt.show()
#transform the data so it's closest to normal
from scipy import stats
data_gauss = data_norm.copy()
for f in datanum.columns.tolist():
data_gauss[f], _ = stats.boxcox(data_gauss[f]+0.01)
#rescale again
std_scaler = preprocessing.StandardScaler()
data_gauss[columns] = std_scaler.fit_transform(data_gauss[columns])
data_gauss['1stFlrSF'].hist()
plt.show()
#plotting distributions of numeric features
data_gauss.hist(bins=50, figsize=(22,16))
plt.show()
Explanation: Using Box-Cox
End of explanation
#include no numbers columns
data.select_dtypes(exclude=[np.number]).head()
data_categorical = pd.get_dummies(data.select_dtypes(exclude=[np.number]))
data_all = pd.concat([data_norm, data_categorical], axis=1)
Explanation: Splitting dataset in train and test (getting batches)
End of explanation
#data_norm.columns.tolist()
feat_list = ['1stFlrSF',
#'2ndFlrSF',
#'3SsnPorch',
'BedroomAbvGr',
'BsmtFinSF1',
#'BsmtFinSF2',
#'BsmtFullBath',
#'BsmtHalfBath',
'BsmtUnfSF',
#'EnclosedPorch',
#'Fireplaces',
#'FullBath',
'GarageArea',
'GarageCars',
'GarageYrBlt',
#'GrLivArea',
#'HalfBath',
#'KitchenAbvGr',
'LotArea',
'LotFrontage',
#'LowQualFinSF',
'MSSubClass',
'MasVnrArea',
#'MiscVal',
'MoSold',
'OpenPorchSF',
'OverallCond',
'OverallQual',
'PoolArea',
#'SalePrice',
#'ScreenPorch',
'TotRmsAbvGrd',
'TotalBsmtSF',
'WoodDeckSF',
'YearBuilt',
'YearRemodAdd']
#'YrSold']
%matplotlib inline
import seaborn as sns
fig = plt.figure(figsize=(14, 10))
sns.heatmap(data_norm[feat_list+['SalePrice']].corr())
#heatmap
fig = plt.figure(figsize=(14, 10))
sns.heatmap(data_norm.corr())
# Correlation features
data_norm.corr()['SalePrice'].sort_values().tail(13)
feat_low_corr = ['KitchenAbvGr',
'EnclosedPorch',
'MSSubClass',
'OverallCond',
'YrSold',
'LowQualFinSF',
'MiscVal',
'BsmtHalfBath',
'BsmtFinSF2',
'MoSold',
'3SsnPorch',
'PoolArea',
'ScreenPorch']
feat_high_corr = ['Fireplaces',
'MasVnrArea',
'YearRemodAdd',
'YearBuilt',
'TotRmsAbvGrd',
'FullBath',
'1stFlrSF',
'TotalBsmtSF',
'GarageArea',
'GarageCars',
'GrLivArea',
'OverallQual']
data_norm_low_corr = data_norm[feat_low_corr]
data_norm_high_corr = data_norm[feat_high_corr]
Explanation: Selecting good features...
End of explanation
from sklearn.model_selection import KFold
y = np.array(data_all['SalePrice'])
X = np.array(data_norm_high_corr)
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
print("Shape X train: {}".format(X_train.shape))
print("Shape y train: {}".format(y_train.shape))
print("Shape X test: {}".format(X_test.shape))
print("Shape y test: {}".format(y_test.shape))
kf = KFold(n_splits=3, random_state=9, shuffle=True)
print(kf)
Explanation: KFold
End of explanation
#plotting PCA
from sklearn.decomposition import PCA
def plotPCA(X, y):
pca = PCA(n_components=1)
X_r = pca.fit(X).transform(X)
plt.plot(X_r, y, 'x')
from sklearn.covariance import EllipticEnvelope
# fit the model
ee = EllipticEnvelope(contamination=0.05,
assume_centered=True,
random_state=9)
ee.fit(X_train)
pred = ee.predict(X_train)
X_train = X_train[pred == 1]
y_train = y_train[pred == 1]
print(X_train.shape)
print(y_train.shape)
#after removing anomalies
plotPCA(X_train, y_train)
Explanation: Anomaly Detection
End of explanation
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
rf = MLPRegressor(activation='relu',
solver='lbfgs',
#learning_rate_init=1e-2,
#learning_rate='adaptive',
#alpha=0.0001,
max_iter=400,
#shuffle=True,
hidden_layer_sizes=(64,64),
warm_start=True,
random_state=9,
verbose=False)
for e in range(1):
batch = 1;
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
rf.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, rf.predict(X_t))**0.5
v_error = mean_squared_error(y_v, rf.predict(X_v))**0.5
print("{}-{}) Training error: {:.2f} Validation error: {:.2f}".format(e, batch, t_error, v_error))
batch += 1
#Scores
print("Training score: {:.4f}".format(rf.score(X_train, y_train)))
# Gradient boosting
from sklearn import ensemble
params = {'n_estimators': 100, 'max_depth': 50, 'min_samples_split': 5,
'learning_rate': 0.1, 'loss': 'ls', 'random_state':9, 'warm_start':True}
gbr = ensemble.GradientBoostingRegressor(**params)
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
gbr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, gbr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, gbr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f}".format(batch, t_error, v_error))
batch += 1
#Scores
print("Training score: {:.4f}".format(gbr.score(X_train, y_train)))
# AdaBoost
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
abr = AdaBoostRegressor(DecisionTreeRegressor(max_depth=50),
n_estimators=100, random_state=9)
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
abr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, abr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, abr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f}".format(batch, t_error, v_error))
batch += 1
#Scores
print("Training score: {:.4f}".format(abr.score(X_train, y_train)))
# Lasso
from sklearn.linear_model import Lasso
lr = Lasso()
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
lr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, lr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, lr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f}".format(batch, t_error, v_error))
batch += 1
#Scores
print("Training score: {:.4f}".format(lr.score(X_train, y_train)))
Explanation: Models
Multilayer Perceptron
End of explanation
### Testing
### Ada + mlp + gradient boosting -> level 1 predictions
### level 1 -> mlp -> level 2 predictions (final)
# Training
#mlp1 = MLPRegressor(activation='logistic',
# solver='sgd',
# hidden_layer_sizes=(5,5),
# learning_rate='adaptive',
# random_state=9,
# warm_start=True,
# verbose=False)
from sklearn.linear_model import LogisticRegression
mlp = LogisticRegression(random_state=9)
sclr = preprocessing.StandardScaler()
def stack_training(X, y):
X0 = rf.predict(X)
X1 = gbr.predict(X)
X2 = abr.predict(X)
X3 = lr.predict(X)
Xt = np.array([X0, X1, X2, X3]).T
#Xt = np.array([X0, X1, X2, X3, X1+X3, X2*X3, X0*X2*X3, X0/X2, X1/X3, X0/X3, (X0+X1+X2+X3)/4]).T
Xt = sclr.fit_transform(Xt)
mlp.fit(Xt, y)
def stack_predict(X, verbose=False):
X0 = rf.predict(X)
X1 = gbr.predict(X)
X2 = abr.predict(X)
X3 = lr.predict(X)
Xt = np.array([X0, X1, X2, X3]).T
#Xt = np.array([X0, X1, X2, X3, X1+X3, X2*X3, X0*X2*X3, X0/X2, X1/X3, X0/X3, (X0+X1+X2+X3)/4]).T
Xt = sclr.transform(Xt)
if verbose:
print("Training score: {:.4f}".format(mlp.score(Xt, y_train)))
plotPCA(Xt, y_train)
return mlp.predict(Xt)
#
batch = 0
kf = KFold(n_splits=10, random_state=9, shuffle=True)
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
stack_training(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, abr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, abr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f}".format(batch, t_error, v_error))
batch += 1
rmse = mean_squared_error(y_train, stack_predict(X_train, True))**0.5
print("RMSE: {:.4f}".format(rmse))
Explanation: Stacked model
End of explanation
from sklearn.metrics import mean_squared_error
import random
RMSE_rf = mean_squared_error(y_train, rf.predict(X_train))**0.5
RMSE_gbr = mean_squared_error(y_train, gbr.predict(X_train))**0.5
RMSE_abr = mean_squared_error(y_train, abr.predict(X_train))**0.5
RMSE_lr = mean_squared_error(y_train, lr.predict(X_train))**0.5
RMSE_stack = mean_squared_error(y_train, stack_predict(X_train))**0.5
def avg_predict(X):
return (rf.predict(X) + gbr.predict(X) + abr.predict(X) + lr.predict(X))/4
predictions = avg_predict(X_train)
RMSE_total = mean_squared_error(y_train, predictions)**0.5
print("RMSE mlp: {:.3f}".format(RMSE_rf))
print("RMSE gbr: {:.3f}".format(RMSE_gbr))
print("RMSE abr: {:.3f}".format(RMSE_abr))
print("RMSE lr: {:.3f}".format(RMSE_lr))
print("====")
print("RMSE average: {:.3f}".format(RMSE_total))
print("RMSE stacked: {:.3f}".format(RMSE_stack))
Explanation: Evaluation
The root mean squared error (RMSE) is used to compare the models.
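For reference, the Kaggle metric described in the introduction is the RMSE of the logarithms of the prices rather than of the raw prices; a minimal sketch (the function name is illustrative):
```
import numpy as np

def rmse_of_logs(y_true, y_pred):
    # RMSE between log(predicted) and log(observed) sale prices
    return np.sqrt(np.mean((np.log(y_pred) - np.log(y_true)) ** 2))
```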
End of explanation
import os
#predict = avg_predict(X_test)
predict = stack_predict(X_test)
file = "Id,SalePrice" + os.linesep
startId = 1461
for i in range(len(X_test)):
file += "{},{}".format(startId, (int)(predict[i])) + os.linesep
startId += 1
#print(file)
# Save to file
with open('attempt.txt', 'w') as f:
f.write(file)
Explanation: Get Predictions
Good results without data_gauss
End of explanation
<END_TASK> |
15,866 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<img src="http
Step1: Single Risk Factor
The example is based on a single risk factor, a geometric_brownian_motion object.
Step2: American Put Option
We also model only a single derivative instrument.
Step3: Large Portfolio
However, the derivatives_portfolio object we compose consists of 100 derivatives positions. Each option differes with respect to the strike.
Step4: Sequential Valuation
First, the derivatives portfolio with sequential valuation.
Step5: The call of the get_values method to value all instruments ...
Step6: ... and the results visualized.
Step7: Parallel Valuation
Second, the derivatives portfolio with parallel valuation.
Step8: The call of the get_values method for the parall valuation case.
Step9: Again, the results visualized (and compared to the sequential results).
Step10: Speed-up
The realized speed-up is of course dependend on the hardware used, and in particular the number of cores (threads) available. | <ASSISTANT_TASK:>
Python Code:
from dx import *
import time
from pylab import plt
plt.style.use('seaborn')
%matplotlib inline
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Parallel Valuation of Large Portfolios
Derivatives (portfolio) valuation by Monte Carlo simulation is a computationally demanding task. For practical applications, when valuation speed plays an important role, parallelization of both simulation and valuation tasks might prove a useful strategy. DX Analytics has built in a basic parallelization option which allows the use of the Python multiprocessing module. Depending on the tasks at hand this can already lead to significant speed-ups.
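As a generic illustration of that idea only (this is not how DX Analytics implements it internally; the valuation function below is a stand-in): independent valuations can be farmed out to worker processes with the standard-library multiprocessing module.
```
# Sketch: map a placeholder per-position valuation over a process pool.
from multiprocessing import Pool
import numpy.random as npr

def value_position(seed):
    # stand-in for valuing one independent derivatives position by Monte Carlo
    npr.seed(seed)
    return npr.standard_normal(100000).mean()

if __name__ == '__main__':
    pool = Pool()  # one worker per available core by default
    values = pool.map(value_position, range(100))
    pool.close(); pool.join()
```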
End of explanation
# constant short rate
r = constant_short_rate('r', 0.02)
# market environments
me_gbm = market_environment('gbm', dt.datetime(2015, 1, 1))
# geometric Brownian motion
me_gbm.add_constant('initial_value', 100.)
me_gbm.add_constant('volatility', 0.2)
me_gbm.add_constant('currency', 'EUR')
me_gbm.add_constant('model', 'gbm')
# valuation environment
val_env = market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('paths', 25000)
val_env.add_constant('frequency', 'M')
val_env.add_curve('discount_curve', r)
val_env.add_constant('starting_date', dt.datetime(2015, 1, 1))
val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
# add valuation environment to market environments
me_gbm.add_environment(val_env)
risk_factors = {'gbm' : me_gbm}
Explanation: Single Risk Factor
The example is based on a single risk factor, a geometric_brownian_motion object.
End of explanation
gbm = geometric_brownian_motion('gbm_obj', me_gbm)
me_put = market_environment('put', dt.datetime(2015, 1, 1))
me_put.add_constant('maturity', dt.datetime(2015, 12, 31))
me_put.add_constant('strike', 40.)
me_put.add_constant('currency', 'EUR')
me_put.add_environment(val_env)
am_put = valuation_mcs_american_single(
'am_put', mar_env=me_put, underlying=gbm,
payoff_func='np.maximum(strike - instrument_values, 0)')
Explanation: American Put Option
We also model only a single derivative instrument.
End of explanation
positions = {}
strikes = np.linspace(80, 120, 100)
for i, strike in enumerate(strikes):
positions[i] = derivatives_position(
name='am_put_pos_%s' % strike,
quantity=1,
underlyings=['gbm'],
mar_env=me_put,
otype='American single',
payoff_func='np.maximum(%5.3f - instrument_values, 0)' % strike)
Explanation: Large Portfolio
However, the derivatives_portfolio object we compose consists of 100 derivatives positions. Each option differs with respect to the strike.
End of explanation
port_sequ = derivatives_portfolio(
name='portfolio',
positions=positions,
val_env=val_env,
risk_factors=risk_factors,
correlations=None,
parallel=False) # sequential calculation
Explanation: Sequential Valuation
First, the derivatives portfolio with sequential valuation.
End of explanation
t0 = time.time()
ress = port_sequ.get_values()
ts = time.time() - t0
print('Time in sec %.2f' % ts)
Explanation: The call of the get_values method to value all instruments ...
End of explanation
ress['strike'] = strikes
ress.set_index('strike')['value'].plot(figsize=(10, 6))
plt.ylabel('option value estimates')
Explanation: ... and the results visualized.
End of explanation
port_para = derivatives_portfolio(
'portfolio',
positions,
val_env,
risk_factors,
correlations=None,
parallel=True) # parallel valuation
Explanation: Parallel Valuation
Second, the derivatives portfolio with parallel valuation.
End of explanation
t0 = time.time()
resp = port_para.get_values()
# parallel valuation with as many cores as available
tp = time.time() - t0
print('Time in sec %.2f' % tp)
Explanation: The call of the get_values method for the parallel valuation case.
End of explanation
plt.figure(figsize=(10, 6))
plt.plot(strikes, resp['value'].values, 'r.', label='parallel')
plt.plot(strikes, ress['value'].values, 'b', label='sequential')
plt.legend(loc=0)
plt.ylabel('option value estimates')
Explanation: Again, the results visualized (and compared to the sequential results).
End of explanation
ts / tp
# speed-up factor
# of course hardware-dependent
wi = 0.4
plt.figure(figsize=(10, 6))
plt.bar((1.5 - wi/2, 2.5 - wi/2), (ts/ts, tp/ts), width=wi)
plt.xticks((1.5, 2.5), ('sequential', 'parallel'))
plt.ylim(0, 1.1), plt.xlim(0.75, 3.25)
plt.ylabel('relative performance (lower = better)')
plt.title('DX Analytics Portfolio Valuation')
Explanation: Speed-up
The realized speed-up is of course dependent on the hardware used, and in particular the number of cores (threads) available.
End of explanation
<END_TASK> |
15,867 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
Compute the Fermi distribution at energy, mu and kT.
return (np.exp((energy-mu)/kT)+1)**-1
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = {\displaystyle \frac{1}{e^{(\epsilon - \mu)/kT}+1}}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
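A quick check of the fermidist implementation shown above (the values follow directly from the formula): at energy equal to mu the exponent vanishes, so the distribution is exactly 1/2 for any kT, and because np.exp is vectorized the same call works on an array without loops.
```
print(fermidist(1.0, 1.0, 0.5))                       # 0.5 exactly
print(fermidist(np.linspace(0.0, 2.0, 5), 1.0, 1.0))  # values symmetric about 0.5
```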
End of explanation
np.arange(0,10.01,0.01)
def plot_fermidist(mu, kT):
energy=np.arange(0,10.01,0.01)
plt.figure(figsize=(10,6))
plt.plot(energy,fermidist(energy,mu,kT))
plt.tick_params(axis='x', top='off')
plt.tick_params(axis='y', right='off')
plt.xlabel('Energy')
plt.xlim(left=0, right=10)
plt.ylim(bottom=0.0,top=1.0)
plt.ylabel('Fermi Distribution')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu=(0.0,5.0,0.1), kT=(0.1,10.0,0.1));
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
<END_TASK> |
15,868 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: Model Architecture
Train, Validate and Test the Model
Step5: A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step6: Evaluate the Model
evaluate the performance of the model on the test set.
Step7: Step 3
Step8: Predict the Sign Type for Each Image
Step9: Analyze Performance
Step10: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability
Step11: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note | <ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
import cv2 # for grayscale and normalize
# TODO: Fill this in based on where you saved the training and testing data
training_file ='traffic-signs-data/train.p'
validation_file='traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_trainLd, y_trainLd = train['features'], train['labels']
X_validLd, y_validLd = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
#X_trainLd=X_trainLd.astype(float)
#y_trainLd=y_trainLd.astype(float)
#X_validLd=X_validLd.astype(float)
#y_validLd=y_validLd.astype(float)
print("Xtrain shape : "+str(X_trainLd.shape)+" ytrain shape : "+str(y_trainLd.shape))
print("Xtrain shape : "+str(X_trainLd.shape)+" ytrain shape : "+str(y_trainLd.shape))
print("X_test shape : "+str(X_test.shape)+" y_test shape : "+str(y_test.shape))
from sklearn.model_selection import train_test_split
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = X_trainLd.shape[0]
# TODO: Number of validation examples
n_validation = X_validLd.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of an traffic sign image?
image_shape = X_trainLd.shape[1:4]
# TODO: How many unique classes/labels there are in the dataset.
# Number of unique classes/labels in the dataset, computed instead of hard-coded (signnames.csv lists 43 of them)
n_classes = len(np.unique(y_trainLd))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
import random
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0, len(X_trainLd))
image = X_trainLd[100] #squeeze : Remove single-dimensional entries from the shape of an array.
image = image.astype(float)
#normalise
def normit(img):
    # Quick approximate normalization: (pixel - 128) / 128, keeping the 3 colour channels
    image = img.astype(float)
    norm = (image - 128.0) / 128.0
    return norm
temp = normit(image)
plt.figure(figsize=(1,1))
plt.imshow(temp.squeeze())
Explanation: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
End of explanation
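A quick way to look at the class distribution mentioned above is a bar chart of label counts per class; this short sketch reuses numpy and matplotlib, which are already imported:
# Sketch: how many training examples each of the 43 classes has
classes, counts = np.unique(y_trainLd, return_counts=True)
plt.figure(figsize=(12, 4))
plt.bar(classes, counts)
plt.xlabel('Class id')
plt.ylabel('Number of training examples')
plt.title('Distribution of traffic sign classes in the training set')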
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
from sklearn.utils import shuffle
print("Test")
## xtrain
grey_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2]])
norm_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2],3])
norm_X_train = norm_X_train.astype(float)
X_train, y_train = shuffle(X_trainLd, y_trainLd)
shuff_X_train, shuff_y_train =X_train, y_train
X_valid, y_valid = X_validLd, y_validLd
i=0
for p in X_train:
t = normit(p)
norm_X_train[i] = t
i=i+1
print("after normalise")
##validate
norm_X_valid = np.zeros(shape=[X_validLd.shape[0],X_validLd.shape[1],X_validLd.shape[2],3])
norm_X_valid=norm_X_valid.astype(float)
i=0
for v in X_valid:
tv = normit(v)
#tempv = tv.reshape(32,32,1)
norm_X_valid[i] = tv
i=i+1
##test
norm_X_test=[]
norm_X_test = np.zeros(shape=[X_test.shape[0],X_test.shape[1],X_test.shape[2],3])
norm_X_test=norm_X_test.astype(float)
i=0
for testim in X_test:
tt = normit(testim)
norm_X_test[i] = tt
i=i+1
print("fin")
image22 = norm_X_train[110] ; imageb4 = X_train[110]; imagev=norm_X_valid[100]; imaget=norm_X_test[100]
plt.figure(figsize=(1,1))
plt.imshow(imagev.squeeze())
plt.figure(figsize=(1,1))
plt.imshow(imaget.squeeze()) #squeeze : Remove single-dimensional entries from the shape of an array
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
Number of examples per label (some have more than others).
Generate fake data (a short augmentation sketch is given right after this section).
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
End of explanation
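The "generate fake data" suggestion above can be prototyped with a simple random perturbation of the training images. This is only a sketch (the rotation and shift ranges are arbitrary assumptions, not values used in this project), using the cv2 module already imported earlier:
import numpy as np
import cv2

def jitter(img, max_angle=10, max_shift=2):
    # Randomly rotate and translate a 32x32 image to create an augmented copy
    rows, cols = img.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    dx, dy = np.random.uniform(-max_shift, max_shift, size=2)
    M = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, 1.0)
    M[0, 2] += dx
    M[1, 2] += dy
    return cv2.warpAffine(img, M, (cols, rows))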
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
EPOCHS = 30
BATCH_SIZE = 128
#X_train=X_train.astype(float)
X_train=norm_X_train
#print(X_train[20])
#X_train=shuff_X_train
#X_valid=norm_X_valid
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0.0
    sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5,3, 6), mean = mu, stddev = sigma)) #SMcM depth cahnged to 3
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b #try same should be better (padding)
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
#conv1 = tf.nn.relu(conv1) #SMcM add an extra relu
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
print("model")
image22 = X_train[110] #squeeze : Remove single-dimensional entries from the shape of an array
print(norm_X_train.shape)
print(X_train.shape)
plt.figure(figsize=(1,1))
plt.imshow(image22.squeeze())
#print(image22)
Explanation: Model Architecture
Train, Validate and Test the Model
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Features and Labels
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
print("start")
#Training Pipeline
rate = 0.0025 # learning rate
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
#Model Evaluation
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
#Train the Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(norm_X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './sign')
print("Model saved")
Explanation: A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
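One way to spot over- or underfitting is to record the accuracies per epoch and plot them. This is only a sketch: it assumes two Python lists, train_accuracies and valid_accuracies, filled inside the training loop above (they are not defined in the original code).
# Hypothetical lists collected during training, one value per epoch:
# train_accuracies, valid_accuracies = [...], [...]
plt.plot(train_accuracies, label='training accuracy')
plt.plot(valid_accuracies, label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()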
#evaluate the model
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
print("restored")
test_accuracy = evaluate(norm_X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
evaluate the performance of the model on the test set.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
#http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset
#http://benchmark.ini.rub.de/Dataset/GTSRB_Online-Test-Images.zip
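# Hedged sketch: load five sign images downloaded by hand into a folder "web_signs/"
# (the folder name and file list are assumptions, not part of the original project files).
import glob
my_images = []
for fname in sorted(glob.glob('web_signs/*.png')):
    img = cv2.imread(fname)                     # BGR, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert to RGB for display
    img = cv2.resize(img, (32, 32))             # match the training image size
    my_images.append(img)
    plt.figure(figsize=(1, 1))
    plt.imshow(img)
my_images = np.array(my_images)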
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
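# Hedged sketch: normalize the web images with the same normit() used above and predict
# their class ids with the restored model (my_images is the assumed array from the previous cell).
my_images_norm = np.array([normit(img) for img in my_images])
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    my_predictions = sess.run(tf.argmax(logits, 1), feed_dict={x: my_images_norm})
print(my_predictions)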
Explanation: Predict the Sign Type for Each Image
End of explanation
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
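# Hedged sketch: compare the predictions to hand-made labels for the five web images.
# my_labels is an assumed list of the true class ids, filled in by hand from signnames.csv.
# my_labels = [14, 13, 1, 25, 38]
web_accuracy = np.mean(np.array(my_predictions) == np.array(my_labels))
print("Accuracy on web images = {:.0%}".format(web_accuracy))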
Explanation: Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
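# Hedged sketch: top-5 softmax probabilities for the web images, using tf.nn.top_k as described below
# (reuses the assumed my_images_norm array from the earlier sketch).
softmax = tf.nn.softmax(logits)
top5 = tf.nn.top_k(softmax, k=5)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    values, indices = sess.run(top5, feed_dict={x: my_images_norm})
print(values)
print(indices)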
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
Explanation: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Step 4 (Optional): Visualize the Neural Network's State with Test Images
This section is not required but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation
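A possible call is shown below only as a commented-out sketch: it assumes the first convolution tensor is available under the name conv1, but in the LeNet function above that tensor is a local variable, so it would first need to be exposed (for example by returning it alongside logits) before this call would run.
# with tf.Session() as sess:
#     saver.restore(sess, tf.train.latest_checkpoint('.'))
#     outputFeatureMap(norm_X_test[0:1], conv1, plt_num=1)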
<END_TASK> |
15,869 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
Step1: Lists
These exercises are a bit awkward
Step2: Exercise 2
Step3: But beware, typing is always optional in Python
Step4: Exercise 3
Step5: Our implementation is obviously slower than the standard library's x in liste test...
But not by much
Step6: Exercise 4
Step7: Exercise 5
Step8: The complexity is linear, in $\mathcal{O}(\max(|\text{list 1}|, |\text{list 2}|))$.
Exercise 6
Step9: Exercise 7
Step10: Exercise 8
Step11: Exercise 9
Step12: A purely functional version is less easy than an imperative version with a boolean reference.
Step14: Singly linked lists (defined by hand)
Since these exercises were a bit awkward to write with Python's "lists", which are not singly linked lists, I propose another solution in which we define a small class representing a singly linked list, and we write the requested functions with this class.
The ListeChainee class
We will assume that the lists we represent never contain the value None, which is used to represent the absence of a head and/or tail of the list.
Step15: Exercise 1
Step16: Exercise 2
Step17: We can check that this works by looking, for example, at the id of two objects when the second is a copy of the first
Step18: And so concatenating two linked lists is easy
Step19: Exercise 3
Step20: Exercise 4
Step21: Exercise 5
Step22: The complexity is quadratic, in $\mathcal{O}(\max(|\text{list 1}|, |\text{list 2}|)^2)$, because of the copies.
Exercise 6
Step23: We can easily write a variant that is tail recursive
Step24: Exercise 7
Step25: And so it is fast
Step26: Exercise 8
Step27: If we want them in increasing order, we would have to use miroir, which is quadratic.
We might as well directly write a function intervale(a, b) that returns the singly linked list containing a
Step28: Another approach is to write the function mymap and to say that
python
intervale_bis(a, b) = miroir(mymap(lambda x
Step29: Exercise 9
Step30: A purely functional version is less easy than an imperative version with a boolean reference.
Step31: We are ready to write estPremier
Step32: Indeed, it suffices to first build the list of integers from 2 to $\lfloor \sqrt{n} \rfloor$, to filter it and keep those that divide $n$, and to check whether there is no divisor (taille(..) == 0), in which case $n$ is prime, or whether $n$ has at least one divisor, in which case $n$ is not prime.
Step33: In the example above we see the prime numbers as those having no divisor, and the non-prime numbers as those having at least one divisor.
Step34: A few comparison sorts
We will sort in increasing order.
Step35: Exercise 10
Step36: Time complexity $\mathcal{O}(n^2)$.
Exercise 11
Step37: Exercise 12
Step38: (We see that the list autre has been reversed)
Step39: Time complexity
Step40: Time complexity $\mathcal{O}(n \log n)$.
Comparisons
Step41: This is enough to check that merge sort is much more efficient than the others.
We also see that insertion sort and selection sort are worse than linear,
But that merge sort is almost linear (for small $n$, $n \log n$ is almost linear).
Lists
Step42: Exercise 17
Step43: Exercise 18
Step44: Exercise 19
Step45: Exercise 20
Step46: Exercise 21
Step47: Can our implementation be faster than the x in liste test?
No, but it is just as fast. That's already not bad!
Step48: Exercise 22
Step49: Exercise 23
I let you work it out for premiers.
Step50: Exercise 24
Step51: Very handy for computing sums, in particular.
Exercise 25
Step52: For small lists, the recursive version is as efficient as the imperative version. Nice!
Step53: Bonus
Step54: Exercise 26
Step55: Beware, in Python lists are NOT singly linked, so lambda l, x
Step56: With an improved syntax, we get very close to the Caml/OCaml syntax
Step57: Exercise 28
Count the number of leaves and nodes.
Step58: Exercise 29
Step59: Exercise 30
Bonus. (Write a function testing whether a tree labeled with integers is a tournament tree.)
Binary tree traversals
After a few exercises manipulating this dictionary structure, writing the rest is not too hard.
Exercise 31
Step60: Exercise 32
Step61: Why do they have quadratic complexity? Concatenation (@ in OCaml, + in Python) is not constant time but linear in the size of the longest list.
Exercise 33
Step62: Exercise 34
Step63: By replacing the queue with a stack (a plain list), we obtain the depth-first traversal, with the same complexity.
Step64: Exercise 35 and the end
Reconstruction from the prefix traversal
Step65: The idea of this solution is the following
Step66: And this example will fail
Step67: Reconstruction from the breadth-first traversal
This is not obvious if you have never seen it. The idea is to use a queue
to store the trees that we rebuild little by little from the leaves. The queue
makes it possible to retrieve the right subtrees when we encounter a node
Step68: The same algorithm (well, almost, up to swapping g and d)
with a stack gives another version of the reconstruction from the prefix traversal. | <ASSISTANT_TASK:>
Python Code:
from sys import version
print(version)
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
TP 2 - Programming for the preparation for the French maths agrégation, computer science option
In Python, version 3.
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def taille(liste : List[_a]) -> int:
longueur = 0
for _ in liste:
longueur += 1
return longueur
taille([])
taille([1, 2, 3])
len([])
len([1, 2, 3])
Explanation: Lists
These exercises are a bit awkward: Python's "lists" are not singly linked lists!
Exercise 1: taille
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def concatene(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:
# return liste1 + liste2 # easy solution
liste = []
for i in liste1:
liste.append(i)
for i in liste2:
liste.append(i)
return liste
concatene([1, 2], [3, 4])
[1, 2] + [3, 4]
Explanation: Exercise 2: concat
End of explanation
concatene([1, 2], ["pas", "entier", "?"])
Explanation: But beware, typing is always optional in Python:
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def appartient(x : _a, liste : List[_a]) -> bool:
for y in liste:
if x == y:
return True # on stoppe avant la fin
return False
appartient(1, [])
appartient(1, [1])
appartient(1, [1, 2, 3])
appartient(4, [1, 2, 3])
1 in []
1 in [1]
1 in [1, 2, 3]
4 in [1, 2, 3]
Explanation: Exercise 3: appartient
End of explanation
%timeit appartient(1000, list(range(10000)))
%timeit 1000 in list(range(10000))
Explanation: Our implementation is obviously slower than the standard library's x in liste test...
But not by much:
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def miroir(liste : List[_a]) -> List[_a]:
# return liste[::-1] # version facile
liste2 = []
for x in liste:
liste2.insert(0, x)
return liste2
miroir([2, 3, 5, 7, 11])
[2, 3, 5, 7, 11][::-1]
%timeit miroir([2, 3, 5, 7, 11])
%timeit [2, 3, 5, 7, 11][::-1]
Explanation: Exercise 4: miroir
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def alterne(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:
liste3 = []
i, j = 0, 0
n, m = len(liste1), len(liste2)
while i < n and j < m: # encore deux
liste3.append(liste1[i])
i += 1
liste3.append(liste2[j])
j += 1
while i < n: # si n > m
liste3.append(liste1[i])
i += 1
while j < m: # ou si n < m
liste3.append(liste2[j])
j += 1
return liste3
alterne([3, 5], [2, 4, 6])
alterne([1, 3, 5], [2, 4, 6])
alterne([1, 3, 5], [4, 6])
Explanation: Exercise 5: alterne
The semantics were not very clear, but we can imagine something like this:
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def nb_occurrences(x : _a, liste : List[_a]) -> int:
nb = 0
for y in liste:
if x == y:
nb += 1
return nb
nb_occurrences(0, [1, 2, 3, 4])
nb_occurrences(2, [1, 2, 3, 4])
nb_occurrences(2, [1, 2, 2, 3, 2, 4])
nb_occurrences(5, [1, 2, 3, 4])
Explanation: The complexity is linear, in $\mathcal{O}(\max(|\text{list 1}|, |\text{list 2}|))$.
Exercise 6: nb_occurrences
End of explanation
filter?
from typing import List
def pairs(liste : List[int]) -> List[int]:
# return list(filter(lambda x : x % 2 == 0, liste))
return [x for x in liste if x % 2 == 0]
pairs([1, 2, 3, 4, 5, 6])
pairs([1, 2, 3, 4, 5, 6, 7, 100000])
pairs([1, 2, 3, 4, 5, 6, 7, 100000000000])
pairs([1, 2, 3, 4, 5, 6, 7, 1000000000000000000])
Explanation: Exercise 7: pairs
This is a filtering operation:
End of explanation
from typing import List
def myrange(n : int) -> List[int]:
liste = []
i = 1
while i <= n:
liste.append(i)
i += 1
return liste
myrange(4)
from typing import List
def intervale(a : int, b : int=None) -> List[int]:
if b == None:
a, b = 1, a
liste = []
i = a
while i <= b:
liste.append(i)
i += 1
return liste
intervale(10)
intervale(1, 4)
Explanation: Exercise 8: range
End of explanation
def racine(n : int) -> int:
i = 1
for i in range(n + 1):
if i*i > n:
return i - 1
return i
racine(1)
racine(5)
racine(102)
racine(120031)
from typing import List
def intervale2(a : int, b : int, pas : int=1) -> List[int]:
assert pas > 0
liste = []
i = a
while i <= b:
liste.append(i)
i += pas
return liste
intervale2(2, 12, 1)
intervale2(2, 12, 3)
Explanation: Exercise 9: premiers
Several possibilities. A sieve of Eratosthenes works well, or plain filtering.
I will not use arrays, so we are more or less restricted to filtering (pattern matching).
End of explanation
def estDivisible(n : int, k : int) -> bool:
return (n % k) == 0
estDivisible(10, 2)
estDivisible(10, 3)
estDivisible(10, 4)
estDivisible(10, 5)
def estPremier(n : int) -> bool:
return (n == 2) or (n == 3) or not any(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1)))
for n in range(2, 20):
print(n, list(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1))))
from typing import List
def premiers(n : int) -> List[int]:
return [p for p in intervale2(2, n, 1) if estPremier(p)]
premiers(10)
premiers(100)
Explanation: A purely functional version is less easy than an imperative version with a boolean reference.
End of explanation
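For comparison, here is the imperative version with a boolean flag that the sentence above alludes to; this is only a sketch added for illustration, not part of the original notebook:
def estPremier_imperatif(n: int) -> bool:
    # Imperative primality test driven by a boolean flag
    if n < 2:
        return False
    aucun_diviseur = True  # "no divisor found so far"
    k = 2
    while k * k <= n and aucun_diviseur:
        if n % k == 0:
            aucun_diviseur = False
        k += 1
    return aucun_diviseur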
class ListeChainee():
def __init__(self, hd=None, tl=None):
self.hd = hd
self.tl = tl
def __repr__(self) -> str:
if self.tl is None:
if self.hd is None:
return "[]"
else:
return f"{self.hd} :: []"
else:
return f"{self.hd} :: {self.tl}"
def jolie(self) -> str:
if self.tl is None:
if self.hd is None:
return "[]"
else:
return f"[{self.hd}]"
else:
j = self.tl.jolie()
j = j.replace("[", "").replace("]", "")
if j == "":
return f"[{self.hd}]"
else:
return f"[{self.hd}, {j}]"
# equivalent to :: in OCaml
def insert(hd, tl: ListeChainee) -> ListeChainee:
    """Insert hd at the head of the linked list tl."""
    return ListeChainee(hd=hd, tl=tl)
# empty list, then some larger lists
vide = ListeChainee() # []
l_1 = insert(1, vide) # 1 :: [] ~= [1]
l_12 = insert(2, l_1) # 2 :: 1 :: [] ~= [2, 1]
l_123 = insert(3, l_12) # 3 :: 2 :: 1 :: []
print(vide) # []
print(l_1) # 1 :: []
print(l_12) # 2 :: 1 :: []
print(l_123) # 3 :: 2 :: 1 :: []
print(vide.jolie()) # []
print(l_1.jolie()) # [1]
print(l_12.jolie()) # [2, 1]
print(l_123.jolie()) # [3, 2, 1]
Explanation: Singly linked lists (defined by hand)
Since these exercises were a bit awkward to write with Python's "lists", which are not singly linked lists, I propose another solution in which we define a small class representing a singly linked list, and we write the requested functions with this class.
The ListeChainee class
We will assume that the lists we represent never contain the value None, which is used to represent the absence of a head and/or tail of the list.
End of explanation
from typing import Optional
def taille(liste: Optional[ListeChainee]) -> int:
if liste is None:
return 0
elif liste.tl is None:
return 0 if liste.hd is None else 1
return 1 + taille(liste.tl)
print(taille(vide)) # 0
print(taille(l_1)) # 1
print(taille(l_12)) # 2
print(taille(l_123)) # 3
Explanation: Exercise 1: taille
For example, the length is indeed computed in O(n), with n = taille(liste), using this recursive approach:
End of explanation
def copy(liste: ListeChainee) -> ListeChainee:
if liste.tl is None:
return ListeChainee(hd=liste.hd, tl=None)
else:
return ListeChainee(hd=liste.hd, tl=copy(liste.tl))
Explanation: Exercise 2: concat
I will start by writing a copy function that recursively copies a singly linked list, to be sure that we never modify in place one of the lists given as arguments.
End of explanation
print(id(vide))
print(id(copy(vide)))
Explanation: We can check that this works by looking, for example, at the id of two objects when the second is a copy of the first: the ids will indeed be different.
End of explanation
def concat(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:
if taille(liste1) == 0:
return liste2
elif taille(liste2) == 0:
return liste1
    # new list: this way, changing queue.tl does NOT modify liste1
resultat = copy(liste1)
queue = resultat
while taille(queue.tl) > 0:
queue = queue.tl
assert taille(queue.tl) == 0
queue.tl = ListeChainee(hd=liste2.hd, tl=liste2.tl)
return resultat
print(concat(vide, l_1))
print(vide) # not modified: []
print(l_1) # not modified: 1 :: []
concat(l_1, l_12) # 1 :: 2 :: 1 :: []
concat(l_1, l_123) # 1 :: 3 :: 2 :: 1 :: []
concat(l_1, vide) # 1 :: []
concat(l_12, vide) # 2 :: 1 :: []
concat(l_12, l_1) # 2 :: 1 :: 1 :: []
concat(l_123, l_123) # 3 :: 2 :: 1 :: 3 :: 2 :: 1 :: []
Explanation: And so concatenating two linked lists is easy:
End of explanation
def appartient(x, liste: ListeChainee) -> bool:
if liste.hd is None:
return False
else:
if liste.hd == x:
return True
else:
return appartient(x, liste.tl)
assert appartient(0, vide) == False
assert appartient(0, l_1) == False
assert appartient(0, l_12) == False
assert appartient(0, l_123) == False
assert appartient(1, l_1) == True
assert appartient(1, l_12) == True
assert appartient(1, l_123) == True
assert appartient(2, l_1) == False
assert appartient(2, l_12) == True
assert appartient(2, l_123) == True
assert appartient(3, l_1) == False
assert appartient(3, l_12) == False
assert appartient(3, l_123) == True
Explanation: Exercise 3: appartient
It runs in linear time in the worst case.
End of explanation
def miroir(liste: ListeChainee) -> ListeChainee:
if taille(liste) <= 1:
return copy(liste)
else:
        hd, tl = liste.hd, copy(liste.tl) # O(n)
        juste_hd = ListeChainee(hd=hd, tl=None) # O(1)
        return concat(miroir(tl), juste_hd) # O(n^2) + O(n) because of concat
print(miroir(vide)) # [] => []
print(miroir(l_1)) # [1] => [1]
print(miroir(l_12)) # [2, 1] => [1, 2]
print(miroir(l_123)) # [3, 2, 1] => [1, 2, 3]
Explanation: Exercise 4: miroir
This will run in quadratic time, because of all the copies:
End of explanation
def alterne(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:
if taille(liste1) == 0:
return copy(liste2) # on recopie pour ne rien modifier
if taille(liste2) == 0:
return copy(liste1) # on recopie pour ne rien modifier
h1, t1 = liste1.hd, liste1.tl
h2, t2 = liste2.hd, liste2.tl
return insert(h1, insert(h2, alterne(t1, t2)))
print(alterne(l_1, l_12)) # [1, 2, 1]
print(alterne(l_12, l_1)) # [2, 1, 1]
print(alterne(l_123, l_1)) # [3, 1, 2, 1]
print(alterne(l_123, l_12)) # [3, 2, 2, 1, 1]
print(alterne(l_123, l_123)) # [3, 3, 2, 2, 1, 1]
print(alterne(l_12, l_123)) # [2, 3, 1, 2, 1]
print(alterne(l_1, l_123)) # [1, 3, 2, 1]
Explanation: Exercise 5: alterne
The semantics were not very clear, but we can imagine something like this:
if one of the two lists is empty, take the other one,
if both are non-empty, take the head of l1, then of l2, then alterne(tail of l1, tail of l2)
End of explanation
def nb_occurrences(x, liste: ListeChainee) -> int:
if liste is None or liste.hd is None:
return 0
else:
count = 1 if x == liste.hd else 0
if liste.tl is None:
return count
else:
return count + nb_occurrences(x, liste.tl)
assert nb_occurrences(1, vide) == 0
assert nb_occurrences(1, l_1) == 1
assert nb_occurrences(1, l_12) == 1
assert nb_occurrences(2, l_12) == 1
assert nb_occurrences(1, l_123) == 1
assert nb_occurrences(2, l_123) == 1
assert nb_occurrences(3, l_123) == 1
assert nb_occurrences(1, concat(l_1, l_1)) == 2
assert nb_occurrences(2, concat(l_1, l_12)) == 1
assert nb_occurrences(3, concat(l_12, l_1)) == 0
assert nb_occurrences(1, concat(l_12, l_12)) == 2
assert nb_occurrences(2, concat(l_12, l_12)) == 2
assert nb_occurrences(1, concat(l_123, concat(l_1, l_1))) == 3
assert nb_occurrences(2, concat(l_123, concat(l_1, l_12))) == 2
assert nb_occurrences(3, concat(l_123, concat(l_12, l_1))) == 1
assert nb_occurrences(3, concat(l_123, concat(l_12, l_12))) == 1
Explanation: The complexity is quadratic, in $\mathcal{O}(\max(|\text{list 1}|, |\text{list 2}|)^2)$, because of the copies.
Exercise 6: nb_occurrences
This will run in linear time, in every case.
End of explanation
def nb_occurrences(x, liste: ListeChainee, count=0) -> int:
if liste is None or liste.hd is None:
return count
else:
count += 1 if x == liste.hd else 0
if liste.tl is None:
return count
else:
return nb_occurrences(x, liste.tl, count=count)
Explanation: We can easily write a variant that is tail recursive:
End of explanation
def filtrer(liste: ListeChainee, predicate) -> ListeChainee:
    if liste is None or liste.hd is None: # list of size 0
        return ListeChainee(hd=None, tl=None)
    elif liste.tl is None: # list of size 1
        if predicate(liste.hd): # return [hd]
            return ListeChainee(hd=liste.hd, tl=None)
        else: # return []
            return ListeChainee(hd=None, tl=None)
    else: # list of size >= 2
if predicate(liste.hd):
return insert(liste.hd, filtrer(liste.tl, predicate))
else:
return filtrer(liste.tl, predicate)
Explanation: Exercise 7: pairs
This is a filtering by the predicate x % 2 == 0.
We might as well write the generic filtering function:
End of explanation
def pairs(liste: ListeChainee) -> ListeChainee:
def predicate(x):
return (x % 2) == 0
# aussi : predicate = lambda x: (x % 2) == 0
return filtrer(liste, predicate)
def impairs(liste: ListeChainee) -> ListeChainee:
def predicate(x):
return (x % 2) == 1
return filtrer(liste, predicate)
print(pairs(vide)) # []
print(pairs(l_1)) # []
print(pairs(l_12)) # [2]
print(pairs(l_123)) # [2]
print(pairs(insert(4, insert(6, insert(8, l_123))))) # [4, 6, 8, 2]
print(pairs(insert(5, insert(6, insert(8, l_123))))) # [6, 8, 2]
print(impairs(vide)) # []
print(impairs(l_1)) # [1]
print(impairs(l_12)) # [1]
print(impairs(l_123)) # [3, 1]
print(impairs(insert(4, insert(6, insert(8, l_123))))) # [3, 1]
print(impairs(insert(5, insert(6, insert(8, l_123))))) # [5, 3, 1]
Explanation: And so it is fast:
End of explanation
def myrange(n: int) -> ListeChainee:
if n <= 0:
return ListeChainee(hd=None, tl=None)
elif n == 1:
return ListeChainee(hd=1, tl=None)
# return insert(1, vide)
else:
return ListeChainee(hd=n, tl=myrange(n-1))
print(myrange(1)) # [1]
print(myrange(2)) # [1, 2]
print(myrange(3)) # [1, 2, 3]
print(myrange(4)) # [1, 2, 3, 4]
Explanation: Exercise 8: range
This will have linear time complexity:
End of explanation
def intervale(a: int, b: Optional[int]=None) -> ListeChainee:
if b is None:
a, b = 1, a
n = b - a
if n < 0: # [a..b] = []
return ListeChainee(hd=None, tl=None)
    elif n == 0: # [a..b] = [a]
return ListeChainee(hd=a, tl=None)
else: # [a..b] = a :: [a+1..b]
return ListeChainee(hd=a, tl=intervale(a+1, b))
print(intervale(10)) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(intervale(1, 4)) # [1, 2, 3, 4]
print(intervale(13, 13)) # [13]
print(intervale(13, 10)) # []
Explanation: If we want them in increasing order, we would have to use miroir, which is quadratic.
We might as well directly write a function intervale(a, b) that returns the singly linked list containing a :: (a+1) :: ... :: b:
End of explanation
from typing import Callable
def mymap(fonction: Callable, liste: ListeChainee) -> ListeChainee:
if liste is None or liste.hd is None: # liste de taille 0
return ListeChainee(hd=None, tl=None)
elif liste.tl is None: # liste de taille 1
return ListeChainee(hd=fonction(liste.hd), tl=None)
else: # liste de taille >= 2
return ListeChainee(hd=fonction(liste.hd), tl=mymap(fonction, liste.tl))
print(myrange(10))
print(mymap(lambda x: x, myrange(10)))
def intervale_bis(a: int, b: int) -> ListeChainee:
return miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))
print(intervale_bis(1, 10)) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(intervale_bis(1, 4)) # [1, 2, 3, 4]
print(intervale_bis(13, 13)) # [13]
print(intervale_bis(13, 10)) # []
Explanation: Another approach is to write the function mymap and to say that
python
intervale_bis(a, b) = miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))
End of explanation
def racine(n: int) -> int:
i = 1
for i in range(n + 1):
if i*i > n:
return i - 1
return i
print(racine(1)) # 1
print(racine(5)) # 2
print(racine(102)) # 10
print(racine(120031)) # 346
def intervale2(a: int, b: Optional[int]=None, pas: int=1) -> ListeChainee:
if b is None:
a, b = 1, a
n = b - a
if n < 0: # [a..b::p] = []
return ListeChainee(hd=None, tl=None)
    elif n == 0: # [a..b::p] = [a]
return ListeChainee(hd=a, tl=None)
else: # [a..b::p] = a :: [a+p..b::p]
return ListeChainee(hd=a, tl=intervale2(a + pas, b=b, pas=pas))
print(intervale2(1, 10, 2)) # [1, 3, 5, 7, 9]
print(intervale2(1, 4, 2)) # [1, 3]
print(intervale2(13, 13, 2)) # [13]
print(intervale2(13, 10, 2)) # []
Explanation: Exercise 9: premiers
Several possibilities. A sieve of Eratosthenes works well, or plain filtering.
I will not use arrays, so we are more or less restricted to filtering.
We need the following functions:
compute the integer square root of $n$, very easy with a loop,
compute the integers from 2 to $\lfloor \sqrt{n} \rfloor$,
filter this list of integers to keep those that divide $n$,
and say that $n$ is prime if it has no nontrivial divisor.
End of explanation
def estDivisible(n: int, k: int) -> bool:
return (n % k) == 0
estDivisible(10, 2)
estDivisible(10, 3)
estDivisible(10, 4)
estDivisible(10, 5)
Explanation: A purely functional version is less easy than an imperative version with a boolean reference.
End of explanation
def estPremier(n : int) -> bool:
return taille(filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k))) == 0
Explanation: We are ready to write estPremier:
End of explanation
for n in range(2, 20):
print("Petits diviseurs de", n, " -> ", filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k)))
Explanation: Indeed, it suffices to first build the list of integers from 2 to $\lfloor \sqrt{n} \rfloor$, to filter it and keep those that divide $n$, and to check whether there is no divisor (taille(..) == 0), in which case $n$ is prime, or whether $n$ has at least one divisor, in which case $n$ is not prime.
End of explanation
def premiers(n : int) -> ListeChainee:
return filtrer(intervale2(2, n, 1), estPremier)
premiers(10) # [2, 3, 5, 7]
premiers(100) #ย [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
Explanation: In the example above, the prime numbers are exactly the ones with no small divisor, and the non-primes are the ones with at least one divisor.
End of explanation
test = [3, 1, 8, 4, 5, 6, 1, 2]
Explanation: A few comparison-based sorts
All sorts are written in increasing order.
End of explanation
from typing import TypeVar, List
_a = TypeVar('alpha')
def insere(x : _a, liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return [x]
else:
t, q = liste[0], liste[1:]
if x <= t:
return [x] + liste
else:
return [t] + insere(x, q)
def tri_insertion(liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return []
else:
t, q = liste[0], liste[1:]
return insere(t, tri_insertion(q))
tri_insertion(test)
Explanation: Exercise 10: insertion sort
End of explanation
from typing import TypeVar, List, Callable
_a = TypeVar('alpha')
def insere2(ordre : Callable[[_a, _a], bool], x : _a, liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return [x]
else:
t, q = liste[0], liste[1:]
if ordre(x, t):
return [x] + liste
else:
return [t] + insere2(ordre, x, q)
def tri_insertion2(ordre : Callable[[_a, _a], bool], liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return []
else:
t, q = liste[0], liste[1:]
return insere2(ordre, t, tri_insertion2(ordre, q))
ordre_croissant = lambda x, y: x <= y
tri_insertion2(ordre_croissant, test)
ordre_decroissant = lambda x, y: x >= y
tri_insertion2(ordre_decroissant, test)
Explanation: Time complexity: $\mathcal{O}(n^2)$.
Exercise 11: generic insertion sort
End of explanation
from typing import TypeVar, List, Tuple
_a = TypeVar('alpha')
def selectionne_min(liste : List[_a]) -> Tuple[_a, List[_a]]:
if len(liste) == 0:
raise ValueError("Selectionne_min sur liste vide")
else:
def cherche_min(mini : _a, autres : List[_a], reste : List[_a]) -> Tuple[_a, List[_a]]:
if len(reste) == 0:
return (mini, autres)
else:
t, q = reste[0], reste[1:]
if t < mini:
return cherche_min(t, [mini] + autres, q)
else:
return cherche_min(mini, [t] + autres, q)
t, q = liste[0], liste[1:]
return cherche_min(t, [], q)
test
selectionne_min(test)
Explanation: Exercise 12: selection sort
End of explanation
def tri_selection(liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return []
else:
mini, autres = selectionne_min(liste)
return [mini] + tri_selection(autres)
tri_selection(test)
Explanation: (Note that the list autres comes back reversed.)
End of explanation
from typing import TypeVar, List, Tuple
_a = TypeVar('alpha')
def separe(liste : List[_a]) -> Tuple[List[_a], List[_a]]:
if len(liste) == 0:
return ([], [])
elif len(liste) == 1:
return ([liste[0]], [])
else:
x, y, q = liste[0], liste[1], liste[2:]
a, b = separe(q)
return ([x] + a, [y] + b)
test
separe(test)
def fusion(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:
if (len(liste1), len(liste2)) == (0, 0):
return []
elif len(liste1) == 0:
return liste2
elif len(liste2) == 0:
return liste1
else: # les deux sont non vides
x, a = liste1[0], liste1[1:]
y, b = liste2[0], liste2[1:]
if x <= y:
return [x] + fusion(a, [y] + b)
else:
return [y] + fusion([x] + a, b)
fusion([1, 3, 7], [2, 3, 8])
def tri_fusion(liste : List[_a]) -> List[_a]:
if len(liste) <= 1:
return liste
else:
a, b = separe(liste)
return fusion(tri_fusion(a), tri_fusion(b))
tri_fusion(test)
Explanation: Time complexity: $\mathcal{O}(n^2)$.
Exercises 13, 14, 15: merge sort
End of explanation
%timeit tri_insertion(test)
%timeit tri_selection(test)
%timeit tri_fusion(test)
from sys import setrecursionlimit
setrecursionlimit(100000)
# needed to test the various recursive functions on large lists
import random
def test_random(n : int) -> List[int]:
return [random.randint(-1000, 1000) for _ in range(n)]
for n in [10, 100, 1000]:
print("\nFor n =", n)
for tri in [tri_insertion, tri_selection, tri_fusion]:
print(" and tri = {}".format(tri.__name__))
%timeit tri(test_random(n))
Explanation: Time complexity: $\mathcal{O}(n \log n)$.
Comparisons
End of explanation
from typing import TypeVar, List, Callable
_a, _b = TypeVar('_a'), TypeVar('_b')
def applique(f : Callable[[_a], _b], liste : List[_a]) -> List[_b]:
    # Cheat: delegate to the built-in map.
    # NB: the function returns here, so the alternative approaches below are
    # kept for reference only and are never executed.
    return list(map(f, liste))
    # 1st approach: list comprehension
    return [f(x) for x in liste]
    # 2nd approach: explicit loop with append
    fliste = []
    for x in liste:
        fliste.append(f(x))
    return fliste
    # 3rd approach: preallocate, then fill in place
n = len(liste)
if n == 0: return []
fliste = [liste[0] for _ in range(n)]
for i in range(n):
fliste[i] = f(liste[i])
return fliste
Explanation: This is enough to check that merge sort is much more efficient than the other two.
We also see that insertion sort and selection sort scale worse than linearly,
while merge sort is almost linear (for small $n$, $n \log n$ is close to linear).
Lists: higher-order functions
I do not correct again the questions that were already covered in lab 1 (TP1).
Exercise 16: applique
End of explanation
def premiers_carres_parfaits(n : int) -> List[int]:
return applique(lambda x : x * x, list(range(1, n + 1)))
premiers_carres_parfaits(12)
Explanation: Exercise 17
End of explanation
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
def itere(f : Callable[[_a], None], liste : List[_a]) -> None:
for x in liste:
f(x)
Explanation: Exercise 18: itere
End of explanation
print_int = lambda i: print("{}".format(i))
def affiche_liste_entiers(liste : List[int]) -> None:
print("Debut")
itere(print_int, liste)
print("Fin")
affiche_liste_entiers([1, 2, 4, 5, 12011993])
Explanation: Exercise 19
End of explanation
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
# Comme all(map(f, liste))
def qqsoit(f : Callable[[_a], bool], liste : List[_a]) -> bool:
for x in liste:
if not f(x): return False # arret preliminaire
return True
# Comme any(map(f, liste))
def ilexiste(f : Callable[[_a], bool], liste : List[_a]) -> bool:
for x in liste:
if f(x): return True # arret preliminaire
return False
qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit all(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))
%timeit ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit any(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))
Explanation: Exercise 20: qqsoit and ilexiste
End of explanation
def appartient_curry(x : _a) -> Callable[[List[_a]], bool]:
return lambda liste: ilexiste(lambda y: x == y, liste)
def appartient(x : _a, liste : List[_a]) -> bool:
return ilexiste(lambda y: x == y, liste)
def toutes_egales(x : _a, liste : List[_a]) -> bool:
return qqsoit(lambda y: x == y, liste)
appartient_curry(1)([1, 2, 3])
appartient(1, [1, 2, 3])
appartient(5, [1, 2, 3])
toutes_egales(1, [1, 2, 3])
toutes_egales(5, [1, 2, 3])
Explanation: Exercise 21: appartient, version 2
End of explanation
%timeit appartient(random.randint(-10, 10), [random.randint(-1000, 1000) for _ in range(1000)])
%timeit random.randint(-10, 10) in [random.randint(-1000, 1000) for _ in range(1000)]
Explanation: Can our implementation be faster than the built-in test x in liste?
No, but it is just as fast, which is already quite good!
End of explanation
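As an aside, if the data happened to be sorted, a dichotomic search would beat both linear scans. A minimal sketch with the standard bisect module (the helper name appartient_trie is ours, just for illustration; it is not part of the original exercise):
from bisect import bisect_left

def appartient_trie(x, liste_triee):
    # dichotomic search: O(log n) on a sorted list
    i = bisect_left(liste_triee, x)
    return i < len(liste_triee) and liste_triee[i] == x

print(appartient_trie(5, [1, 3, 5, 7, 9]))   # True
print(appartient_trie(4, [1, 3, 5, 7, 9]))   # False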
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
# Like list(filter(f, liste))
def filtre(f : Callable[[_a], bool], liste : List[_a]) -> List[_a]:
# return [x for x in liste if f(x)]
liste2 = []
for x in liste:
if f(x):
liste2.append(x)
return liste2
filtre(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
filtre(lambda x: (x % 2) != 0, [1, 2, 3, 4, 5])
Explanation: Exercise 22: filtre
End of explanation
pairs = lambda liste: filtre(lambda x: (x % 2) == 0, liste)
impairs = lambda liste: filtre(lambda x: (x % 2) != 0, liste)
pairs(list(range(10)))
impairs(list(range(10)))
Explanation: Exercise 23
The premiers case is left for you to work out.
End of explanation
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
# Like functools.reduce(f, liste, acc)
def reduit_rec(f : Callable[[_a, _b], _a], acc : _a, liste : List[_b]) -> _a:
if len(liste) == 0:
return acc
else:
h, q = liste[0], liste[1:]
        return reduit_rec(f, f(acc, h), q)
# Non-recursive version, much more efficient in Python
def reduit(f : Callable[[_a, _b], _a], acc : _a, liste : List[_b]) -> _a:
acc_value = acc
for x in liste:
acc_value = f(acc_value, x)
return acc_value
Explanation: Exercise 24: reduit
End of explanation
from operator import add
somme_rec = lambda liste: reduit_rec(add, 0, liste)
somme = lambda liste: reduit(add, 0, liste)
somme_rec(list(range(10)))
somme(list(range(10)))
sum(list(range(10)))
%timeit somme_rec(list(range(10)))
%timeit somme(list(range(10)))
%timeit sum(list(range(10)))
Explanation: Very handy for computing sums, in particular.
Exercise 25: somme, produit
End of explanation
%timeit somme_rec(list(range(1000)))
%timeit somme(list(range(1000)))
%timeit sum(list(range(1000)))
from operator import mul
produit = lambda liste: reduit(mul, 1, liste)
produit(list(range(1, 6))) # 5! = 120
Explanation: For small lists, the recursive version is as efficient as the imperative one. Nice!
End of explanation
def factorielle(n : int) -> int:
return produit(range(1, n + 1))
for n in range(1, 15):
print("{:>7}! = {:>13}".format(n, factorielle(n)))
Explanation: Bonus :
End of explanation
miroir = lambda liste: reduit(lambda l, x : [x] + l, [], liste)
miroir([2, 3, 5, 7, 11])
Explanation: Exercise 26: miroir, version 2
End of explanation
from typing import Dict, Optional, Tuple
# Impossible de dรฉfinir un type rรฉcursivement, pas comme en Caml
arbre_bin = Dict[str, Optional[Tuple[Dict, Dict]]]
from pprint import pprint
arbre_test = {'Noeud': (
{'Noeud': (
{'Noeud': (
{'Feuille': None},
{'Feuille': None}
)},
{'Feuille': None}
)},
{'Feuille': None}
)}
pprint(arbre_test)
Explanation: Beware: in Python, lists are NOT singly linked lists, so lambda l, x : [x] + l runs in time linear in $|l| = n$, not in $\mathcal{O}(1)$ as fun l x -> x :: l does in Caml/OCaml.
Trees
/!\ The last two parts are much harder in Python than in Caml.
Exercise 27
End of explanation
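Before moving on to trees, a quick illustration of the remark above about [x] + l being linear: one linear-time way to reverse is to use collections.deque, whose appendleft is O(1) (the name miroir_lineaire is ours, for illustration only):
from collections import deque

def miroir_lineaire(liste):
    d = deque()
    for x in liste:
        d.appendleft(x)   # O(1), unlike [x] + l which copies l
    return list(d)

print(miroir_lineaire([2, 3, 5, 7, 11]))   # [11, 7, 5, 3, 2]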
Feuille = {'Feuille': None}
Noeud = lambda x, y : {'Noeud': (x, y)}
arbre_test = Noeud(Noeud(Noeud(Feuille, Feuille), Feuille), Feuille)
pprint(arbre_test)
Explanation: With this nicer syntax we get very close to the Caml/OCaml syntax:
End of explanation
def taille(a : arbre_bin) -> int:
    # Pattern matching ~= if/elif on the keys at depth 1
    # of the dictionary (there is a single key)
if 'Feuille' in a:
return 1
elif 'Noeud' in a:
x, y = a['Noeud']
return 1 + taille(x) + taille(y)
taille(arbre_test) # 7
Explanation: Exercise 28
Count the number of leaves and internal nodes (i.e. the size of the tree).
End of explanation
def hauteur(a : arbre_bin) -> int:
if 'Feuille' in a:
return 0
elif 'Noeud' in a:
x, y = a['Noeud']
return 1 + max(hauteur(x), hauteur(y))
hauteur(arbre_test) # 3
Explanation: Exercise 29
End of explanation
from typing import TypeVar, Union, List
F = TypeVar('F')
N = TypeVar('N')
element_parcours = Union[F, N]
parcours = List[element_parcours]
Explanation: Exercise 30
Bonus. (Write a function that tests whether a tree labelled with integers is a tournament tree.)
Traversals of binary trees
After a few exercises manipulating this dictionary structure, writing the rest is not too hard.
Exercise 31
End of explanation
def parcours_prefixe(a : arbre_bin) -> parcours:
if 'Feuille' in a:
return [F]
elif 'Noeud' in a:
g, d = a['Noeud']
return [N] + parcours_prefixe(g) + parcours_prefixe(d)
parcours_prefixe(arbre_test)
def parcours_postfixe(a : arbre_bin) -> parcours:
if 'Feuille' in a:
return [F]
elif 'Noeud' in a:
g, d = a['Noeud']
return parcours_postfixe(g) + parcours_postfixe(d) + [N]
parcours_postfixe(arbre_test)
def parcours_infixe(a : arbre_bin) -> parcours:
if 'Feuille' in a:
return [F]
elif 'Noeud' in a:
g, d = a['Noeud']
return parcours_infixe(g) + [N] + parcours_infixe(d)
parcours_infixe(arbre_test)
Explanation: Exercise 32: naive traversals (quadratic complexity)
End of explanation
def parcours_prefixe2(a : arbre_bin) -> parcours:
def parcours(vus, b):
if 'Feuille' in b:
vus.insert(0, F)
return vus
elif 'Noeud' in b:
vus.insert(0, N)
g, d = b['Noeud']
return parcours(parcours(vus, g), d)
p = parcours([], a)
return p[::-1]
parcours_prefixe2(arbre_test)
def parcours_postfixe2(a : arbre_bin) -> parcours:
def parcours(vus, b):
if 'Feuille' in b:
vus.insert(0, F)
return vus
elif 'Noeud' in b:
g, d = b['Noeud']
p = parcours(parcours(vus, g), d)
p.insert(0, N)
return p
p = parcours([], a)
return p[::-1]
parcours_postfixe2(arbre_test)
def parcours_infixe2(a : arbre_bin) -> parcours:
def parcours(vus, b):
if 'Feuille' in b:
vus.insert(0, F)
return vus
elif 'Noeud' in b:
g, d = b['Noeud']
p = parcours(vus, g)
p.insert(0, N)
return parcours(p, d)
p = parcours([], a)
return p[::-1]
parcours_infixe2(arbre_test)
Explanation: Why are they quadratic? Because concatenation (@ in OCaml, + in Python) does not run in constant time but in time linear in the size of the longer list.
Exercise 33: linear traversals
We add an auxiliary function and an argument vus, a list that accumulates the elements seen, in traversal order.
End of explanation
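A quick way to see that concatenation cost growing with the list size (a rough illustration, not a rigorous benchmark):
courte = list(range(10))
longue = list(range(100000))
%timeit [0] + courte
%timeit [0] + longue   # noticeably slower: the whole list is copied each time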
from collections import deque
def parcours_largeur(a : arbre_bin) -> parcours:
file = deque()
    # helper with a side effect on the queue
def vasy() -> parcours:
if len(file) == 0:
return []
else:
b = file.pop()
if 'Feuille' in b:
# return [F] + vasy()
v = vasy()
v.insert(0, F)
return v
elif 'Noeud' in b:
g, d = b['Noeud']
file.insert(0, g)
file.insert(0, d)
# return [N] + vasy()
v = vasy()
v.insert(0, N)
return v
file.insert(0, a)
return vasy()
parcours_largeur(arbre_test)
Explanation: Exercise 34: breadth-first and depth-first traversals
To get a FIFO queue we use the collections.deque module.
End of explanation
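A tiny demo of the FIFO discipline used above, enqueuing on the left and dequeuing on the right to match the insert(0, ...) / pop() pattern of parcours_largeur (illustration only):
from collections import deque
demo = deque()
for e in [1, 2, 3]:
    demo.appendleft(e)   # enqueue, O(1)
print(demo.pop())        # 1: first in, first out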
def parcours_profondeur(a : arbre_bin) -> parcours:
pile = []
    # helper with a side effect on the stack
def vasy() -> parcours:
if len(pile) == 0:
return []
else:
b = pile.pop()
if 'Feuille' in b:
# return [F] + vasy()
v = vasy()
v.append(F)
return v
elif 'Noeud' in b:
g, d = b['Noeud']
pile.append(g)
pile.append(d)
# return [N] + vasy()
v = vasy()
v.insert(0, N)
return v
pile.append(a)
return vasy()
parcours_profondeur(arbre_test)
Explanation: Replacing the queue by a stack (a plain list) gives the depth-first traversal, with the same complexity.
End of explanation
test_prefixe = parcours_prefixe2(arbre_test)
test_prefixe
Explanation: Exercise 35, the last one
Reconstruction from the prefix traversal
End of explanation
from typing import Tuple
def reconstruit_prefixe(par : parcours) -> arbre_bin:
def reconstruit(p : parcours) -> Tuple[arbre_bin, parcours]:
if len(p) == 0:
raise ValueError("parcours invalide pour reconstruit_prefixe")
elif p[0] == F:
return (Feuille, p[1:])
elif p[0] == N:
g, q = reconstruit(p[1:])
d, r = reconstruit(q)
return (Noeud(g, d), r)
# call it
a, p = reconstruit(par)
if len(p) == 0:
return a
else:
raise ValueError("parcours invalide pour reconstruit_prefixe")
reconstruit_prefixe([F])
reconstruit_prefixe(test_prefixe)
Explanation: The idea of this solution is the following:
we would like a recursive function to do the work;
the problem is that a prefix traversal either starts
with F, in which case the tree must be a leaf, or has the form N::q
where q is no longer a single prefix traversal but the concatenation of TWO prefix
traversals, so we can no longer simply call the function on q.
We therefore write a helper that takes a list containing several
concatenated traversals and returns the tree corresponding to the first traversal
together with the part that has not been consumed:
End of explanation
reconstruit_prefixe([N, F, F] + test_prefixe) # รฉchoue
Explanation: And this example fails, as expected:
End of explanation
largeur_test = parcours_largeur(arbre_test)
largeur_test
from collections import deque
def reconstruit_largeur(par : parcours) -> arbre_bin:
file = deque()
    # Helper with side effects (on the queue)
def lire_element(e : element_parcours) -> None:
if e == F:
file.append(Feuille)
elif e == N:
d = file.popleft()
g = file.popleft() # attention ร l'ordre !
file.append(Noeud(g, d))
    # Apply this function to every element of the traversal
for e in reversed(par):
lire_element(e)
if len(file) == 1:
return file.popleft()
else:
raise ValueError("parcours invalide pour reconstruit_largeur")
largeur_test
reconstruit_largeur(largeur_test)
arbre_test
Explanation: Reconstruction from the breadth-first traversal
This is not obvious if you have never seen it. The idea is to use a queue
to store the trees that are rebuilt little by little from the leaves. The queue
lets us retrieve the right subtrees whenever we meet a node.
End of explanation
from collections import deque
def reconstruit_prefixe2(par : parcours) -> arbre_bin:
pile = deque()
    # Helper with side effects (on the stack)
def lire_element(e : element_parcours) -> None:
if e == F:
pile.append(Feuille)
elif e == N:
g = pile.pop()
d = pile.pop() # attention ร l'ordre !
pile.append(Noeud(g, d))
    # Apply this function to every element of the traversal
for e in reversed(par):
lire_element(e)
if len(pile) == 1:
return pile.pop()
else:
raise ValueError("parcours invalide pour reconstruit_prefixe2")
prefixe_test = parcours_prefixe2(arbre_test)
prefixe_test
reconstruit_prefixe2(prefixe_test)
arbre_test
Explanation: The same algorithm (well, almost: g and d are swapped),
with a stack instead of a queue, gives another version of the reconstruction from the prefix traversal.
End of explanation
<END_TASK> |
15,870 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example
of building a notebook-friendly object into the output of the data API
Author
Step1: Authorization
In the vanilla notebook, you need to manually set an auth. token. You'll need your own value for this, of course.
Get this from the running narrative, e.g. write a narrative code cell that has
Step2: Find and load an object
Open the workspace (1019) and get a Rhodobacter assembly from it
Step3: Get the contigs for the assembly
This takes a while because the current implementation loads the whole assembly, not just the 300 or so strings with the contig values.
Step4: View the contigs
The Contigs object wraps the list of contigs as a Pandas DataFrame (with the qgrid output enabled), so as you can see the plot() function is immediately available. The list of strings in the raw contig IDs is parsed to a set of columns and values for the DataFrame.
The default display is the nice sortable, scrollable, etc. table from the qgrid package. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import qgrid
qgrid.nbinstall()
from biokbase import data_api
from biokbase.data_api import display
display.nbviewer_mode(True)
Explanation: Example
of building a notebook-friendly object into the output of the data API
Author: Dan Gunter
Initialization
Imports
Set up matplotlib, the qgrid (nice table), and import biokbase
End of explanation
import os
os.environ['KB_AUTH_TOKEN'] = open('/tmp/kb_auth_token.txt').read().strip()
Explanation: Authorization
In the vanilla notebook, you need to manually set an auth. token. You'll need your own value for this, of course.
Get this from the running narrative, e.g. write a narrative code cell that has:
import os; print(os.environ['KB_AUTH_TOKEN'])
End of explanation
b = data_api.browse(1019)
x = b[0].object # Assembly object
Explanation: Find and load an object
Open the workspace (1019) and get a Rhodobacter assembly from it
End of explanation
cid_strings = x.get_contig_ids() # 1 min
cids = display.Contigs(cid_strings)
Explanation: Get the contigs for the assembly
This takes a while because the current implementation loads the whole assembly, not just the 300 or so strings with the contig values.
End of explanation
from biokbase import data_api
from biokbase.data_api import display
list(b)
rg = b[0]
rgo = rg.object
type(rgo)
Explanation: View the contigs
The Contigs object wraps the list of contigs as a Pandas DataFrame (with the qgrid output enabled), so as you can see the plot() function is immediately available. The list of strings in the raw contig IDs is parsed to a set of columns and values for the DataFrame.
The default display is the nice sortable, scrollable, etc. table from the qgrid package.
End of explanation
<END_TASK> |
15,871 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everything is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
All the measurement data is in the d variable. We can print it
Step10: Or check the measurements duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step13: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step14: Weighted mean of $E$ of each burst
Step15: Gaussian fit (no weights)
Step16: Gaussian fit (using burst size as weights)
Step17: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step18: The Maximum likelihood fit for a Gaussian population is the mean
Step19: Computing the weighted mean and weighted standard deviation we get
Step20: Save data to file
Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step22: This is just a trick to format the different variables | <ASSISTANT_TASK:>
Python Code:
ph_sel_name = "None"
data_id = "27d"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:36:54 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
d_orig = d
d = bext.burst_search_and_gate(d, m=10, F=7)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
bandwidth = 0.03
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_fret
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst search and selection
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
# ds_fret.add(E_fitter = E_fitter)
# dplot(ds_fret, hist_fret_kde, weights='size', bins=np.r_[-0.2:1.2:bandwidth], bandwidth=bandwidth);
# plt.axvline(E_pr_fret_kde, ls='--', color='r')
# print(ds_fret.ph_sel, E_pr_fret_kde)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-AND-gate.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation
<END_TASK> |
15,872 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
Step1: Graph traversal is akin to walking along the graph, node by node,
constrained by the edges that connect the nodes.
Graph traversal is particularly useful for understanding
the local structure of certain portions of the graph
and for finding paths that connect two nodes in the network.
In this chapter, we are going to learn how to perform pathfinding in a graph,
specifically by looking for shortest paths via the breadth-first search algorithm.
Breadth-First Search
The BFS algorithm is a staple of computer science curricula,
and for good reason
Step4: Exercise
Step5: Visualizing Paths
One of the objectives of that exercise before was to help you "think on graphs".
Now that you've learned how to do so, you might be wondering,
"How do I visualize that path through the graph?"
Well first off, if you inspect the test_path_exists function above,
you'll notice that NetworkX provides a shortest_path() function
that you can use. Here's what using nx.shortest_path() looks like.
Step6: As you can see, it returns the nodes along the shortest path,
incidentally in the exact order that you would traverse.
One thing to note, though!
If there are multiple shortest paths from one node to another,
NetworkX will only return one of them.
So how do you draw those nodes only?
You can use the G.subgraph(nodes)
to return a new graph that only has nodes in nodes
and only the edges that exist between them.
After that, you can use any plotting library you like.
We will show an example here that uses nxviz's matrix plot.
Let's see it in action
Step7: Voila! Now we have the subgraph (1) extracted and (2) drawn to screen!
In this case, the matrix plot is a suitable visualization for its compactness.
The off-diagonals also show that each node is a neighbor to the next one.
You'll also notice that if you try to modify the graph g, say by adding a node
Step9: In this case, we opted for an Arc plot because we only have one grouping of nodes but have a logical way to order them.
Because the path follows the order, the edges being highlighted automatically look like hops through the graph.
Bottleneck nodes
We're now going to revisit the concept of an "important node",
this time now leveraging what we know about paths.
In the "hubs" chapter, we saw how a node that is "important"
could be so because it is connected to many other nodes.
Paths give us an alternative definition.
If we imagine that we have to pass a message on a graph
from one node to another,
then there may be "bottleneck" nodes
for which if they are removed,
then messages have a harder time flowing through the graph.
One metric that measures this form of importance
is the "betweenness centrality" metric.
On a graph through which a generic "message" is flowing,
a node with a high betweenness centrality
is one that has a high proportion of shortest paths
flowing through it.
In other words, it behaves like a bottleneck.
Betweenness centrality in NetworkX
NetworkX provides a "betweenness centrality" function
that behaves consistently with the "degree centrality" function,
in that it returns a mapping from node to metric
Step10: Exercise
Step11: Think about it...
...does it make sense that degree centrality and betweenness centrality
are not well-correlated?
Can you think of a scenario where a node has a
"high" betweenness centrality
but a "low" degree centrality?
Before peeking at the graph below,
think about your answer for a moment.
Step12: Recap
In this chapter, you learned the following things | <ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo(id="JjpbztqP9_0", width="100%")
Explanation: Introduction
End of explanation
from nams import load_data as cf
G = cf.load_sociopatterns_network()
from nams.solutions.paths import bfs_algorithm
# UNCOMMENT NEXT LINE TO GET THE ANSWER.
# bfs_algorithm()
Explanation: Graph traversal is akin to walking along the graph, node by node,
constrained by the edges that connect the nodes.
Graph traversal is particularly useful for understanding
the local structure of certain portions of the graph
and for finding paths that connect two nodes in the network.
In this chapter, we are going to learn how to perform pathfinding in a graph,
specifically by looking for shortest paths via the breadth-first search algorithm.
Breadth-First Search
The BFS algorithm is a staple of computer science curricula,
and for good reason:
it teaches learners how to "think on" a graph,
putting one in the position of
"the dumb computer" that can't use a visual cortex to
"just know" how to trace a path from one node to another.
As a topic, learning how to do BFS
additionally imparts algorithmic thinking to the learner.
Exercise: Design the algorithm
Try out this exercise to get some practice with algorithmic thinking.
On a piece of paper, conjure up a graph that has 15-20 nodes. Connect them any way you like.
Pick two nodes. Pretend that you're standing on one of the nodes, but you can't see any further beyond one neighbor away.
Work out how you can find a path from the node you're standing on to the other node, given that you can only see nodes that are one neighbor away but have an infinitely good memory.
If you are successful at designing the algorithm, you should get the answer below.
End of explanation
# FILL IN THE BLANKS BELOW
def path_exists(node1, node2, G):
This function checks whether a path exists between two nodes (node1,
node2) in graph G.
visited_nodes = _____
queue = [_____]
while len(queue) > 0:
node = ___________
neighbors = list(_________________)
if _____ in _________:
# print('Path exists between nodes {0} and {1}'.format(node1, node2))
return True
else:
visited_nodes.___(____)
nbrs = [_ for _ in _________ if _ not in _____________]
queue = ____ + _____
# print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
return False
# UNCOMMENT THE FOLLOWING TWO LINES TO SEE THE ANSWER
from nams.solutions.paths import path_exists
# path_exists??
# CHECK YOUR ANSWER AGAINST THE TEST FUNCTION BELOW
from random import sample
import networkx as nx
def test_path_exists(N):
N: The number of times to spot-check.
for i in range(N):
n1, n2 = sample(G.nodes(), 2)
assert path_exists(n1, n2, G) == bool(nx.shortest_path(G, n1, n2))
return True
assert test_path_exists(10)
Explanation: Exercise: Implement the algorithm
Now that you've seen how the algorithm works, try implementing it!
End of explanation
path = nx.shortest_path(G, 7, 400)
path
Explanation: Visualizing Paths
One of the objectives of that exercise before was to help you "think on graphs".
Now that you've learned how to do so, you might be wondering,
"How do I visualize that path through the graph?"
Well first off, if you inspect the test_path_exists function above,
you'll notice that NetworkX provides a shortest_path() function
that you can use. Here's what using nx.shortest_path() looks like.
End of explanation
import nxviz as nv
g = G.subgraph(path)
nv.matrix(g, sort_by="order")
Explanation: As you can see, it returns the nodes along the shortest path,
incidentally in the exact order that you would traverse.
One thing to note, though!
If there are multiple shortest paths from one node to another,
NetworkX will only return one of them.
So how do you draw those nodes only?
You can use the G.subgraph(nodes)
to return a new graph that only has nodes in nodes
and only the edges that exist between them.
After that, you can use any plotting library you like.
We will show an example here that uses nxviz's matrix plot.
Let's see it in action:
End of explanation
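If you need every shortest path rather than just one, NetworkX also provides nx.all_shortest_paths, which returns a generator of node lists (a quick illustration using the same pair of nodes as above):
# Each element is a list of nodes along one shortest path from 7 to 400.
all_paths = list(nx.all_shortest_paths(G, 7, 400))
print(len(all_paths), "shortest path(s) found")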
from nams.solutions.paths import plot_path_with_neighbors
### YOUR SOLUTION BELOW
plot_path_with_neighbors(G, 7, 400)
Explanation: Voila! Now we have the subgraph (1) extracted and (2) drawn to screen!
In this case, the matrix plot is a suitable visualization for its compactness.
The off-diagonals also show that each node is a neighbor to the next one.
You'll also notice that if you try to modify the graph g, say by adding a node:
python
g.add_node(2048)
you will get an error:
```python
NetworkXError Traceback (most recent call last)
<ipython-input-10-ca6aa4c26819> in <module>
----> 1 g.add_node(2048)
~/anaconda/envs/nams/lib/python3.7/site-packages/networkx/classes/function.py in frozen(args, kwargs)
156 def frozen(args, **kwargs):
157 Dummy method for raising errors when trying to modify frozen graphs
--> 158 raise nx.NetworkXError("Frozen graph can't be modified")
159
160
NetworkXError: Frozen graph can't be modified
```
From the perspective of semantics, this makes a ton of sense:
the subgraph g is a perfect subset of the larger graph G,
and should not be allowed to be modified
unless the larger container graph is modified.
Exercise: Draw path with neighbors one degree out
Try out this next exercise:
Extend graph drawing with the neighbors of each of those nodes.
Use any of the nxviz plots (nv.matrix, nv.arc, nv.circos);
try to see which one helps you tell the best story.
End of explanation
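One possible way to collect the path plus its one-hop neighbourhood before plotting it (a sketch; the plot_path_with_neighbors solution imported above may do this differently):
nodes_of_interest = set(path)
for n in path:
    nodes_of_interest.update(G.neighbors(n))   # add each node's direct neighbours
g_nbrs = G.subgraph(nodes_of_interest)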
import pandas as pd
pd.Series(nx.betweenness_centrality(G))
Explanation: In this case, we opted for an Arc plot because we only have one grouping of nodes but have a logical way to order them.
Because the path follows the order, the edges being highlighted automatically look like hops through the graph.
Bottleneck nodes
We're now going to revisit the concept of an "important node",
this time now leveraging what we know about paths.
In the "hubs" chapter, we saw how a node that is "important"
could be so because it is connected to many other nodes.
Paths give us an alternative definition.
If we imagine that we have to pass a message on a graph
from one node to another,
then there may be "bottleneck" nodes
for which if they are removed,
then messages have a harder time flowing through the graph.
One metric that measures this form of importance
is the "betweenness centrality" metric.
On a graph through which a generic "message" is flowing,
a node with a high betweenness centrality
is one that has a high proportion of shortest paths
flowing through it.
In other words, it behaves like a bottleneck.
Betweenness centrality in NetworkX
NetworkX provides a "betweenness centrality" function
that behaves consistently with the "degree centrality" function,
in that it returns a mapping from node to metric:
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
# YOUR ANSWER HERE:
from nams.solutions.paths import plot_degree_betweenness
plot_degree_betweenness(G)
Explanation: Exercise: compare degree and betweenness centrality
Make a scatterplot of degree centrality on the x-axis
and betweenness centrality on the y-axis.
Do they correlate with one another?
End of explanation
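A sketch of one way to build that scatterplot by hand (the imported plot_degree_betweenness solution may differ in its details):
cents = pd.DataFrame({
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
})
cents.plot.scatter(x="degree", y="betweenness", alpha=0.5)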
nx.draw(nx.barbell_graph(5, 1))
Explanation: Think about it...
...does it make sense that degree centrality and betweenness centrality
are not well-correlated?
Can you think of a scenario where a node has a
"high" betweenness centrality
but a "low" degree centrality?
Before peeking at the graph below,
think about your answer for a moment.
End of explanation
from nams.solutions import paths
import inspect
print(inspect.getsource(paths))
Explanation: Recap
In this chapter, you learned the following things:
You figured out how to implement the breadth-first-search algorithm to find shortest paths.
You learned how to extract subgraphs from a larger graph.
You implemented visualizations of subgraphs, which should help you as you communicate with colleagues.
You calculated betweenness centrality metrics for a graph, and visualized how they correlated with degree centrality.
Solutions
Here are the solutions to the exercises above.
End of explanation
<END_TASK> |
15,873 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data Actually
David Robinson posted a great article Analyzing networks of characters in 'Love Actually' on 25th December 2015, which uses R to analyse the connections between the characters in the film Love Actually.
This notebook and the associated python code attempts to reproduce his analysis using tools from the Python ecosystem. The code in this notebook is copied from love_actually.py as of 29th December 2015 (ideally I'd keep them in sync but no promises)
Package setup
To start we need to import some useful packages, shown below. Some of these needed to be installed using the Anaconda distribution or pip if conda failed
Step3: Data import
First we need to define a couple of functions to read the 'Love Actually' script into a list of lines and the cast into a Pandas DataFrame. The data_dir variable gets the location of the directory containing the input files from an environment variable so set this environment variable to the location appropriate for you if you're following along.
Step4: The cell below reproduces the logic in the first cell of the original article. It doesn't feel quite as nice to me as the dplyr syntax but is not too bad.
Step5: Constructing the n_character x n_scene matrix showing how many lines each character has in each scene is quite easy using pandas groupby method to create a hierarchical index, followed by the unstack method to convert the second level of the index into columns.
Step6: Analysis
Now we get to the analysis itself. First we perform a hierarchical clustering of the data using the same data normalization and clustering method as the original article. The leaf order in the dendrogram is returned for use in later steps of the analysis, as it has similar characters close to each other.
Step7: Timeline
Plotting the timeline of character versus scene is very similar to the R version since we make use of the Python ggplot port from yhat. It seems like there are a few differences between this package and the R ggplot2 library.
In particular
Step8: Co-occurrence matrix
Next we construct the co-occurrence matrix showing how often characters share scenes, and visualize using a heatmap and network graph.
Step9: The heatmap below is not as nice as the default R heatmap as it is missing the dendrograms on each axis and also the character names, so could be extended e.g. following Hierarchical Clustering Heatmaps in Python.
It otherwise shows a similar result to the original article in that ordering using the dendrogram leaf order has resulted in a co-occurrence matrix predominantly of block diagonal form.
Step10: The network plot gives similar results to the original article. This could be extended, for example by adding weights to the graph edges. | <ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
from scipy.cluster.hierarchy import dendrogram, linkage
import ggplot as gg
import networkx as nx
%matplotlib inline
Explanation: Data Actually
David Robinson posted a great article Analyzing networks of characters in 'Love Actually' on 25th December 2015, which uses R to analyse the connections between the characters in the film Love Actually.
This notebook and the associated python code attempts to reproduce his analysis using tools from the Python ecosystem. The code in this notebook is copied from love_actually.py as of 29th December 2015 (ideally I'd keep them in sync but no promises)
Package setup
To start we need to import some useful packages, shown below. Some of these needed to be installed using the Anaconda distribution or pip if conda failed:
pip install ggplot
conda install graphviz
conda install networkx
conda install pydot
End of explanation
data_dir = os.path.join(os.getenv('MDA_DATA_DIR', '/home/mattmcd/Work/Data'), 'LoveActually')
def read_script():
Read the Love Actually script from text file into list of lines
The script is first Google hit for 'Love Actually script' as a doc
file. Use catdoc or Libre Office to save to text format.
with open(os.path.join(data_dir, 'love_actually.txt'), 'r') as f:
lines = [line.strip() for line in f]
return lines
def read_actors():
Read the mapping from character to actor using the varianceexplained data file
Used curl -O http://varianceexplained.org/files/love_actually_cast.csv to get a local copy
return pd.read_csv(os.path.join(data_dir, 'love_actually_cast.csv'))
Explanation: Data import
First we need to define a couple of functions to read the 'Love Actually' script into a list of lines and the cast into a Pandas DataFrame. The data_dir variable gets the location of the directory containing the input files from an environment variable so set this environment variable to the location appropriate for you if you're following along.
End of explanation
def parse_script(raw):
df = pd.DataFrame(raw, columns=['raw'])
df = df.query('raw != ""')
    df = df[~df.raw.str.contains("(song)", regex=False)]
lines = (df.
assign(is_scene=lambda d: d.raw.str.contains(" Scene ")).
assign(scene=lambda d: d.is_scene.cumsum()).
query('not is_scene'))
speakers = lines.raw.str.extract('(?P<speaker>[^:]*):(?P<dialogue>.*)')
lines = (pd.concat([lines, speakers], axis=1).
dropna().
assign(line=lambda d: np.cumsum(~d.speaker.isnull())))
lines.drop(['raw', 'is_scene'], axis=1, inplace=True)
return lines
def read_all():
lines = parse_script(read_script())
cast = read_actors()
combined = lines.merge(cast).sort('line').assign(
character=lambda d: d.speaker + ' (' + d.actor + ')').reindex()
# Decode bytes to unicode
combined['character'] = map(lambda s: s.decode('utf-8'), combined['character'])
return combined
# Read in script and cast into a dataframe
lines = read_all()
# Print the first few rows
lines.head(10)
Explanation: The cell below reproduces the logic in the first cell of the original article. It doesn't feel quite as nice to me as the dplyr syntax but is not too bad.
End of explanation
def get_scene_speaker_matrix(lines):
by_speaker_scene = lines.groupby(['character', 'scene'])['line'].count()
speaker_scene_matrix = by_speaker_scene.unstack().fillna(0)
return by_speaker_scene, speaker_scene_matrix
# Group by speaker and scene and construct the speaker-scene matrix
by_speaker_scene, speaker_scene_matrix = get_scene_speaker_matrix(lines)
Explanation: Constructing the n_character x n_scene matrix showing how many lines each character has in each scene is quite easy using pandas groupby method to create a hierarchical index, followed by the unstack method to convert the second level of the index into columns.
End of explanation
def plot_dendrogram(mat, normalize=True):
# Cluster and plot dendrogram. Return order after clustering.
if normalize:
# Normalize by number of lines
mat = mat.div(mat.sum(axis=1), axis=0)
Z = linkage(mat, method='complete', metric='cityblock')
labels = mat.index
f = plt.figure()
ax = f.add_subplot(111)
R = dendrogram(Z, leaf_rotation=90, leaf_font_size=8,
labels=labels, ax=ax, color_threshold=-1)
f.tight_layout()
ordering = R['ivl']
return ordering
# Hierarchical cluster and return order of leaves
ordering = plot_dendrogram(speaker_scene_matrix)
print(ordering)
Explanation: Analysis
Now we get to the analysis itself. First we perform a hierarchical clustering of the data using the same data normalization and clustering method as the original article. The leaf order in the dendrogram is returned for use in later steps of the analysis, as it has similar characters close to each other.
End of explanation
def get_scenes_with_multiple_characters(by_speaker_scene):
# Filter speaker scene dataframe to remove scenes with only one speaker
# n_scene x 1 Series with index 'scene'
filt = by_speaker_scene.count('scene') > 1
# n_scene x n_character Index
scene_index = by_speaker_scene.index.get_level_values('scene')
# n_scene x n_character boolean vector
ind = filt[scene_index].values
return by_speaker_scene[ind]
def order_scenes(scenes, ordering=None):
# Order scenes by e.g. leaf order after hierarchical clustering
scenes = scenes.reset_index()
scenes['scene'] = scenes['scene'].astype('category')
scenes['character'] = scenes['character'].astype('category', categories=ordering)
scenes['character_code'] = scenes['character'].cat.codes
return scenes
# Order the scenes by cluster leaves order
scenes = order_scenes(get_scenes_with_multiple_characters(by_speaker_scene), ordering)
def plot_timeline(scenes):
# Plot character vs scene timelime
# NB: due to limitations in Python ggplot we need to plot with scene on y-axis
# in order to label x-ticks by character.
# scale_x_continuous and scale_y_continuous behave slightly differently.
print (gg.ggplot(gg.aes(y='scene', x='character_code'), data=scenes) +
gg.geom_point() + gg.labs(x='Character', y='Scene') +
gg.scale_x_continuous(
labels=scenes['character'].cat.categories.values.tolist(),
breaks=range(len(scenes['character'].cat.categories))) +
gg.theme(axis_text_x=gg.element_text(angle=30, hjust=1, size=10)))
# Plot a timeline of characters vs scene
plot_timeline(scenes);
Explanation: Timeline
Plotting the timeline of character versus scene is very similar to the R version since we make use of the Python ggplot port from yhat. It seems like there are a few differences between this package and the R ggplot2 library.
In particular:
ggplot does not seem to handle categorical variables so it was necessary to introduce an extra character_code dimension
it didn't seem possible to change the y-axis tick labels to character names so here the axis directions are swapped
the aes (aesthetic) does not seem to support 'group' so the geom_path joining characters in the same scene has been left out
(note that these points may just be limitations of my understanding of ggplot in Python)
End of explanation
def get_cooccurrence_matrix(speaker_scene_matrix, ordering=None):
# Co-occurrence matrix for the characters, ignoring last scene where all are present
scene_ind = speaker_scene_matrix.astype(bool).sum() < 10
if ordering:
mat = speaker_scene_matrix.loc[ordering, scene_ind]
else:
mat = speaker_scene_matrix.loc[:, scene_ind]
return mat.dot(mat.T)
cooccur_mat = get_cooccurrence_matrix(speaker_scene_matrix, ordering)
Explanation: Co-occurrence matrix
Next we construct the co-occurrence matrix showing how often characters share scenes, and visualize using a heatmap and network graph.
End of explanation
def plot_heatmap(cooccur_mat):
# Plot co-ccurrence matrix as heatmap
plt.figure()
plt.pcolor(cooccur_mat)
# Plot heatmap of co-occurrence matrix
plot_heatmap(cooccur_mat)
Explanation: The heatmap below is not as nice as the default R heatmap as it is missing the dendrograms on each axis and also the character names, so could be extended e.g. following Hierarchical Clustering Heatmaps in Python.
It otherwise shows a similar result to the original article in that ordering using the dendrogram leaf order has resulted in a co-occurrence matrix predominantly of block diagonal form.
End of explanation
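A sketch of one way to add the character names to the heatmap axes (figure and font sizes are guesses and may need tuning):
fig, ax = plt.subplots(figsize=(8, 8))
ax.pcolor(cooccur_mat)
ax.set_xticks(np.arange(len(cooccur_mat.columns)) + 0.5)
ax.set_xticklabels(cooccur_mat.columns, rotation=90, fontsize=7)
ax.set_yticks(np.arange(len(cooccur_mat.index)) + 0.5)
ax.set_yticklabels(cooccur_mat.index, fontsize=7)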
def plot_network(cooccur_mat):
# Plot co-occurence matrix as network diagram
G = nx.Graph(cooccur_mat.values)
pos = nx.graphviz_layout(G) # NB: needs pydot installed
plt.figure()
nx.draw_networkx_nodes(G, pos, node_size=700, node_color='c')
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(
G, pos,
labels={i: s for (i, s) in enumerate(cooccur_mat.index.values)},
font_size=10)
plt.axis('off')
plt.show()
# Plot network graph of co-occurrence matrix
plot_network(cooccur_mat)
Explanation: The network plot gives similar results to the original article. This could be extended, for example by adding weights to the graph edges.
End of explanation
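A sketch of how the edge weights could be shown, scaling line widths by the co-occurrence counts (the 4.0 scale factor is arbitrary; a graph built from a numpy array stores the matrix entries in the 'weight' edge attribute):
G2 = nx.Graph(cooccur_mat.values)
pos2 = nx.spring_layout(G2)
weights = [d['weight'] for _, _, d in G2.edges(data=True)]
max_w = max(weights)
plt.figure()
nx.draw_networkx_nodes(G2, pos2, node_size=700, node_color='c')
nx.draw_networkx_edges(G2, pos2, width=[4.0 * w / max_w for w in weights])
plt.axis('off')
plt.show()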
<END_TASK> |
15,874 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Python
Pero la estrella indiscutible de Jupyter es Python, que se estรก convirtiendo poco a poco en el lenguaje de facto para el anรกlisis de datos, decantando lentamente R, SAS, Matlab...
Lo importante no son los lenguajes, sino el enorme ecosistema de herramientas que han aparecido gracias a la apertura y facilidad de Python.
Jake VanderPlas mantiene una colecciรณn de notebooks muy interesantes
Step1: Poner NFQ en un mapa
ยฟCรณmo de difรญcil puede ser pintar un mapa interactivo de Madrid y poner la geolocalizaciรณn de las oficinas de NFQ en รฉl?
Step2: Leer un archivo Excel
Hay otra herramienta que tambiรฉn mezcla datos, lรณgica y repesentacรณn | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn import linear_model
from matplotlib import pylab as plt
plt.style.use('bmh')
%matplotlib notebook
wine = pd.read_csv('data/winequality-white.csv',delimiter=';')
wine.describe()
fig = plt.figure(2)
ax = [fig.add_subplot(3,4,i) for i in range(1,12)]
models = [linear_model.LinearRegression() for i in range(11)]
for column, model in zip(wine.columns, models):
    model.fit(wine['quality'].values.reshape(-1,1),
              wine[column].values.reshape(-1,1))
for qual, group in wine.groupby('quality'):
for column, axis in zip(group.columns, ax):
axis.plot(qual, group[column].mean(), 'ob')
axis.set_title(column + ' (avg)', fontsize=10)
qual = np.arange(3,10)
for model, axi in zip(models, ax):
axi.plot(qual, model.coef_[0][0]*qual + model.intercept_,
'r--', linewidth=4, label='Regression')
axi.legend(fontsize=6)
fig.tight_layout()
Explanation: Python
But the undisputed star of Jupyter is Python, which is gradually becoming the de facto language for data analysis, slowly displacing R, SAS, Matlab...
What matters is not the languages themselves, but the huge ecosystem of tools that has grown around Python thanks to its openness and ease of use.
Jake VanderPlas maintains a very interesting collection of notebooks:
For learning Python
For learning Machine Learning
Can we predict whether a white wine will be good?
There is a database of chemical properties of wines together with their tasting scores, which comes from this paper.
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
It covers more than 4000 Portuguese vinho verde wines. The goal is to understand the data and provide a simple visual guide to the properties a good white wine should have.
End of explanation
import folium
madrid = folium.Map(location=[40.429857, -3.685812], tiles="Stamen toner",
zoom_start=15)
nfqsolutions = folium.Marker([40.429857, -3.685812], popup='NFQ Solutions')
madrid.add_children(nfqsolutions)
madrid.save('madrid.html')
madrid
Explanation: Poner NFQ en un mapa
ยฟCรณmo de difรญcil puede ser pintar un mapa interactivo de Madrid y poner la geolocalizaciรณn de las oficinas de NFQ en รฉl?
End of explanation
with pd.ExcelFile('./data/winequality-white.xls') as xls:
wines = pd.read_excel(xls, 'Sheet1')
wines.describe()
Explanation: Leer un archivo Excel
Hay otra herramienta que tambiรฉn mezcla datos, lรณgica y repesentacรณn: una hoja de cรกlculo. Pero Excel (la hoja de cรกlculo mรกs frecuente) no escala con los datos.
Mala eficiencia de memoria.
Uso de datos no tabulables.
Simplicidad y apariencia vs eficiencia y rendimiento.
El formato xlsx es la estrategia de vendor lock-in mรกs exitosa de la historia.
La definicรณn prรกctica de Big Data es algo demasiado grande para ser seleccionado con el ratรณn en una hoja Excel.
End of explanation
<END_TASK> |
15,875 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data frames 2. 可視化 (Visualization)
```
ASSIGNMENT METADATA
assignment_id
Step1: Visualization with plotly_express
lang
Step2: lang
Step3: lang
Step4: lang
Step5: lang
Step6: lang
Step7: lang
Step9: 演習課題. データフレームの可視化（Visualize the dataset）
```
EXERCISE METADATA
exercise_id
Step10: lang | <ASSISTANT_TASK:>
Python Code:
# CSVファイルからデータを読み込みましょう。 Read the data from CSV file.
df = pd.read_csv('data/16-July-2019-Tokyo-hourly.csv')
print("่กๆฐใฏ %d ใงใ" % len(df))
print(df.dtypes)
df.head()
Explanation: Data frames 2. 可視化 (Visualization)
```
ASSIGNMENT METADATA
assignment_id: "DataFrame2"
```
lang:en
In this unit, we will get acquainted with how to visualize tidy data frames
using the library plotly_express.
Let's start with reading the data from a CSV file and peeking inside.
lang:ja ใใฎ่ฌ็พฉใงใฏใใใผใฟใใฌใผใ ใฎๅฏ่ฆๅใ็ดนไปใใพใใplotly_expressใจใใใฉใคใใฉใชใไฝฟ็จใใพใใ
ใพใใฏใใผใฟใCSVใใกใคใซใใ่ชญใฟ่พผใฟใพใใใใ
End of explanation
px.line(df, y='Temperature_degC')
Explanation: Visualization with plotly_express
lang:en
Today we will use the visualization library called Plotly Express, which is designed to make visualization very easy and concise, provided the data frame is tidy. To get the access to Plotly Express, you need to import it as follows:
python
import plotly_express as px
If you get import errors at this point, you may need to install the library. The details of installation may differ depending on the platform you are using. Here is the example command line with Pip:
pip install plotly_express
With anaconda, enter the following installation command in the conda shell:
conda install -c plotly plotly_express
All plotting commands with plotly_express follow the same pattern:
px.plot_method(data_frame, variable1='column1', variable2='column2', ...)
Here, plot_method can be on of the plotting methods supported by the library (see https://www.plotly.express/plotly_express/ for comprehensive documentation). The data_frame is the name of the data frame that provides the data to visualize, variable1, variable2 are the names of the plot variables that the plotting method is capable of representing. Here are some examples: x, y, color, size, symbol, facet_col, facet_row. The column1, column2 are the names of the columns in the given data frame that map to the visualization variables.
Depending on the plot chosen, some variables may be omitted, and the visualization library will automatically fill
in something reasonable. Let's have a look at the simplest example:
python
px.line(df, y='Temperature_degC')
Here we are calling the plot method px.line which is a line plot. The line plot uses two variables: x and y to put the points on to the coordinate grid and connects them in order. It is possible to give only one of the x and y, then the other will be automatically filled with an integer from 0 to the number of rows in the data frame. In our example, we specify y to map to the Temperature_degC column of our data frame, and do not specify x mapping.
plotly_expressใ็จใใฆๅฏ่ฆๅ (Visualization with plotly_express)
lang:ja
plotly_expressใจใใใฉใคใใฉใชใฏใใผใฟใฎ็ฐกๅใชๅฏ่ฆๅใฎใใใซ้็บใใใพใใใๅฏ่ฆๅใ็ฐกๅใซใใใซใฏใใใผใฟใใฌใผใ ใฏใญใฌใคใช็ถๆ
ใซไฟใใชใใใฐใชใใพใใใใใฎใฉใคใใฉใชใไฝฟใใซใฏใไปฅไธใฎใคใณใใผใใๅฟ
่ฆใงใใ
python
import plotly_express as px
ใใฎใณใใณใใใจใฉใผใ่ฟใใฆใใใใใฉใคใใฉใชใใคใณในใใผใซๅฟ
่ฆใใใใพใใใคใณในใใผใซๆนๆณใฏใณใณใใฅใผใฟใผใทในใใ ใซใใฃใฆ็ฐใชใใพใใฎใงใ
README.mdใซๅ็
งใใใใTAใซใๅฐใญใใ ใใใใๅ่ใพใงใใกใใฏPipใ็จใใใคใณในใใผใซๆนๆณใงใใ
pip install plotly_express
Anacondaใไฝฟใๅ ดๅ, ไปฅไธใฎใณใใณใใCondaใทใงใซใซๅ
ฅๅใใฆใใ ใใใ
conda install -c plotly plotly_express
ๅฏ่ฆๅใฎๅฝไปคใฏๅ
จใฆๅใๅฝขใงใใ
px.ๅฏ่ฆๅๆนๆณ(ใใผใฟใใฌใผใ , ๅคๆฐ1='ๅ1', ๅคๆฐ2='ๅ2', ...)
ๅฏ่ฆๅๆนๆณใฏใฉใคใใฉใชใฎ่ชฌๆๆธใใๅ็
งใใ ใใ ๏ผhttps://www.plotly.express/plotly_express/)ใ
ใใผใฟใใฌใผใ ใฏใใผใฟใใฌใผใ ใๆๅฎใใพใใๅคๆฐ1, ๅคๆฐ2ใฏๅฏ่ฆๅใฎๅคๆฐใใใฆใใใพใใๅฏ่ฆๅๆนๆณใซใใฃใฆ็ฐใชใใพใใใไปฅไธใฎ
ๅฏ่ฆๅๅคๆฐใฏใใไฝฟใใใพใ๏ผใx, y, color, size, symbol, facet_col, facet_row.ใ
ๅ1, ๅ2ใฏใใผใฟใใฌใผใ ใซๅ
ฅใฃใฆใใๅใฎๅ็
งใงใๅฏ่ฆๅๅคๆฐใจใใผใฟใ้ข้ฃใฅใใพใใ
ๅฏ่ฆๅๆนๆณใซใใฃใฆๅคๆฐใฏๆๅฎใใชใใฆใใใๅ ดๅใใใใพใใใใฎๅ ดๅใฏๅฏ่ฆๅๅคๆฐใฏ่ชๅใง็ๆใใใพใใ็ฐกๅใชไพใ่ฆใพใใใใ
python
px.line(df, y='Temperature_degC')
px.lineใฏ็ทใฎๅฏ่ฆๅใๆๅฎใใพใใ็ทใฎๅฏ่ฆๅใซใฏ๏ผใคใฎๅฏ่ฆๅๅคๆฐใไฝฟใใพใ๏ผ xใจyใ็็ฅใใใฐใ0ใใN-1ใฎๆดๆฐใฎๅบ้ใซ่ชๅ็ใซใชใใพใ๏ผNใฏใใผใฟใใฌใผใ ใฎ่กๆฐใงใ๏ผใ
ไปฅไธใฎไพใงใฏใyใฏๆฐๆธฉ(Temperature_degC)ใซๆๅฎใใฆใxใฏ่ชๅใง็ๆใใใพใใ
End of explanation
px.line(df, x='Time_Hour', y='Temperature_degC')
Explanation: lang:en You can see that specifying x explicitly to map to the time column Time_Hour does not change the plot much. The only difference is that the automatic range is varying from 0 to 23, while Time_Hour varies from 1 to 24.
lang:ja xใฎๅฏ่ฆๅๅคๆฐใๆ็ขบใซๆๅฎใใใใจใๅฏ่ฝใงใใใไปฅไธใฎไพใงใฏใxใซๆ้(Time_Hour)ใๆๅฎใใใฐใใฐใฉใใฏใปใจใใฉๅคใใใพใใใ
้ใใฏใใฃใไธใคใงใใTime_Hour(ๆ้)ใฎๅบ้ใฏ[1,24]ใงใใใๅ
ใฎ่ชๅ็ใซ็ๆใใใๅบ้ใฏ[0,23]ใงใใใ
End of explanation
df.dtypes
px.line(df, y='Pressure_hPa')
Explanation: lang:en Similarly, one can plot any other variable that is present in the data frame. To remind yourself what columns
you have in the data frame, use df.dtypes.
lang:ja ใใผใฟใใฌใผใ ใซๅ
ฅใฃใฆใใๅคๆฐใฏๅ
จใฆๅฏ่ฆๅใงใใพใใใใผใฟใใฌใผใ ใซๅ
ฅใฃใฆใใๅคๆฐใไธ่ฆงใ่ฆใใใใซdf.dtypesใใๅ่ใใ ใใใ
End of explanation
px.scatter(df, x='Temperature_degC', y='Pressure_hPa', color='Time_Hour')
Explanation: lang:en You can enable the plot to show additional dimensions by providing additional mappings. The mapping color makes the points to use color range to represent an additional variable. Here is a different kind of plot, with x and y showing different variables, and using the color to represent time of the day.
This example uses plotting method px.scatter which is a scatter plot, where the coordinates of points to plot are given with plot variable mappings x and y.
lang:ja xとyの他にも可視化変数があります。colorを指定すると、色を使ってもう一つの変数を表現できます。以下は散布図の例で、xとyはそれぞれ別のデータ変数を可視化し、色は時刻を表します。散布図はpx.scatterで指定します。
End of explanation
px.scatter(df, x='Time_Hour', y='Temperature_degC', size='Precipitation_mm')
Explanation: lang:en Another useful additional dimension is size. Let's use it to visualize how the amount of rain is related to the temperature.
lang:jaใใไธใคใฎไพฟๅฉใชๅฏ่ฆๅๅคๆฐใใใใพใ๏ผsize(ๅคงใใ)ใไฝฟใฃใฆใ้จใฎ้ใๅฏ่ฆๅใใฆใๆธฉๅบฆใๆฐๅงใจใฎ้ขไฟใใฟใฆใฟใพใใใใ
End of explanation
px.scatter(df, x='Time_Hour', y='Temperature_degC', size='Precipitation_mm', color='Precipitation_mm')
# MASTER ONLY
df.melt(id_vars=['Time_Hour'], value_vars=['Temperature_degC', 'Pressure_hPa']).iloc[[0,1,-2,-1]]
# MASTER ONLY: This example is too confusing.
# This example is demonstrating how Grammar of Graphics makes it hard to create confusing plots
# when starting from a tidy dataset. E.g it is hard to plot different variables together.
# In this example it takes some effort even to try plotting temperature and pressure together,
# and even then it does not work well because of different value range. Grammar of Graphics
# assumes that all plots shown together have one variable, and as a consequence of that chooses
# a single axis scale, which does not work here for both variables, because their ranges
# are very different and cannot be plotted with the same axis scale.
px.line(df.melt(id_vars=['Time_Hour'], value_vars=['Temperature_degC', 'Pressure_hPa']), x='Time_Hour', y='value', facet_col='variable')
Explanation: lang:en It is possible to use the same column of the data frame for multiple plot variable mapping to make
the plot show information in a redundant way, but please take care not to overuse it!
lang:jaๅใใใผใฟๅคๆฐใ่คๆฐใฎๅฏ่ฆๅๅคๆฐใซ็จใใฆใใใใงใใใใใใใฐใใฐใฉใใ่ชญใฟใใใใชใใใจใใใใพใใ
่ฒใชใฉใไฝฟใใใใชใใใใซๆฐใใคใใพใใใใ
End of explanation
px.scatter_matrix(df)
Explanation: lang:en Finally, we introduce one special plotting method that does not take almost any inputs, and automatically plots scatter plots of all pairs of variables present in the data frame. This makes it easy to visually see if there are any pairs of the variables that are highly correlated, which looks like regular line-like pattern in the scatter plot. Note, that all plots on the diagonal looks like lines. This is not surprising, because every variable is perfectly correlated to itself!
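As a small aside to the English text above: if the full matrix is too crowded, px.scatter_matrix also accepts an explicit subset of columns. A minimal sketch, using only column names already present in this notebook:
python
px.scatter_matrix(df, dimensions=['Temperature_degC', 'Pressure_hPa', 'Precipitation_mm'], color='Time_Hour')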
lang:jaๆๅพใซใชใใพใใใใปใผ่ชๅ็ใชๅฏ่ฆๅๆนๆณใ็ดนไปใใพใใใใใใใฐใใผใฟใซๅ
ฅใฃใฆใใๅคๆฐใไบใคใใคๅฉ็จใใฆใๆฃๅธๅณใๆใใพใใใใใฎใฐใฉใใ่ฆใชใใๅคๆฐๅๅฃซใฏใฉใฎใใใซ้ขไฟใใใฎใไธ็ฎใง็ขบ่ชใงใใพใใๅฏพ่ง็ทใฎใฐใฉใใฏๅ
จใฆ็ทใซ่ฆใใพใใใใใใฏใชใใใจใใใจใใฉใฎๅคๆฐใงใใฃใฆใใ่ชๅ่ช่บซใซๅฏพใใฆใฏๅฎๅ
จใซ็ธ้ขใใใใใงใใ
End of explanation
%%solution
# BEGIN PROMPT
df15 = pd.read_csv(...)
px.___(...)
# END PROMPT
# BEGIN SOLUTION
df15 = pd.read_csv('data/15-July-2019-Tokyo-hourly.csv')
px.bar(df15, x='Time_Hour', y='SunshineDuration_h')
# END SOLUTION
Explanation: ไบ็ฟ่ชฒ้ก. ใใผใฟใใฌใผใ ใฎๅฏ่ฆๅ ๏ผVisualize the dataset๏ผ
```
EXERCISE METADATA
exercise_id: "VisualizeDataset"
```
lang:en
Load the weather data set from data/15-July-2019-Tokyo-hourly.csv and
visualize the amount of sunshine per hour and find out from the plot which hour had most sunshine.
lang:ja
ๅคฉๆฐใซใคใใฆใฎใใผใฟใdata/15-July-2019-Tokyo-hourly.csvใใ่ชญใฟ่พผใใงใ
ๆฅๅใฎ้๏ผSunshineDuration_h๏ผใๅฏ่ฆๅใใฆใใฉใฎๆ้ๅธฏใงๆฅๅใไธ็ชๅคใใฃใใใ่ฆใคใใฆใใ ใใใ
End of explanation
%%inlinetest FigureTest
try:
df15
assert len(df15) == 24, "Did you load the right data set? Expected to see 24 rows, but got %d" % len(df15)
except NameError:
assert False, "Your code does not define df15"
# Check the submission syntactically.
import ast
# This import will be uncommented when executed on autograder server.
#import submission_source
try:
a = ast.parse(submission_source.source)
assert len(a.body) > 0, "Is your code submission empty?"
e = None
for x in a.body:
if x.__class__ == ast.Expr:
e = x
break
assert e is not None, "Your code does not have any function call in it?"
assert e.value.__class__ == ast.Call, "Do you have a function call in your cell? The code may be okay, but I am just not sure"
assert e.value.func.__class__ == ast.Attribute, "I do not recognize your function call. The code may be okay, but I am just not sure"
assert e.value.func.attr in set(['line', 'bar', 'scatter']), "Expected to see a px.line() or px.bar() or px.scatter plot, but got %s. This may be okay, but I am just not sure" % e.value.func.attr
except AssertionError as e:
raise e
except SyntaxError as e:
assert False, "Your code does not compile: %s" % e
except IndentationError as e:
assert False, "Something is wrong with the test: %s" % e
except Exception as e:
assert False, "Something is wrong with the test: %s" % e
# TODO(salikh): Implement a library for easy checking of syntactic conditiions.
#assert ae.TopExprIsCall(a), "Your "
# MASTER ONLY
# This visualization should work as well.
px.line(df15, x='Time_Hour', y='SunshineDuration_h')
%%submission
px.line(df15, y='SunshineDuration_h')
Explanation: lang:en The expected answer is 13 and 14, which both had 0.1h of sunshine. Please check if your plot clearly shows that.
lang:ja ็ญใใฏ13ๆใจ14ๆใงใใไธกๆนใจใๆฅๅใฎๆ้ใฏ0.1ใงใใใๅฏ่ฆๅใซใใฃใฆใใใฏใฏใฃใใ่ฆใใใใ็ขบ่ชใใ ใใใ
End of explanation
<END_TASK> |
15,876 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Text generation with an RNN
Learning Objectives
Learn how to generate text using a RNN
Create training examples and targets for text generation
Build a RNN model for sequence generation using Keras Subclassing
Create a text generator and evaluate the output
This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Below is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q"
Step1: Download the Shakespeare dataset
Change the following line to run this code on your own data.
Step2: Read the data
First, we'll download the file and then decode.
Step3: Let's take a look at the first 250 characters in text
Step4: Let's check to see how many unique characters are in our corpus/document.
Step5: Process the text
Vectorize the text
Before training, you need to convert the strings to a numerical representation.
TODO 1
Using tf.keras.layers.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.
Step6: Now create the tf.keras.layers.StringLookup layer
Step7: It converts from tokens to character IDs
Step8: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True).
Note
Step9: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters
Step10: You can use tf.strings.reduce_join to join the characters back into strings.
Step11: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the outputโthe following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
TODO 2
First use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
Step12: The batch method lets you easily convert these individual characters to sequences of the desired size.
Step13: It's easier to see what this is doing if you join the tokens back into strings
Step14: For training you'll need a dataset of (input, label) pairs. Where input and
label are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep
Step15: Create training batches
You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.
Step16: Build The Model
This section defines the model as a keras.Model subclass (For details see Making new Layers and Models via subclassing).
TODO 3 Build a model with the following layers
tf.keras.layers.Embedding
Step17: The class below does the following
Step18: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character.
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output
Step19: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length
Step20: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note
Step21: This gives us, at each timestep, a prediction of the next character index
Step22: Decode these to see the text predicted by this untrained model
Step23: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
TODO 4 Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because your model returns logits, you need to set the from_logits flag.
Step24: A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized
Step25: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.
Step26: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training
Step27: Execute the training
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
Step28: TODO 5 Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.
Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction
Step29: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
Step30: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
If you want the model to generate text faster the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
Step31: Export the generator
This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.
Step32: Advanced
Step33: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use keras' Model.compile and Model.fit methods.
Step34: Or if you need more control, you can write your own complete custom training loop | <ASSISTANT_TASK:>
Python Code:
import os
import time
import numpy as np
import tensorflow as tf
Explanation: Text generation with an RNN
Learning Objectives
Learn how to generate text using a RNN
Create training examples and targets for text generation
Build a RNN model for sequence generation using Keras Subclassing
Create a text generator and evaluate the output
This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Below is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but here are some things to consider:
The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
The structure of the output resembles a playโblocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
Setup
Import TensorFlow and other libraries
End of explanation
path_to_file = tf.keras.utils.get_file(
"shakespeare.txt",
"https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt",
)
Explanation: Download the Shakespeare dataset
Change the following line to run this code on your own data.
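For instance, a minimal sketch of pointing the loader at a local corpus instead (the path below is a placeholder, not part of this tutorial):
# path_to_file = "./my_corpus.txt"  # any UTF-8 plain-text file
# text = open(path_to_file, "rb").read().decode(encoding="utf-8")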
End of explanation
text = open(path_to_file, "rb").read().decode(encoding="utf-8")
print(f"Length of text: {len(text)} characters")
Explanation: Read the data
First, we'll download the file and then decode.
End of explanation
print(text[:250])
Explanation: Let's take a look at the first 250 characters in text
End of explanation
vocab = sorted(set(text))
print(f"{len(vocab)} unique characters")
Explanation: Let's check to see how many unique characters are in our corpus/document.
End of explanation
example_texts = ["abcdefg", "xyz"]
# TODO 1
chars = tf.strings.unicode_split(example_texts, input_encoding="UTF-8")  # split text into character tokens
Explanation: Process the text
Vectorize the text
Before training, you need to convert the strings to a numerical representation.
TODO 1
Using tf.keras.layers.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.
End of explanation
ids_from_chars = tf.keras.layers.StringLookup(
vocabulary=list(vocab), mask_token=None
)
Explanation: Now create the tf.keras.layers.StringLookup layer:
End of explanation
ids = ids_from_chars(chars)
ids
Explanation: It converts from tokens to character IDs:
End of explanation
chars_from_ids = tf.keras.layers.StringLookup(
vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None
)
Explanation: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True).
Note: Here instead of passing the original vocabulary generated with sorted(set(text)) use the get_vocabulary() method of the tf.keras.layers.StringLookup layer so that the [UNK] tokens is set the same way.
End of explanation
chars = chars_from_ids(ids)
chars
Explanation: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters:
End of explanation
tf.strings.reduce_join(chars, axis=-1).numpy()
def text_from_ids(ids):
return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
Explanation: You can use tf.strings.reduce_join to join the characters back into strings.
End of explanation
# TODO 2
all_ids = ids_from_chars(tf.strings.unicode_split(text, "UTF-8"))  # encode the whole corpus as character IDs
all_ids
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
print(chars_from_ids(ids).numpy().decode("utf-8"))
seq_length = 100
examples_per_epoch = len(text) // (seq_length + 1)
Explanation: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the outputโthe following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
TODO 2
First use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
End of explanation
sequences = ids_dataset.batch(seq_length + 1, drop_remainder=True)
for seq in sequences.take(1):
print(chars_from_ids(seq))
Explanation: The batch method lets you easily convert these individual characters to sequences of the desired size.
End of explanation
for seq in sequences.take(5):
print(text_from_ids(seq).numpy())
Explanation: It's easier to see what this is doing if you join the tokens back into strings:
End of explanation
def split_input_target(sequence):
input_text = sequence[:-1]
target_text = sequence[1:]
return input_text, target_text
split_input_target(list("Tensorflow"))
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
print("Input :", text_from_ids(input_example).numpy())
print("Target:", text_from_ids(target_example).numpy())
Explanation: For training you'll need a dataset of (input, label) pairs. Where input and
label are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep:
End of explanation
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = (
dataset.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE)
)
dataset
Explanation: Create training batches
You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.
End of explanation
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
Explanation: Build The Model
This section defines the model as a keras.Model subclass (For details see Making new Layers and Models via subclassing).
TODO 3 Build a model with the following layers
tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map each character-ID to a vector with embedding_dim dimensions;
tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use an LSTM layer here.)
tf.keras.layers.Dense: The output layer, with vocab_size outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model.
End of explanation
class MyModel(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, rnn_units):
super().__init__(self)
# TODO - Create an embedding layer
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
# TODO - Create a GRU layer
        self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True)
# TODO - Finally connect it with a dense layer
        self.dense = tf.keras.layers.Dense(vocab_size)
def call(self, inputs, states=None, return_state=False, training=False):
x = self.embedding(inputs, training=training)
# since we are training a text generation model,
# we use the previous state, in training. If there is no state,
# then we initialize the state
if states is None:
states = self.gru.get_initial_state(x)
x, states = self.gru(x, initial_state=states, training=training)
x = self.dense(x, training=training)
if return_state:
return x, states
else:
return x
model = MyModel(
# Be sure the vocabulary size matches the `StringLookup` layers.
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
)
Explanation: The class below does the following:
- We derive a class from tf.keras.Model
- The constructor is used to define the layers of the model
- We define the pass forward using the layers defined in the constructor
End of explanation
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(
example_batch_predictions.shape,
"# (batch_size, sequence_length, vocab_size)",
)
Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character.
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
End of explanation
model.summary()
Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length:
End of explanation
sampled_indices = tf.random.categorical(
example_batch_predictions[0], num_samples=1
)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()
Explanation: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
Try it for the first example in the batch:
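A small side-by-side sketch of the two options, reusing example_batch_predictions from above (greedy argmax is deterministic and tends to loop; categorical sampling is what the next cell does):
greedy_ids = tf.argmax(example_batch_predictions[0], axis=-1)
sampled_ids = tf.random.categorical(example_batch_predictions[0], num_samples=1)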
End of explanation
sampled_indices
Explanation: This gives us, at each timestep, a prediction of the next character index:
End of explanation
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
Explanation: Decode these to see the text predicted by this untrained model:
End of explanation
# TODO - add a loss function here
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
example_batch_mean_loss = loss(target_example_batch, example_batch_predictions)
print(
"Prediction shape: ",
example_batch_predictions.shape,
" # (batch_size, sequence_length, vocab_size)",
)
print("Mean loss: ", example_batch_mean_loss)
Explanation: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
TODO 4 Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because your model returns logits, you need to set the from_logits flag.
End of explanation
tf.exp(example_batch_mean_loss).numpy()
Explanation: A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized:
End of explanation
model.compile(optimizer="adam", loss=loss)
Explanation: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.
End of explanation
# Directory where the checkpoints will be saved
checkpoint_dir = "./training_checkpoints"
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix, save_weights_only=True
)
Explanation: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:
End of explanation
EPOCHS = 10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
Explanation: Execute the training
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
End of explanation
class OneStep(tf.keras.Model):
def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
super().__init__()
self.temperature = temperature
self.model = model
self.chars_from_ids = chars_from_ids
self.ids_from_chars = ids_from_chars
# Create a mask to prevent "[UNK]" from being generated.
skip_ids = self.ids_from_chars(["[UNK]"])[:, None]
sparse_mask = tf.SparseTensor(
# Put a -inf at each bad index.
values=[-float("inf")] * len(skip_ids),
indices=skip_ids,
# Match the shape to the vocabulary
dense_shape=[len(ids_from_chars.get_vocabulary())],
)
self.prediction_mask = tf.sparse.to_dense(sparse_mask)
#TODO 5 - Fill in the code below to generate text
@tf.function
def generate_one_step(self, inputs, states=None):
# Convert strings to token IDs.
input_chars = tf.strings.unicode_split(inputs, "UTF-8")
input_ids = self.ids_from_chars(input_chars).to_tensor()
# Run the model.
# predicted_logits.shape is [batch, char, next_char_logits]
        predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True)
# Only use the last prediction.
predicted_logits = predicted_logits[:, -1, :]
predicted_logits = predicted_logits / self.temperature
# Apply the prediction mask: prevent "[UNK]" from being generated.
        predicted_logits = predicted_logits + self.prediction_mask
# Sample the output logits to generate token IDs.
        predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)
        predicted_ids = tf.squeeze(predicted_ids, axis=-1)
# Convert from token ids to characters
predicted_chars = self.chars_from_ids(predicted_ids)
# Return the characters and model state.
return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)
Explanation: TODO 5 Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.
Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction:
End of explanation
start = time.time()
states = None
next_char = tf.constant(["ROMEO:"])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(
next_char, states=states
)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode("utf-8"), "\n\n" + "_" * 80)
print("\nRun time:", end - start)
Explanation: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
End of explanation
start = time.time()
states = None
next_char = tf.constant(["ROMEO:", "ROMEO:", "ROMEO:", "ROMEO:", "ROMEO:"])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(
next_char, states=states
)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result, "\n\n" + "_" * 80)
print("\nRun time:", end - start)
Explanation: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
If you want the model to generate text faster the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
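A related knob worth trying first is temperature. A quick sketch, reusing the OneStep class defined above (the values are only illustrative): a lower temperature makes the output more conservative, a higher one more surprising.
cautious_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)
adventurous_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=1.5)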
End of explanation
tf.saved_model.save(one_step_model, "one_step")
one_step_reloaded = tf.saved_model.load("one_step")
states = None
next_char = tf.constant(["ROMEO:"])
result = [next_char]
for n in range(100):
next_char, states = one_step_reloaded.generate_one_step(
next_char, states=states
)
result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode("utf-8"))
Explanation: Export the generator
This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.
End of explanation
class CustomTraining(MyModel):
@tf.function
def train_step(self, inputs):
inputs, labels = inputs
with tf.GradientTape() as tape:
predictions = self(inputs, training=True)
loss = self.loss(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
self.optimizer.apply_gradients(zip(grads, model.trainable_variables))
return {"loss": loss}
Explanation: Advanced: Customized Training
The above training procedure is simple, but does not give you much control.
It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.
So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.
The most important part of a custom training loop is the train step function.
Use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.
The basic procedure is:
Execute the model and calculate the loss under a tf.GradientTape.
Calculate the updates and apply them to the model using the optimizer.
End of explanation
model = CustomTraining(
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
)
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
Explanation: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use keras' Model.compile and Model.fit methods.
End of explanation
EPOCHS = 10
mean = tf.metrics.Mean()
for epoch in range(EPOCHS):
start = time.time()
mean.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
logs = model.train_step([inp, target])
mean.update_state(logs["loss"])
if batch_n % 50 == 0:
template = (
f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}"
)
print(template)
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print()
print(f"Epoch {epoch+1} Loss: {mean.result().numpy():.4f}")
print(f"Time taken for 1 epoch {time.time() - start:.2f} sec")
print("_" * 80)
model.save_weights(checkpoint_prefix.format(epoch=epoch))
Explanation: Or if you need more control, you can write your own complete custom training loop:
End of explanation
<END_TASK> |
15,877 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Group sizes
Get all unique size labels from the database.
Step1: Sizes per distributor
Step2: Print joint table with first 60 sizes.
Step3: Calculate entropy
Step4: Create new collection from data only with '_id', 'source' and 'size' fields
Step5: Sizes list per distributor
Step8: Tagging according to size
Since the number of sizes is low (1117 uniq sizes), the task could be resolved using tivial brute force, i.e. map sizes using mapping table.
During the observation of data i noticed that sizes are defined for adult, youth, toddler and baby
Step9: Let's calculate data entropy for results | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import entropy
from tabulate import tabulate
from pymongo import MongoClient
import matplotlib.pyplot as plt
plt.style.use('seaborn')
plt.rcParams["figure.figsize"] = (20,8)
db = MongoClient()['stores']
TOTAL_NUMBER_OF_PRODUCTS = db.data.count()
results = db.data.aggregate(
[
{
"$group": {
"_id": "$size",
"count": {"$sum": 1},
}
},
{
"$sort": {
"count": -1,
}
}
]
)
ALL_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print('Number of uniq. sizes: {}'.format(len(ALL_SIZES)))
Explanation: Group sizes
Get all unique size labels from the database.
End of explanation
DISTRIBUTORS = list(db.data.distinct("source"))
results = db.data.aggregate(
[
{
"$group": {
"_id": "$source",
"sizes": {"$addToSet": "$size"},
}
},
{
"$project": {
"_id": 1,
"count": {"$size": "$sizes"}
}
},
{
"$sort": {
"count": -1,
}
}
]
)
SIZES_PER_DISTRIBUTOR = [
(str(x['_id']), x['count'])
for x in list(results)
]
print(tabulate(SIZES_PER_DISTRIBUTOR,
headers=['Distributor', 'Number of uniq. Sizes'],
tablefmt="simple"))
df_values_by_key = pd.DataFrame(SIZES_PER_DISTRIBUTOR,
index=[x[0] for x in SIZES_PER_DISTRIBUTOR],
columns=['Distributor', 'Sizes'])
df_values_by_key.iloc[::-1].plot.barh()
Explanation: Sizes per distributor
End of explanation
import operator
from functools import reduce  # reduce is not a builtin in Python 3
all_sizes_table = []
number_of_sizes = 180
for sizes in zip(ALL_SIZES[0:number_of_sizes:3],
ALL_SIZES[1:number_of_sizes:3],
ALL_SIZES[2:number_of_sizes:3]):
all_sizes_table.append(list(reduce(operator.add, sizes)))
print(
tabulate(
all_sizes_table[:60],
headers=3*['Size', 'Number of Products'],
tablefmt="simple"))
Explanation: Print joint table with first 60 sizes.
End of explanation
# calculate probability vector
p = [x[1] for x in ALL_SIZES]
size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS
# calculate entropy
first_entropy = entropy(size_prob_vector)
print("Data entropy:", first_entropy)
Explanation: Calculate entropy
End of explanation
# create new collection
db.data.aggregate(
[
{
"$project": {
"_id": 1,
"source": 1,
"size": 1,
},
},
{
"$out": "size_mapping"
}
]
)
print('Db "size_mapping" created')
# create indexes
db.size_mapping.create_index([("size", 1)])
db.size_mapping.create_index([("source", 1)])
print('Indexes "size", "source" for "size_mapping" created.')
print(list(db.size_mapping.find().limit(5)))
Explanation: Create new collection from data only with '_id', 'source' and 'size' fields
End of explanation
SIZES_LIST_PER_DISTRIBUTOR = db.size_mapping.aggregate(
[
{
"$group": {
"_id": "$source",
"sizes": {"$addToSet": "$size"},
},
},
{
"$project": {
"_id": 1,
"sizes": 1,
"number_of_sizes": {"$size": "$sizes"},
}
},
{
"$sort": {
"number_of_sizes": -1
}
}
]
)
TABLE_SIZES_LIST_PER_DISTRIBUTOR = [
(str(x['_id']), x['sizes'], x['number_of_sizes'])
for x in SIZES_LIST_PER_DISTRIBUTOR
]
for distr, sizes, num in TABLE_SIZES_LIST_PER_DISTRIBUTOR:
print('Sizes for: "{}"'.format(distr))
print(", ".join(sizes))
print(80*"-")
Explanation: Sizes list per distributor
End of explanation
SIZES_MAPPING = {
'ALL': [],
'NO SIZE': ['PLAIN', 'CONE', 'BLANKET'],
'ONE': ['OS', 'ONE SIZE', '1 SIZ', 'O/S'],
'XS': ['XXS', 'XX-SMALL', '2XS'],
'S': ['SMALL', 'S/M'],
'M': ['MEDIUM', 'S/M', 'M/L'],
'L': ['LARGE', 'L/XL', 'M/L'],
'XL': ['EXTRA', 'XLT', 'XT', 'L/XL'],
'2XL': ['2X', 'XXL', '2XT', '2XLL', '2X/', '2XLT'],
'3XL': ['3X', '3XT', '3XLL', '3XLT'],
'4XL': ['4X', '4XT', '4XLT'],
'5XL': ['5X', '5XT', '5XLT'],
'6XL': ['6X'],
}
def build_matching_table(matching_rules):
Build matching table from matching rules
:param matching_rules: matching rules used to build matching table
:type matching_rules: dict
:return: matching table `{'S/M: ['S', 'M'], '2X': ['2XL'], ...}`
:rtype: dict
matching_table = {}
# transform matching rules to the "shortcut": "group_key" table
for key, values in matching_rules.items():
if not values: # skip undefined rules i.e. "[]"
continue
# add rule for key
if key not in matching_table:
# NOTE: set('ab') would be {'a', 'b'}
# so it's impossible to matching_table[key] = set(key)
matching_table[key] = set()
matching_table[key].add(key)
for value in values:
if value not in matching_table:
matching_table[value] = set()
matching_table[value].add(key)
else:
matching_table[value].add(key)
return matching_table
MATCHING_TABLE = build_matching_table(SIZES_MAPPING)
print(tabulate(MATCHING_TABLE.items(), headers=['From', 'To'], tablefmt="simple"))
# process data into the new table
# def get_groups(mtable, size):
# Get size groups for the given `size` according to matching table
# :param size: size (case insensetive)
# :type size: str
# :return: list of strings i.e. size groups or ``['UNDEFINED']``
# if not found
# :rtype: list or ['UNDEFINED']
#
# return list(mtable.get(size, default=size))
# for k, v in MATCHING_TABLE.items():
# res = db.size_mapping.update_many(
# {"size": k},
# {"$set": {"size": get_groups(MATCHING_TABLE, k)}})
# print(res.raw_result)
Explanation: Tagging according to size
Since the number of unique sizes is low (1117), the task can be solved by trivial brute force, i.e. mapping sizes through a lookup table.
While exploring the data I noticed that sizes are defined for adult, youth, toddler and baby:
Adult: 'S', 'M', 'L' etc.
Youth: 'YS', 'YL' etc.
Kid: '4', '6' etc.
Toddler: '2T', '3T' etc.
Baby: '3M', '6M', 'NB' (new born) etc.
kid, toddler, baby sizes chart
youth sizes chart
I.e. we could tag products according to their size.
python
TAG_FROM_SIZE = {
'adult': ['XS', 'S', 'M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL'],
'youth': ['YXS', 'YSM', 'YMD', 'YLG', 'YXL', '8H', '10H', '12H', '14H', '16H', '18H', '20H'],
'kid': []
}
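One possible way to apply such a mapping, sketched here only as an illustration (TAG_FROM_SIZE is the hypothetical dict above and is not defined anywhere in this notebook's code), would be to persist a tag field per document:
for tag, sizes in TAG_FROM_SIZE.items():
    db.size_mapping.update_many({"size": {"$in": sizes}}, {"$set": {"tag": tag}})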
End of explanation
results = db.size_mapping.aggregate(
[
{
"$group": {
"_id": "$size",
"count": {"$sum": 1},
}
},
{
"$sort": {
"count": -1,
}
}
]
)
NEW_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print(
"\n" +
tabulate(NEW_SIZES[:20], headers=['Size', 'Number of Products'], tablefmt="orgtbl") +
"\n"
)
# calculate probability vector
p = []
for _, count in NEW_SIZES:
p.append(count)
size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS
# calculate entropy
first_entropy = entropy(size_prob_vector)
print("Data entropy: ", first_entropy)
from functools import reduce
total_matched_products = (sum([x[1] for x in NEW_SIZES[:11]]))
percent_from_db_total = round((total_matched_products / TOTAL_NUMBER_OF_PRODUCTS) * 100, 2)
print("Matched: {} Percent from total: {}".format(total_matched_products, percent_from_db_total))
Explanation: Let's calculate data entropy for results
End of explanation
<END_TASK> |
15,878 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I have two tensors of dimension 11 * 1. I want to check how many of the 11 elements are equal in the two tensors. I think I should be able to do this in few lines like Numpy but couldn't find a similar function. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
A, B = load_data()
cnt_equal = int((A == B).sum())
<END_TASK> |
15,879 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Scikit Learn et donnรฉes
Scikit-learn propose quelques ensembles de donnรฉes, notamment iris et digits (classification) et le boston house prices dataset (regression).
Exercice
Step1: Mรชme avant scikit-learn
Les libraries numpy et scipy ont plein de bonnes choses dedans. Explorez-les un peu.
sklearn.datasets
Un dataset ressemble ร un dict. Explorez les membres suivants (e.g., iris.DESCR)
Step2: Nous pouvons regarder l'image.
Qu'est-ce qui est l'effet de cmap?
Step3: ร savoir (mais pour un autre jour)
Step4: Le classifieur le plus simple imagineable s'appelle kNN. Avec scikit-learn, c'est facile. (Visualisaton ร suivre.)
Le nombre de dimensions peut monter trรจs vite, ce qui pose des problรจmes pour kNN.
* Il y a combien de point sur une lattice espacรฉs de $1/n$ en dimension 1, 2, 3, ..., n ?
* Qu'est-ce qui est la distance entre 0 et 1 (les vecteurs des coins opposรฉs) dans $[0,1]^d$?
Step5: La rรฉgression logistique est un algorithm important de classification dans l'apprentissage. Le voilร sur les mรชmes donnรฉes
Step6: Exercice
Step7: Ce qu'on vient de faire s'appelle "cross validation" (validation croisรฉe). On peut le faire plus facilement
Step8: En validation croisรฉe, plus c'est grand, plus c'est bon.
ร voir รฉgalement
Step9: Grid search
Note pour plus tard
Step10: Pipelining
Grace ร l'interface uniforme des classes estimateurs, nous avons la possibilitรฉ de crรฉer des pipelines
Step13: Eigenfaces
Une deuxiรจme exemple d'un pipeline. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
boston = datasets.load_boston()
Explanation: Scikit Learn and data
Scikit-learn ships with a few datasets, notably iris and digits (classification) and the boston house prices dataset (regression).
Exercise: find some others...
End of explanation
from sklearn import svm
model = svm.SVC(gamma=0.002, C=100.)
print(model.gamma)
model.set_params(gamma=.001)
print(model.gamma)
model.fit(digits.data[:-1], digits.target[:-1])
model.predict([digits.data[-1]])
Explanation: Even before scikit-learn
The numpy and scipy libraries are full of good things. Explore them a little.
sklearn.datasets
A dataset looks like a dict. Explore the following members (e.g., iris.DESCR):
* data
* target
* feature_names
* DESCR
Then use what you learned in the pandas module to explore the data itself.
<img src="petal_sepal_label.png">
In English (to match the function names): "We fit an estimator to the data to predict the classes to which unseen samples belong". So an estimator implements the methods fit(X, y) and predict(T).
The constructor of an estimator accepts the model's parameters.
It is also possible to change the parameters after creation.
End of explanation
import pylab as pl
%matplotlib inline
pl.imshow(digits.images[-1], cmap=pl.cm.gray_r)
Explanation: We can look at the image.
What is the effect of cmap?
End of explanation
iris = datasets.load_iris()
iris_X = iris.data
iris_y = iris.target
Explanation: Good to know (but for another day):
* pickle works
* sklearn.externals.joblib is sometimes more efficient
An estimator takes a dataset, typically a 2-dimensional array (np.ndarray, cf. .shape).
Let's look at the irises:
* How many classes of iris are there?
* How many vectors are there in the training data?
* How many dimensions are there?
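One way to check, as a small sketch using the arrays defined in the cell above:
print(iris_X.shape)        # 150 vectors of dimension 4
print(np.unique(iris_y))   # 3 classes
print(iris.feature_names)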
End of explanation
# Split iris data in train and test data
# A random permutation, to split the data randomly
np.random.seed(0)
indices = np.random.permutation(len(iris_X))
iris_X_train = iris_X[indices[:-10]]
iris_y_train = iris_y[indices[:-10]]
iris_X_test = iris_X[indices[-10:]]
iris_y_test = iris_y[indices[-10:]]
# Create and fit a nearest-neighbor classifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(iris_X_train, iris_y_train)
print(knn.predict(iris_X_test))
print(iris_y_test)
knn.score(iris_X_test, iris_y_test)
Explanation: The simplest classifier imaginable is called kNN. With scikit-learn, it is easy. (Visualization to follow.)
The number of dimensions can grow very quickly, which causes problems for kNN.
* How many points are there on a lattice with spacing $1/n$ in dimension 1, 2, 3, ..., n?
* What is the distance between 0 and 1 (the vectors at opposite corners) in $[0,1]^d$?
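A quick numeric sketch of both questions (illustrative values only): the lattice has $(n+1)^d$ points and the corner-to-corner distance is $\sqrt{d}$.
n = 10
for d in (1, 2, 3, 10):
    print(d, (n + 1) ** d, np.sqrt(d))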
End of explanation
from sklearn import linear_model
logistic = linear_model.LogisticRegression(C=1e5)
logistic.fit(iris_X_train, iris_y_train)
print(logistic.predict(iris_X_test))
print(iris_y_test)
logistic.score(iris_X_test, iris_y_test)
Explanation: Logistic regression is an important classification algorithm in machine learning. Here it is on the same data:
End of explanation
scores = []
for k in range(10):
indices = np.random.permutation(len(iris_X))
iris_X_train = iris_X[indices[:-10]]
iris_y_train = iris_y[indices[:-10]]
iris_X_test = iris_X[indices[-10:]]
iris_y_test = iris_y[indices[-10:]]
knn = KNeighborsClassifier()
knn.fit(iris_X_train, iris_y_train)
scores.append(knn.score(iris_X_test, iris_y_test))
print(scores)
X_digits = digits.data
y_digits = digits.target
svc = svm.SVC(C=1, kernel='linear')
N = 10
X_folds = np.array_split(X_digits, N)
y_folds = np.array_split(y_digits, N)
scores = list()
for k in range(N):
# We use 'list' to copy, in order to 'pop' later on
X_train = list(X_folds)
X_test = X_train.pop(k)
X_train = np.concatenate(X_train)
y_train = list(y_folds)
y_test = y_train.pop(k)
y_train = np.concatenate(y_train)
scores.append(svc.fit(X_train, y_train).score(X_test, y_test))
scores
Explanation: Exercise:
* Why are the scores the same in the two previous examples?
* What is the score for?
End of explanation
from sklearn import model_selection
k_fold = model_selection.KFold(n_splits=3)
for train_indices, test_indices in k_fold.split(np.arange(6)):
print('Train: %s | test: %s' % (train_indices, test_indices))
kfold = model_selection.KFold(n_splits=N)
[svc.fit(X_digits[train], y_digits[train]).score(
X_digits[test], y_digits[test])
 for train, test in kfold.split(X_digits)]
model_selection.cross_val_score(
svc, X_digits, y_digits, cv=kfold, n_jobs=-1)
Explanation: What we just did is called "cross validation". It can be done more easily:
End of explanation
import numpy as np
from sklearn import model_selection, datasets, svm
digits = datasets.load_digits()
X = digits.data
y = digits.target
svc = svm.SVC(kernel='linear')
C_s = np.logspace(-10, 0, 10)
scores = list()
scores_std = list()
for C in C_s:
svc.C = C
    this_scores = model_selection.cross_val_score(svc, X, y, n_jobs=1)
scores.append(np.mean(this_scores))
scores_std.append(np.std(this_scores))
# Do the plotting
import matplotlib.pyplot as plt
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.semilogx(C_s, scores)
plt.semilogx(C_s, np.array(scores) + np.array(scores_std), 'b--')
plt.semilogx(C_s, np.array(scores) - np.array(scores_std), 'b--')
locs, labels = plt.yticks()
plt.yticks(locs, list(map(lambda x: "%g" % x, locs)))
plt.ylabel('CV score')
plt.xlabel('Parameter C')
plt.ylim(0, 1.1)
plt.show()
Explanation: In cross validation, the bigger the score, the better.
See also:
* KFold
* StratifiedKFold
* LeaveOneOut
* LeaveOneLabelOut
Estimating a parameter
We would like to find which value of the parameter $C$ gives good results for an SVM with a linear kernel. For now we are talking neither about SVMs nor about kernels: they are simply classifiers. The important point here is that there is a parameter $C$ that affects the quality of our results.
It is $C$ that controls the separator: hard margin (large $C$) or soft margin (small $C>0$).
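A quick illustrative check of that behaviour, only as a sketch reusing X_digits and y_digits from the earlier cells: it counts the support vectors kept by a soft versus a hard margin.
for C in (0.01, 100.0):
    print(C, svm.SVC(kernel='linear', C=C).fit(X_digits[:500], y_digits[:500]).n_support_.sum())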
End of explanation
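For reference, here is a minimal sketch of how the cross-validation iterators listed above are created; it assumes the modern sklearn.model_selection API (LeaveOneLabelOut from older releases is called LeaveOneGroupOut there), and the toy arrays are made up for illustration.
from sklearn import model_selection
X_small = np.arange(12).reshape(6, 2)
y_small = np.array([0, 0, 0, 1, 1, 1])
for cv in (model_selection.KFold(n_splits=3),
           model_selection.StratifiedKFold(n_splits=3),   # keeps class proportions per fold
           model_selection.LeaveOneOut()):                 # one test sample per split
    print(cv.__class__.__name__,
          [len(test) for _, test in cv.split(X_small, y_small)])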
from sklearn.model_selection import GridSearchCV, cross_val_score
Cs = np.logspace(-6, -1, 10)
clf = GridSearchCV(estimator=svc, param_grid=dict(C=Cs), n_jobs=-1)
clf.fit(X_digits[:1000], y_digits[:1000])
print(clf.best_score_)
print(clf.best_estimator_.C)
# Prediction performance on test set is not as good as on train set
print(clf.score(X_digits[1000:], y_digits[1000:]))
Explanation: Grid search
Note for later: see the cv argument. GridSearchCV does 3-fold cross validation for regression and stratified 3-fold for a classifier.
End of explanation
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
logistic = linear_model.LogisticRegression()
pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
###############################################################################
# Plot the PCA spectrum
pca.fit(X_digits)
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.axes([.2, .2, .7, .7])
plt.plot(pca.explained_variance_, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')
###############################################################################
# Prediction
n_components = [20, 40, 64]
Cs = np.logspace(-4, 4, 3)
#Parameters of pipelines can be set using '__' separated parameter names:
estimator = GridSearchCV(pipe,
dict(pca__n_components=n_components,
logistic__C=Cs))
estimator.fit(X_digits, y_digits)
plt.axvline(estimator.best_estimator_.named_steps['pca'].n_components,
linestyle=':', label='n_components chosen')
plt.legend(prop=dict(size=12))
Explanation: Pipelining
Thanks to the uniform interface of the estimator classes, we can create pipelines: compositions of several estimators.
Digits
A first example of a pipeline.
End of explanation
===================================================
Faces recognition example using eigenfaces and SVMs
===================================================
The dataset used in this example is a preprocessed excerpt of the
"Labeled Faces in the Wild", aka LFW_:
http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB)
.. _LFW: http://vis-www.cs.umass.edu/lfw/
Expected results for the top 5 most represented people in the dataset:
================== ============ ======= ========== =======
precision recall f1-score support
================== ============ ======= ========== =======
Ariel Sharon 0.67 0.92 0.77 13
Colin Powell 0.75 0.78 0.76 60
Donald Rumsfeld 0.78 0.67 0.72 27
George W Bush 0.86 0.86 0.86 146
Gerhard Schroeder 0.76 0.76 0.76 25
Hugo Chavez 0.67 0.67 0.67 15
Tony Blair 0.81 0.69 0.75 36
avg / total 0.80 0.80 0.80 322
================== ============ ======= ========== =======
from __future__ import print_function
from time import time
import logging
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import fetch_lfw_people
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import PCA
from sklearn.svm import SVC
print(__doc__)
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
###############################################################################
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
# for machine learning we use the data directly (as relative pixel
# positions info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]
# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
print("Total dataset size:")
print("n_samples: %d" % n_samples)
print("n_features: %d" % n_features)
print("n_classes: %d" % n_classes)
###############################################################################
# Split into a training set and a test set using a stratified k fold
# split into a training and testing set
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
###############################################################################
# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled
# dataset): unsupervised feature extraction / dimensionality reduction
n_components = 150
print("Extracting the top %d eigenfaces from %d faces"
% (n_components, X_train.shape[0]))
t0 = time()
pca = PCA(n_components=n_components, svd_solver='randomized',
whiten=True).fit(X_train)
print("done in %0.3fs" % (time() - t0))
eigenfaces = pca.components_.reshape((n_components, h, w))
print("Projecting the input data on the eigenfaces orthonormal basis")
t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print("done in %0.3fs" % (time() - t0))
###############################################################################
# Train a SVM classification model
print("Fitting the classifier to the training set")
t0 = time()
param_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5],
'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1], }
clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
clf = clf.fit(X_train_pca, y_train)
print("done in %0.3fs" % (time() - t0))
print("Best estimator found by grid search:")
print(clf.best_estimator_)
###############################################################################
# Quantitative evaluation of the model quality on the test set
print("Predicting people's names on the test set")
t0 = time()
y_pred = clf.predict(X_test_pca)
print("done in %0.3fs" % (time() - t0))
print(classification_report(y_test, y_pred, target_names=target_names))
print(confusion_matrix(y_test, y_pred, labels=range(n_classes)))
###############################################################################
# Qualitative evaluation of the predictions using matplotlib
def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
Helper function to plot a gallery of portraits
plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
# plot the result of the prediction on a portion of the test set
def title(y_pred, y_test, target_names, i):
pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]
true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]
return 'predicted: %s\ntrue: %s' % (pred_name, true_name)
prediction_titles = [title(y_pred, y_test, target_names, i)
for i in range(y_pred.shape[0])]
plot_gallery(X_test, prediction_titles, h, w)
# plot the gallery of the most significative eigenfaces
eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)
plt.show()
Explanation: Eigenfaces
A second example of a pipeline.
End of explanation
<END_TASK> |
15,880 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Grade
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS | <ASSISTANT_TASK:>
Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
Explanation: Grade: 5 / 6 -- I think for the last one, you accidentally overwrote the answer you had for Question 5 with the answer you had for the "bonus" number 6 question. Also, run the cells and THEN save your notebook -- that way, I can see the output you got. Great job on the assignment overall -- your SQL queries look good.
Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
conn.rollback()
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
cursor = conn.cursor()
statement = "SELECT movie_title, release_date FROM uitem WHERE scifi = 1 AND horror = 1 ORDER BY release_date DESC;
"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
cursor = conn.cursor()
statement = "SELECT count(*) from uitem WHERE musical = 1 or childrens =1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
cursor = conn.cursor()
statement = "SELECT uuser.occupation, count(*) FROM uuser GROUP BY occupation HAVING count(*) > 50;
"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
cursor = conn.cursor()
statement = "SELECT distinct(uitem.movie_title) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE udata.rating = 5 AND uitem.documentary = 1 AND uitem.release_date < '1992-01-01';"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(udata.rating) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE uitem.horror = 1 GROUP BY uitem.movie_title HAVING count(udata.rating) >= 10 ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
cursor = conn.cursor()
statement = ""
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation
<END_TASK> |
15,881 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
CSAL4243
Step1: <br>
Task 1
Step4: Linear Regression with Gradient Descent code
Step5: Run Gradient Descent on training data
Step6: Plot trained line on data
Step7: <br>
Task 2
Step8: Upload .csv file to Kaggle.com
Create an account at https
Step9: <br>
Task 3
Step10: <br>
Task 4 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import linear_model
import matplotlib.pyplot as plt
import matplotlib as mpl
# read house_train.csv data in pandas dataframe df_train using pandas read_csv function
df_train = pd.read_csv('datasets/house_price/train.csv', encoding='utf-8')
# check data by printing first few rows
df_train.head()
# check columns in dataset
df_train.columns
# check correlation matrix, darker means more correlation
corrmat = df_train.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
# SalePrice correlation matrix with top k variables
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
#scatterplot with some important variables
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
sns.set()
sns.pairplot(df_train[cols], size = 2.5)
plt.show();
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Assignment 1: Linear Regression
In this assignment you are going to learn how Linear Regression works by using the code for linear regression and gradient descent we have been looking at in the class. You are also going to use linear regression from scikit-learn library for machine learning. You are going to learn how to download data from kaggle (a website for datasets and machine learning) and upload submissions to kaggle competitions. And you will be able to compete with the world.
Overview
Pseudocode
Tasks
Load and analyze data
Task 1: Effect of Learning Rate $\alpha$
Load X and y
Linear Regression with Gradient Descent code
Run Gradient Descent on training data
Plot trained line on data
Task 2: Predict test data output and submit it to Kaggle
Upload .csv file to Kaggle.com
Task 3: Use scikit-learn for Linear Regression
Task 4: Multivariate Linear Regression
Resources
Credits
<br>
<br>
Pseudocode
Linear Regressio with Gradient Descent
Load training data into X_train and y_train
[Optionally] normalize features X_train using $x^i = \frac{x^i - \mu^i}{\rho^i}$ where $\mu^i$ is mean and $\rho^i$ is standard deviation of feature $i$
Initialize hyperparameters
iterations
learning rate $\alpha$
Initialize $\theta_s$
At each iteration
Compute cost using $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ where $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
Update $\theta_s$ using $\begin{align} \; \; & \theta_j := \theta_j - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^j_{i} \; & & \text{for j := 0...n} \end{align}$
[Optionally] Break if cost $J(\theta)$ does not change.
<br>
<br>
Download House Prices dataset
The dataset you are going to use in this assignment is called House Prices, available at kaggle. To download the dataset go to dataset data tab. Download 'train.csv', 'test.csv', 'data_description.txt' and 'sample_submission.csv.gz' files. 'train.csv' is going to be used for training the model. 'test.csv' is used to test the model i.e. generalization. 'data_description.txt' contain feature description of the dataset. 'sample_submission.csv.gz' contain sample submission file that you need to generate to be submitted to kaggle.
<br>
Tasks
Effect of Learning Rate $\alpha$
Predict test data output and submit it to Kaggle
Use scikit-learn for Linear Regression
Multivariate Linear Regression
Load and analyze data
End of explanation
# Load X and y variables from pandas dataframe df_train
cols = ['GrLivArea']
X_train = np.array(df_train[cols])
y_train = np.array(df_train[["SalePrice"]])
# Get m = number of samples and n = number of features
m = X_train.shape[0]
n = X_train.shape[1]
# append a column of 1's to X for theta_0
X_train = np.insert(X_train,0,1,axis=1)
Explanation: <br>
Task 1: Effect of Learning Rate $\alpha$
Use Linear Regression code below using X="GrLivArea" as input variable and y="SalePrice" as target variable. Use different values of $\alpha$ given in table below and comment on why they are useful or not and which one is a good choice.
$\alpha=0.000001$:
$\alpha=0.00000001$:
$\alpha=0.000000001$:
<br>
Load X and y
End of explanation
iterations = 1500
alpha = 0.000000001 # change it and find what happens
def h(X, theta): #Linear hypothesis function
hx = np.dot(X,theta)
return hx
def computeCost(theta,X,y): #Cost function
theta is an n- dimensional vector, X is matrix with n- columns and m- rows
y is a matrix with m- rows and 1 column
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))
#Actual gradient descent minimizing routine
def gradientDescent(X,y, theta_start = np.zeros((n+1,1))):
theta_start is an n- dimensional vector of initial theta guess
X is input variable matrix with n- columns and m- rows. y is a matrix with m- rows and 1 column.
theta = theta_start
j_history = [] #Used to plot cost as function of iteration
theta_history = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
tmptheta = theta
# append for plotting
j_history.append(computeCost(theta,X,y))
theta_history.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, theta_history, j_history
Explanation: Linear Regression with Gradient Descent code
End of explanation
#Actually run gradient descent to get the best-fit theta values
initial_theta = np.zeros((n+1,1));
theta, theta_history, j_history = gradientDescent(X_train,y_train,initial_theta)
plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
Explanation: Run Gradient Descent on training data
End of explanation
# predict output for training data
hx_train= h(X_train, theta)
# plot it
plt.scatter(X_train[:,1],y_train)
plt.plot(X_train[:,1],hx_train[:,0], color='red')
plt.show()
Explanation: Plot trained line on data
End of explanation
# read data in pandas frame df_test and check first few rows
# write code here
df_test.head()
# check statistics of test data, make sure no data is missing.
print(df_test.shape)
df_test[cols].describe()
# Get X_test, no target variable (SalePrice) provided in test data. It is what we need to predict.
X_test = np.array(df_test[cols])
#Insert the usual column of 1's into the "X" matrix
X_test = np.insert(X_test,0,1,axis=1)
# predict test data labels i.e. y_test
predict = h(X_test, theta)
# save prediction as .csv file
pd.DataFrame({'Id': df_test.Id, 'SalePrice': predict[:,0]}).to_csv("predict1.csv", index=False)
Explanation: <br>
Task 2: Predict test data output and submit it to Kaggle
In this task we will use the model trained above to predict "SalePrice" on test data. Test data has all the input variables/features but no target variable. Our aim is to use the trained model to predict the target variable for test data. This is called generalization, i.e. how well your model works on unseen data. The output in the form "Id","SalePrice" in a .csv file should be submitted to kaggle. Please provide your score on kaggle after this step as an image. It will be compared to the 5 feature Linear Regression later.
End of explanation
from IPython.display import Image
Image(filename='images/asgn_01.png', width=500)
Explanation: Upload .csv file to Kaggle.com
Create an account at https://www.kaggle.com
Go to https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submit
Upload "predict1.csv" file created above.
Upload your score as an image below.
End of explanation
# import scikit-learn linear model
from sklearn import linear_model
# get X and y
# write code here
# Create linear regression object
# write code here check link above for example
# Train the model using the training sets. Use fit(X,y) command
# write code here
# The coefficients
print('Intercept: \n', regr.intercept_)
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_train) - y_train) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_train, y_train))
# read test X without 1's
# write code here
# predict output for test data. Use predict(X) command.
predict2 = # write code here
# remove negative sales by replacing them with zeros
predict2[predict2<0] = 0
# save prediction as predict2.csv file
# write code here
Explanation: <br>
Task 3: Use scikit-learn for Linear Regression
In this task we are going to use Linear Regression class from scikit-learn library to train the same model. The aim is to move from understanding algorithm to using an exisiting well established library. There is a Linear Regression example available on scikit-learn website as well.
Use the scikit-learn linear regression class to train the model on df_train
Compare the parameters from scikit-learn linear_model.LinearRegression.coef_ to the $\theta_s$ from earlier.
Use the linear_model.LinearRegression.predict on test data and upload it to kaggle. See if your score improves. Provide screenshot.
Note: no need to append 1's to X_train. Scikit-learn's linear regression has a parameter called fit_intercept that is enabled by default.
End of explanation
# define columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
# write code here
# check features range and statistics. Training dataset looks fine as all features has same count.
df_train[cols].describe()
# Load X and y variables from pandas dataframe df_train
# write code here
# Get m = number of samples and n = number of features
# write code here
#Feature normalizing the columns (subtract mean, divide by standard deviation)
#Store the mean and std for later use
#Note don't modify the original X matrix, use a copy
stored_feature_means, stored_feature_stds = [], []
Xnorm = np.array(X_train).copy()
for icol in range(Xnorm.shape[1]):
stored_feature_means.append(np.mean(Xnorm[:,icol]))
stored_feature_stds.append(np.std(Xnorm[:,icol]))
#Skip the first column if 1's
# if not icol: continue
#Faster to not recompute the mean and std again, just used stored values
Xnorm[:,icol] = (Xnorm[:,icol] - stored_feature_means[-1])/stored_feature_stds[-1]
# check data after normalization
pd.DataFrame(data=Xnorm,columns=cols).describe()
# Run Linear Regression from scikit-learn or code given above.
# write code here. Repeat from above.
# To predict output using ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] as input features.
# Check features range and statistics to see if there is any missing data.
# As you can see from count "GarageCars" and "TotalBsmtSF" has 1 missing value each.
df_test[cols].describe()
# Replace missing value with the mean of the feature
df_test['GarageCars'] = df_test['GarageCars'].fillna((df_test['GarageCars'].mean()))
df_test['TotalBsmtSF'] = df_test['TotalBsmtSF'].fillna((df_test['TotalBsmtSF'].mean()))
df_test[cols].describe()
# read test X without 1's
# write code here
# predict using trained model
predict3 = # write code here
# replace any negative predicted saleprice by zero
predict3[predict3<0] = 0
# predict target/output variable for test data using the trained model and upload to kaggle.
# write code to save output as predict3.csv here
Explanation: <br>
Task 4: Multivariate Linear Regression
Lastly use columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] and scikit-learn or the code given above to predict output on test data. Upload it to kaggle like earlier and see how much it improves your score.
Everything remains same except dimensions of X changes.
There might be some data missing from the test or train data that you can check using pandas.DataFrame.describe() function. Below we provide some helping functions for removing that data.
End of explanation
<END_TASK> |
15,882 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
GBDT(Gradient Boosting Decision Tree) ๅ็็ฎไป
0. ๅ่จ
ๆๆๅผๅงไบ่งฃ GBDT ๆถ๏ผๆญปๆดปไธ็่งฃๅณ็ญๆ ่ฟ็งๅๆฎตๅฝๆฐ๏ผๆไนๅฏ่ฝ็ฎๅบไธ้ถๅฏผๆฐใ่ฏปไบ่ฎบๆ Friedman - Greedy Function Approximation
Step1: 2.1 ๅข้ๅฏปไผ
ๆขฏๅบฆไธ้ๆณๆฏๅฉ็จไบๅฏผๆฐๅไธบๆญฅ่ฟๅ่ๅผ๏ผๅฎ่ฆๆฑๅฝๆฐๆฏๅทฒ็ฅไธๅฏๅฏผ็ใ้ฎ้ขๆฏ๏ผ็ฐๅฎไธญ๏ผๆๆถ่ฟไธชๅฝๆฐๆฏๆช็ฅ็๏ผๆฏๅฆ่ฏดๅฏนๅบ็ๆฏไธๅฐๆบๅจใๆไธ็ฅ้ๆไนๅฏนๅฎๅปบๆจก๏ผๅฏไธ็ไฟกๆฏๅฐฑๆฏๅๅฎ่พๅ
ฅ๏ผๅฎๅฐฑไผ่ฟๅ่พๅบใ้ฃๆไนๅๅข๏ผ
้ฆๅ
๏ผๆไปฌๅๅฐ $\frac{\partial f(x)}{\partial x}$ ๆฅ็๏ผๅ ไธบๆญฅ่ฟๅคงๅฐ่ฟๅ $\lambda$ ่ฐๆงใ่ฟ้ๆขฏๅบฆ็ไฝ็จ๏ผๅ
ถๅฎไธป่ฆๆฏๆ็คบๆญฅ่ฟๆนๅใ
็ถๅ๏ผๆไปฌๆขไธชๆ่ทฏๆพๆญฅ่ฟๆนๅใ
ๅ่ฎพๆบๅจ็็ๅฎๆจกๅๆฏ $f(x)$๏ผๅฏนๆไปฌๆฅ่ฏดๅฎๆฏๆช็ฅ็ใๅฐ่ฏๅฆไธ๏ผ
็ฌฌไธๆฌก่ฏๆข๏ผ้ๆบ็ปไธชๅๅงๅผ $x_0$๏ผๅพๅฐ $f(x_0)$ใ
็ฌฌไบๆฌก่ฏๆข๏ผๅ้ๆบ่พๅ
ฅ $x_1$๏ผๅพๅฐ $f(x_1)$ใ
่ฟๆถ๏ผๆๅฆไธไธ็งๆ
ๅต๏ผๅฏไปฅๆๅฏผ็ฌฌไธๆฌก่ฏๆข๏ผ
$f(x_1) < f(x_0)$๏ผไนๅฐฑๆฏ $x_0 \to x_1$็ๆนๅๆฏๅฏน็ใ
$f(x_1) = f(x_0)$๏ผๆฒกๆๆ็จไฟกๆฏใ
$f(x_1) > f(x_0)$๏ผๆนๅๅไบใ
Step2: ๅ ไธบๆไปฌๅง็ปๆฏๆพๆๅฐๅผ็น๏ผ่ฟไธช่ฟ็จๅฐฑๅง็ปๅฆไธๅพๆ็คบ"U"ๅฝขใ้ฃไนๆฏๆฌก็ๆญฅ่ฟๆนๅๅฐฑๅฏ็จ $f(x_1) - f(x_0)$ ๆฅๆ็คบใไนๅฐฑๆฏ่ฏด๏ผ่ฝ็ถๆไปฌๆ ๆณๅฏนๆบๅจ$f(x)$ๅปบๆจก๏ผไฝๆไปฌๅฏไปฅๅฏนๅฏปไผ็่ฟ็จ $z = f(x_i) - f(x_{i+1})$ ๅปบๆจก $g(z)$ใ่ $g(z)$ ๅช่ฆๆปก่ถณๅ่ฐไธ่ฟ้ถ็น๏ผ็่ฎบไธๅฎๅฏไปฅๆฏไปปๆ็๏ผไธๅๅฝๆฐๅฝฑๅ็ๅชๆฏๆถๆ้ๅบฆ๏ผไฝ้ฝไผๆถๆๅฐ $w^*$ ้่ฟใ
2.2 ้ผ่ฟ
ๅๆขไธช่งๅบฆ๏ผๆไปฌไธๅๆณๅฏปไผใ่ๆฏๅ่ฎพ่ฟไธชๆๅฐๅผ$w^$ๅทฒ็ป็ฅ้ไบ๏ผๅฆไฝไปๅๅงๅผ$w_0 = f(x_0)$ไธๆญฅๆญฅ้ผ่ฟๅฐ่ฟไธชๆๅฐๅผ$w^$ใๅ
ถๅฎๅฏไปฅๅฎๅ
จๅ็จไธ่็ๅข้ๅฏปไผๅปบๆจกๆๆณ๏ผๆไปฌๅฎไน $z = f(x_i) - w^*$๏ผไป่ๅฏน้ผ่ฟ่ฟ็จๅปบๆจก $g(z)$ใ
Step3: ๅๆ ทๅฐ๏ผ$g(z)$ๅช่ฆๆปก่ถณไธๅฎๆกไปถ็ๅฝๆฐใๅจ่ฟ้๏ผๅฐฑๆฏ่ฟ้ถ็น๏ผๆญฃๆฐๅบ้ดๅๅข๏ผ่ดๆฐๅบ้ดๅๅ๏ผๆๅธธ่ง็ไปฃ่กจๅฐฑๆฏไบๆฌกๅฝๆฐ $g(z) = z^2$ใ
ๆไปฌๆขณ็ไธๆดไธช้ผ่ฟ่ฟ็จ๏ผๅฆไธๅพ๏ผ | <ASSISTANT_TASK:>
Python Code:
show_image("./res/gradient_descent.jpg", figsize=(12,8))
show_image("./res/iterator.jpg")
Explanation: GBDT (Gradient Boosting Decision Tree): An Introduction to the Principle
0. Preface
When I first tried to understand GBDT, I simply could not see how a piecewise function such as a decision tree could have a first derivative. After reading the paper Friedman - Greedy Function Approximation: A Gradient Boosting Machine, I realised I had completely misunderstood the role the decision tree plays in GBDT. Papers tend to describe simple things in complicated ways, while blog posts are often so brief that they only give the concrete algorithm. The idea behind this article is to explain GBDT clearly in my own words, keeping the mathematical framework in view while also connecting it to the concrete algorithm. There may well be mistakes here, so please read critically.
1. Gradient Boosting and optimization
ref: "Algorithms in Machine Learning (1) - Random Forests and GBDT as Combinations of Decision Tree Models" (a Chinese blog post)
Before talking about GBDT, let's first talk about its two prefixes, Gradient Boosting:
Boosting: an iterative scheme in which every round of training builds on the predictions of the models trained so far.
Put most simply: first train an initial model and compare the true values with its predictions to get the residuals; train another model on those residuals, compute the residuals again, train again, and so on. Each model therefore concentrates on correcting the aspects where the previous models perform worst, and by combining many weak learners in this way we obtain a strong learner.
Gradient: the gradient; for a single variable it is simply the first derivative.
The Boosting step above has to compute residuals, and there are many ways to do that: you can subtract directly, or you can define a function for it. "Gradient" here means the residuals are computed from the gradient.
So Gradient Boosting is an algorithmic framework: a Boosting procedure whose residuals are computed with the gradient, and GBDT is just one concrete implementation that trains decision trees. Anyone who has studied optimization will feel a strong sense of deja vu at this point: yes, it is the familiar steepest gradient descent method.
So we will start from steepest gradient descent, walk step by step to the Gradient Boosting framework, then land on decision trees, and finally, if I have the energy, introduce the further optimization TreeBoost.
2. Steepest gradient descent
I will only sketch the basic idea; for details see Gradient descent - Wikipedia.
Suppose we have a bounded function $|f(x)| < M$ and we need to know its minimum. In everyday examples this function might be the cost of an engineering project or the time a project takes; we want the plan with the smallest loss, i.e. $\operatorname{arg min}_x \, f(x)$. The process of finding the minimum is called optimization, and $f(x)$ is usually called the loss function.
2.0 Theory
Gradient descent is a numerical optimization algorithm. "Numerical" means that in practice it is often hard to solve for the extremum of $f(x)$ analytically, or $f(x)$ is simply a black box, so we have no way to find the extremum directly and can only search among a finite set of numerical solutions.
Gradient descent steps toward the extremum along the function's gradient; its mathematical form is:
\begin{equation}
x_{m+1} = x_m - \lambda \frac{\partial f(x)}{\partial x}
\end{equation}
The concrete process is shown in the figure below (taken from "Scalable Machine Learning - Gradient Descent"):
End of explanation
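As a small companion to the update formula above, here is an added sketch of gradient descent on the toy loss f(x) = (x - 3)^2; the quadratic loss, the starting point and the step size lambda are assumptions made purely for illustration.
def f(x):
    return (x - 3) ** 2

def grad_f(x):
    return 2 * (x - 3)          # derivative of the toy loss

x, lam = 0.0, 0.1               # initial value and step size lambda
for m in range(25):
    x = x - lam * grad_f(x)     # x_{m+1} = x_m - lambda * df/dx
print(x, f(x))                  # x ends up close to the minimiser x* = 3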
show_image("./res/incr_opt.png", figsize=(10,5))
Explanation: 2.1 Incremental optimization
Gradient descent uses the derivative as the reference for each step, so it requires the function to be known and differentiable. The problem is that in practice the function is sometimes unknown, for example when it corresponds to a machine: we have no idea how to model it, and the only information we have is that if we feed it an input it returns an output. What can we do then?
First, look back at $\frac{\partial f(x)}{\partial x}$: since the step size is further scaled by $\lambda$, the real role of the gradient here is mainly to indicate the step direction.
So let's find the step direction in a different way.
Suppose the machine's true model is $f(x)$, which is unknown to us. Probe it as follows:
First trial: pick a random initial value $x_0$ and obtain $f(x_0)$.
Second trial: feed another random input $x_1$ and obtain $f(x_1)$.
At this point there are three possible outcomes, which can guide the next trial:
$f(x_1) < f(x_0)$: the direction $x_0 \to x_1$ is right.
$f(x_1) = f(x_0)$: no useful information.
$f(x_1) > f(x_0)$: the direction is wrong.
End of explanation
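Here is a toy sketch of the probing idea described above (added for illustration; the hidden function, the step size and the number of probes are all made up): we treat f as a black box, compare two evaluations, and keep a move only when it lowers the output.
import random

def black_box(x):
    # pretend we cannot look inside this function; we may only evaluate it
    return (x - 3) ** 2

x0 = random.uniform(-10, 10)
step = 0.5
for _ in range(200):
    x1 = x0 + random.choice([-step, step])   # probe a neighbouring point
    if black_box(x1) < black_box(x0):        # the sign of f(x1) - f(x0) gives the direction
        x0 = x1
print(x0, black_box(x0))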
show_image("./res/approx.png", figsize=(10,5))
Explanation: Because we are always looking for the minimum, this process always has the "U" shape shown in the figure below. The step direction at each iteration can therefore be indicated by $f(x_1) - f(x_0)$. In other words, although we cannot model the machine $f(x)$ itself, we can model the optimization process $z = f(x_i) - f(x_{i+1})$ with a function $g(z)$. As long as $g(z)$ is monotone and passes through zero it can in theory be arbitrary; different choices only affect the convergence speed, but all of them converge to a neighbourhood of $w^*$.
2.2 Approximation
Now change perspective once more and stop thinking about searching for the optimum. Instead, suppose the minimum $w^*$ is already known, and ask how to approach it step by step starting from an initial value $w_0 = f(x_0)$. We can reuse exactly the incremental-optimization modelling idea of the previous section: define $z = f(x_i) - w^*$ and model the approximation process with $g(z)$.
End of explanation
show_image("./res/model.png", figsize=(10,5))
Explanation: Similarly, $g(z)$ only has to be a function satisfying certain conditions. Here that means it passes through zero, increases on the positive axis and decreases on the negative axis; the most common representative is the quadratic $g(z) = z^2$.
Let's lay out the whole approximation process, as in the figure below:
End of explanation
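Reading the chain the other way round, the squared loss g(z) = z^2 is exactly what makes "fit the residuals" the right move at every stage. The sketch below is an added illustration (it uses scikit-learn regression stumps and made-up data, and is not code from the original post) of stacking a few shallow trees, each fit to the residuals left by the previous ones.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(200, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(200)

prediction = np.zeros_like(y)
residual = y.copy()
for stage in range(20):
    stump = DecisionTreeRegressor(max_depth=2)
    stump.fit(X, residual)                 # each weak learner fits the current residuals
    prediction += 0.3 * stump.predict(X)   # shrinkage-style step size
    residual = y - prediction              # residuals = negative gradient of squared loss
print("training mean squared error:", np.mean(residual ** 2))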
<END_TASK> |
15,883 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
Idea behind decision tree
Step1: Get iris data and make a simple prediction
Step2: Create training and test data sets
Step3: Train a classification tree
Step4: Print the predicted class of iris
Step5: Based on the target values in the training set, calculate the training accuracy
Step6: And here's the testing accuracy
Step7: Visualize the tree
Step8: How it works
Split definition
To decide which variable is considered at each node, and what the split point is, we need a metric to minimize
Step9: where p_mk is the proportion of training data in the m-th region that are from the k-th class.
Values of p_mk close to 0 or 1 represent better purity, so we minimize G.
Cross validation
Step10: The tree shown above is overtrained. Let's limit the depth.
Step11: Take-away
Step12: where x_i and y_i are the attribute and target variables for the i-th training data point, and y_hat is the mean of the target variables in the region.
Example
Let's create an example with a noisy sine function.
Step13: Test a set of regression trees that have different depth limits.
Step14: Visualization of tree with depth=2
Step15: Visualization of tree with depth=5
Step16: Visualization of tree with no limitation on depth. | <ASSISTANT_TASK:>
Python Code:
%pylab inline
Explanation: Introduction
Idea behind decision tree: split the space of the attribute variables with recursive, binary splits, with the aim of high purity for all regions.
The collection of paths to all regions makes a tree.
Vocabulary
attribute, or attribute variable: a dimension in which a data point has a value (typically excluding the target variable)
target variable: the variable whose value is to be predicted
the attributes of the i-th data point are labeled X_i.
the value of the target variable for the i-th data point is labeled y_i.
Trees that predict a quantitative target variable are called regression trees, and trees that predict qualitative targets are called classification trees.
Play
End of explanation
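To make the "collection of paths" idea concrete, here is a tiny added sketch of what a depth-2 classification tree amounts to once it has been learned: nested threshold rules. The cut points below are made up for illustration, not taken from a fitted model.
def toy_tree_predict(petal_length, petal_width):
    if petal_length < 2.5:          # first split
        return "setosa"
    elif petal_width < 1.8:         # second split inside the right branch
        return "versicolor"
    else:
        return "virginica"

print(toy_tree_predict(1.4, 0.2))
print(toy_tree_predict(5.1, 2.3))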
from sklearn import datasets
iris = datasets.load_iris()
import numpy as np
import random
Explanation: Get iris data and make a simple prediction
End of explanation
iris_X = iris.data
iris_y = iris.target
r = random.randint(0,100)
np.random.seed(r)
idx = np.random.permutation(len(iris_X))
subset = 25
iris_X_train = iris_X[idx[:-subset]] # all but the last 'subset' rows
iris_y_train = iris_y[idx[:-subset]]
iris_X_test = iris_X[idx[-subset:]] # the last 'subset' rows
iris_y_test = iris_y[idx[-subset:]]
Explanation: Create training and test data sets
End of explanation
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris_X_train,iris_y_train)
Explanation: Train a classification tree
End of explanation
clf.predict(iris_X_train)
Explanation: Print the predicted class of iris
End of explanation
def accuracy(x,y):
output = []
for i,j in zip(x,y):
if i == j:
output.append(1)
else:
output.append(0)
return np.mean(output)
print("training accuracy: {}".format(accuracy(clf.predict(iris_X_train),iris_y_train)))
Explanation: Based on the target values in the training set, calculate the training accuracy:
End of explanation
print("testing accuracy: {}".format(accuracy(clf.predict(iris_X_test),iris_y_test)))
Explanation: And here's the testing accuracy:
End of explanation
from sklearn.externals.six import StringIO # StringIO streams data as a string to "output file"
from IPython.display import Image # need Image to display inline
# export the tree data as a string to a file
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data)
# compatible with modern pyparsing
import pydotplus as pydot
# or olde-timey
# import pydot
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
Explanation: Visualize the tree:
End of explanation
Image(filename="gini.png")
Explanation: How it works
Split definition
To decide which variable is considered at each node, and what the split point is, we need a metric to minimize:
End of explanation
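As a concrete illustration of the impurity measure shown in the image above, here is a small added sketch that computes the Gini index of a region from its class proportions; it is illustrative only and not the exact code scikit-learn runs internally.
import numpy as np

def gini(labels):
    # G = sum_k p_k * (1 - p_k) over the classes present in the region
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1 - p)))

print(gini([0, 0, 0, 0]))        # pure region -> 0.0
print(gini([0, 0, 1, 1]))        # evenly mixed -> 0.5
print(gini([0, 0, 0, 1, 2, 2]))  # mixed three-class region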
classifier_1 = tree.DecisionTreeClassifier()
X = numpy.array([[0],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10]])
Y = numpy.array([0,1,2,3,4,5,6,7,8,9,10])
classifier_1 = classifier_1.fit(X,Y)
classifier_1.predict(X)
## print the tree
# export the tree data as a string to a file
dot_data = StringIO()
tree.export_graphviz(classifier_1, out_file=dot_data)
#
import pydotplus as pydot
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
Explanation: where p_mk is the proportion of training data in the m-th region that are from the k-th class.
Values of p_mk close to 0 or 1 represent better purity, so we minimize G.
Cross validation: a side note
Cross validation is a generalization of the testing/training data set paradigm. A reasonable test for the validity of a tree is to re-sample the training and testing data set, re-fitting the tree each time. Small variations in the resulting trees indicate a stable model.
A Problematic Example
End of explanation
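Acting on the cross-validation side note above, here is a small added sketch that refits the iris classification tree on several random train/test splits and reports the spread of test accuracies; stable scores across splits suggest a stable model. The number of splits and the hold-out size are arbitrary choices.
stability_scores = []
for seed in range(10):
    np.random.seed(seed)
    idx = np.random.permutation(len(iris_X))
    clf_cv = tree.DecisionTreeClassifier()
    clf_cv.fit(iris_X[idx[:-25]], iris_y[idx[:-25]])
    stability_scores.append(clf_cv.score(iris_X[idx[-25:]], iris_y[idx[-25:]]))
print("mean %.3f, std %.3f" % (np.mean(stability_scores), np.std(stability_scores)))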
classifier_2 = tree.DecisionTreeClassifier(max_depth=3)
classifier_2 = classifier_2.fit(X,Y)
classifier_2.predict(X)
dot_data = StringIO()
tree.export_graphviz(classifier_2, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
Explanation: The tree shown above is overtrained. Let's limit the depth.
End of explanation
Image(filename="rss.png")
Explanation: Take-away:
trees aren't great at predicting linear relationships between attrtibute and target variables. But standard linear regression is.
tree size needs to be controlled to avoid over training
Regression Trees
Concepts
The predicted target variable is the mean of all the training target variable in the region
The split between R_1 and R_2 minimizes the following:
End of explanation
# Create a random dataset
rng = np.random.RandomState(1)
# Set the range to [0,5] and sort it numerically
X = np.sort(5 * rng.rand(80, 1), axis=0)
# for target, take the sine of the data, and place it in an array
y = np.sin(X).ravel()
# add some noise to every fifth point
y[::5] += 3 * (0.5 - rng.rand(16))
Explanation: where x_i and y_i are the attribute and target variables for the i-th training data point, and y_hat is the mean of the target variables in the region.
Example
Let's create an example with a noisy sine function.
End of explanation
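Before fitting the regression trees, here is a small added sketch of the split criterion described above: for a handful of candidate split points it sums the squared deviations from each region's mean. The candidate thresholds are arbitrary.
def split_rss(x, y_values, threshold):
    left, right = y_values[x <= threshold], y_values[x > threshold]
    rss = 0.0
    for region in (left, right):
        if len(region):
            rss += np.sum((region - region.mean()) ** 2)   # squared deviation from the region mean
    return rss

x_flat = X.ravel()
for t in (1.0, 2.0, 3.0, 4.0):
    print("threshold %.1f -> RSS %.3f" % (t, split_rss(x_flat, y, t)))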
# use a regression tree model
from sklearn.tree import DecisionTreeRegressor
clf_1 = DecisionTreeRegressor(max_depth=2)
clf_2 = DecisionTreeRegressor(max_depth=5)
clf_3 = DecisionTreeRegressor()
clf_1.fit(X, y)
clf_2.fit(X, y)
clf_3.fit(X, y)
# generate test data in correct range, and place each pt in its own array
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = clf_1.predict(X_test)
y_2 = clf_2.predict(X_test)
y_3 = clf_3.predict(X_test)
import matplotlib.pyplot as plt
plt.figure()
plt.scatter(X, y, c="k", label="data")
plt.plot(X_test, y_1, c="g", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, c="r", label="max_depth=5", linewidth=2)
plt.plot(X_test, y_3, c="b", label="max_depth=inf", linewidth=1)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
dot_data = StringIO()
tree.export_graphviz(clf_1, out_file=dot_data)
tree.export_graphviz(clf_2, out_file=dot_data)
tree.export_graphviz(clf_3, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Explanation: Test a set of regression trees that have different depth limits.
End of explanation
Image(graph[0].create_png())
Explanation: Visualization of tree with depth=2
End of explanation
Image(graph[1].create_png())
Explanation: Visualization of tree with depth=5
End of explanation
Image(graph[2].create_png())
Explanation: Visualization of tree with no limitation on depth.
End of explanation
<END_TASK> |
15,884 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, youโre going to take a peek into the realm of neural network machine translation. Youโll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
###source_sent = [ sent for sent in source_text.split("\n") ]
###target_sent = [ sent + ' <EOS>' for sent in target_text.split("\n") ]
###source_ids = [ [ source_vocab_to_int[word] for word in sent.split() ] for sent in source_sent ]
###target_ids = [ [ target_vocab_to_int[word] for word in sent.split() ] for sent in target_sent ]
# Advice from Udacity Reviewer
target_ids = [[target_vocab_to_int[w] for w in s.split()] + [target_vocab_to_int['<EOS>']] for s in target_text.split('\n')]
source_ids = [[source_vocab_to_int[w] for w in s.split()] for s in source_text.split('\n')]
return source_ids, target_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
input_ = tf.placeholder( tf.int32, [None, None], name = "input" )
target_ = tf.placeholder( tf.int32, [None, None], name = "target" )
learn_rate_ = tf.placeholder( tf.float32, None, name = "learn_rate" )
keep_prob_ = tf.placeholder( tf.float32, None, name = "keep_prob" )
target_sequence_length = tf.placeholder( tf.int32, [None], name="target_sequence_length" )
max_target_sequence_length = tf.reduce_max( target_sequence_length )
source_sequence_length = tf.placeholder( tf.int32, [None], name="source_sequence_length" )
return input_, target_, learn_rate_, keep_prob_, target_sequence_length, max_target_sequence_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
go_id = source_vocab_to_int[ '<GO>' ]
ending_text = tf.strided_slice( target_data, [0, 0], [batch_size, -1], [1, 1] )
decoded_text = tf.concat( [ tf.fill([batch_size, 1], go_id), ending_text ], 1)
return decoded_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
encod_inputs = tf.contrib.layers.embed_sequence( rnn_inputs, source_vocab_size, encoding_embedding_size )
rnn_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell( rnn_size ) for _ in range(num_layers) ] )
# Adding dropout layer
rnn_cell = tf.contrib.rnn.DropoutWrapper( rnn_cell, output_keep_prob = keep_prob )
rnn_output, rnn_state = tf.nn.dynamic_rnn( rnn_cell, encod_inputs, source_sequence_length, dtype = tf.float32 )
return rnn_output, rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
decode_helper = tf.contrib.seq2seq.TrainingHelper( dec_embed_input, target_sequence_length )
decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer )
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True,
maximum_iterations= max_summary_length )
return decoder_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile( tf.constant( [start_of_sequence_id], dtype=tf.int32),
[ batch_size ], name = "start_tokens" )
decode_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper( dec_embeddings, start_tokens, end_of_sequence_id )
decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer = output_layer )
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True,
maximum_iterations = max_target_sequence_length )
return decoder_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
from tensorflow.python.layers import core as layers_core
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
decode_embed = tf.Variable( tf.random_uniform( [ target_vocab_size, decoding_embedding_size ] ) )
decode_embed_input = tf.nn.embedding_lookup( decode_embed, dec_input )
decode_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers) ] )
# Adding dropout layer
decode_cell = tf.contrib.rnn.DropoutWrapper( decode_cell, output_keep_prob = keep_prob )
output_layer = layers_core.Dense( target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer( mean = 0.0, stddev=0.1 ) )
with tf.variable_scope( "decoding" ) as decoding_scope:
decode_outputs_train = decoding_layer_train( encoder_state, decode_cell, decode_embed_input,
target_sequence_length, max_target_sequence_length, output_layer, keep_prob )
SOS_id = target_vocab_to_int[ "<GO>" ]
EOS_id = target_vocab_to_int[ "<EOS>" ]
with tf.variable_scope( "decoding", reuse=True) as decoding_scope:
decode_outputs_infer = decoding_layer_infer( encoder_state, decode_cell, decode_embed, SOS_id,EOS_id,
max_target_sequence_length,target_vocab_size, output_layer, batch_size, keep_prob )
return decode_outputs_train, decode_outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
encode_output, encode_state = encoding_layer( input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size )
decode_input = process_decoder_input( target_data, target_vocab_to_int, batch_size )
decode_outputs_train, decode_outputs_infer = decoding_layer( decode_input, encode_state,
target_sequence_length, tf.reduce_max( target_sequence_length ), rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size )
return decode_outputs_train, decode_outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.8
display_step = 10
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
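As a quick illustration, pad_sentence_batch on a toy batch (with the pad id assumed to be 0) behaves as follows:
toy_batch = [[5, 6, 7], [8, 9], [10]]
print(pad_sentence_batch(toy_batch, 0))
# -> [[5, 6, 7], [8, 9, 0], [10, 0, 0]]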
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sequence = [ vocab_to_int.get( word, vocab_to_int[ "<UNK>"] ) for word in sentence.lower().split() ]
return sequence
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
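A tiny illustrative check with a made-up vocabulary (the ids below are hypothetical):
toy_vocab = {'he': 10, 'saw': 11, 'a': 12, '<UNK>': 2}
print(sentence_to_seq('He saw a Zeppelin', toy_vocab))  # -> [10, 11, 12, 2]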
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
<END_TASK> |
15,885 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Lifetimes
<h3>Learning goals</h3>
<ul>
<li>Relativistic kinematics.
<li>Standard model particles.
<li>Special Relativity.
</ul>
<b>Background</b>
Every type of particle has different characteristics. They each have different masses, lifetimes, decay methods and many other properties.
To find the distance a particle travels in one lifetime, you need to know the lifetime of the particle and the speed of the particle. Classically, the formula for the distance travelled in one lifetime is $d = vt$, where $v$ is the speed of the particle and $t$ is the time the particle lives before it decays.
However, in many particle physics experiments, the particles are moving close to (but always less than!) the speed of light and this means that they experience time dilation, which means their internal clocks run more slowly.
Suppose a particle lives for the length of time equal to its mean lifetime. This quantity is defined in the particle's rest frame, which means that the time that we measure in the lab (in our particle physics experiment) is generally longer. The really useful quantity we are looking for is the flight-length
Step1: <h3>Particles</h3>
<ul>
<li>$\mu^\pm$
<li>$\tau^\pm$
<li>$\pi^\pm$
<li>$\pi^0$
<li>$\rho^0$
<li>$K^\pm$
<li>$K^0_{\rm short}$
<li>$K^0_{\rm long}$
<li>$K^*(892)$
<li>$D^\pm$
<li>$B^\pm$
<li>$B^0$
<li>$J/ \psi$
<li>$\Upsilon(1S)$
<li>proton
<li>neutron
<li>$\Delta^+$
<li>$\Lambda^0$
<li>$\Lambda_c$
</ul>
<h2><font color="red">Challenge!</font></h2>
Finish the table for every particle listed above with an output of the particle's name, the lifetime, the mass, the momentum, and how far they travel in one lifetime assuming a momentum of 1 GeV/c, 10 GeV/c, and 100 GeV/c. You may need the following relativistic relationships.
$$\vec{\beta} = \frac{\vec{pc}}{E} \qquad \qquad E^2 = (mc^2)^2 + (pc)^2$$
Step2: The scale of many modern physics detectors ranges from the order of centimeters to 10's of meters. Given that information, what particles do you think will actually live long enough to travel <i>through</i> parts of the detector?
Step3: Which particles will decay (on average) before they reach the detectors? This means that these particles have to be reconstructed from their decay products.
Step4: Make a plot where the x-axis is the names of the above particles (or a number corresponding to each, where the number/particle relationship is clearly identified) and the y-axis is the lifetime of the particle. Color code the data points according to whether the primary decay is EM, weak, or strong. {\it Do not plot the proton and neutron on this plot}. Do you see the "pattern"? You may need to use a log-scale for the y-axis. | <ASSISTANT_TASK:>
Python Code:
particles = ["B+/-","D+/-","J/Psi"]
lifetimes = [1.64e-12,1.4e-12,7e-21]
c = 3e8 # m/s
v = c
for p,l in zip(particles,lifetimes):
distance = v*l
print "%-5s lifetime=%4.2e s distance traveled=%4.2e m" % (p,l,distance)
Explanation: Lifetimes
<h3>Learning goals</h3>
<ul>
<li>Relativistic kinematics.
<li>Standard model particles.
<li>Special Relativity.
</ul>
<b>Background</b>
Every type of particle has different characteristics. They each have different masses, lifetimes, decay methods and many other properties.
To find the distance a particle travels in one lifetime, you need to know the lifetime of the particle and the speed of the particle. Classically, the formula for the distance travelled in one lifetime is $d = vt$, where $v$ is the speed of the particle and $t$ is the time the particle lives before it decays.
However, in many particle physics experiments, the particles are moving close to (but always less than!) the speed of light and this means that they experience time dilation, which means their internal clocks run more slowly.
Suppose a particle lives for the length of time equal to its mean lifetime. This quantity is defined in the particle's rest frame, which means that the time that we measure in the lab (in our particle physics experiment) is generally longer. The really useful quantity we are looking for is the flight-length: the distance the particle travels between the time it is created and the time it decays. This flight-length is longer in the lab, because of the time dilation effect. The distance measured in the lab is the mean free path and is given by
$$d = \gamma \beta c \tau$$
where $\beta = v/c$ and $\gamma = \frac{1}{\sqrt{1-\beta^2}}$.
<b>Let's code!</b>
Here is a sample code that creates a table of the lifetime and distance traveled in one lifetime for three different particles, but the distance is calculated incorrectly.
End of explanation
# Your code here
Explanation: <h3>Particles</h3>
<ul>
<li>$\mu^\pm$
<li>$\tau^\pm$
<li>$\pi^\pm$
<li>$\pi^0$
<li>$\rho^0$
<li>$K^\pm$
<li>$K^0_{\rm short}$
<li>$K^0_{\rm long}$
<li>$K^*(892)$
<li>$D^\pm$
<li>$B^\pm$
<li>$B^0$
<li>$J/ \psi$
<li>$\Upsilon(1S)$
<li>proton
<li>neutron
<li>$\Delta^+$
<li>$\Lambda^0$
<li>$\Lambda_c$
</ul>
<h2><font color="red">Challenge!</font></h2>
Finish the table for every particle listed above with an output of the particle's name, the lifetime, the mass, the momentum, and how far they travel in one lifetime assuming a momentum of 1 GeV/c, 10 GeV/c, and 100 GeV/c. You may need the following relativistic relationships.
$$\vec{\beta} = \frac{\vec{pc}}{E} \qquad \qquad E^2 = (mc^2)^2 + (pc)^2$$
End of explanation
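As a minimal sketch of the kinematics only (using the muon mass and lifetime as assumed illustration values that you would replace for each particle in your table), the two relations above turn a momentum and a mass into $\beta$, $\gamma$ and the lab-frame flight length $d = \gamma \beta c \tau$:
import numpy as np
c = 3e8            # m/s
mass = 0.1057      # GeV/c^2, roughly the muon mass (illustration only)
lifetime = 2.2e-6  # s, roughly the muon mean lifetime (illustration only)
for p in [1.0, 10.0, 100.0]:          # momentum in GeV/c
    E = np.sqrt(mass**2 + p**2)       # GeV, from E^2 = (mc^2)^2 + (pc)^2
    beta = p / E                      # beta = pc/E
    gamma = E / mass                  # gamma = E/(mc^2)
    d = gamma * beta * c * lifetime   # lab-frame flight length in meters
    print("p=%5.1f GeV/c beta=%.5f gamma=%8.2f d=%.3e m" % (p, beta, gamma, d))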
# Your code here
Explanation: The scale of many modern physics detectors ranges from the order of centimeters to 10's of meters. Given that information, what particles do you think will actually live long enough to travel <i>through</i> parts of the detector?
End of explanation
# Your code here
Explanation: Which particles will decay (on average) before they reach the detectors? This means that these particles have to be reconstructed from their decay products.
End of explanation
# Your code here
Explanation: Make a plot where the x-axis is the names of the above particles (or a number corresponding to each, where the number/particle relationship is clearly identified) and the y-axis is the lifetime of the particle. Color code the data points according to whether the primary decay is EM, weak, or strong. {\it Do not plot the proton and neutron on this plot}. Do you see the "pattern"? You may need to use a log-scale for the y-axis.
End of explanation
<END_TASK> |
15,886 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Create Fake Index Data
Step1: Build and run Target Strategy
I will first run a strategy that rebalances every day.
Then I will use those weights as target to rebalance to whenever the PTE is too high.
Step2: Now use the PTE rebalance algo to trigger a rebalance whenever predicted tracking error is greater than 1%.
Step3: If we plot the total risk contribution of each asset class and divide by the total volatility, then we can see that both strategy's contribute roughly similar amounts of volatility from both of the securities.
Step4: Looking at the Target strategy's and PTE strategy's Total Risk they are very similar. | <ASSISTANT_TASK:>
Python Code:
names = ['foo','bar','rf']
dates = pd.date_range(start='2015-01-01',end='2018-12-31', freq=pd.tseries.offsets.BDay())
n = len(dates)
rdf = pd.DataFrame(
np.zeros((n, len(names))),
index = dates,
columns = names
)
np.random.seed(1)
rdf['foo'] = np.random.normal(loc = 0.1/252,scale=0.2/np.sqrt(252),size=n)
rdf['bar'] = np.random.normal(loc = 0.04/252,scale=0.05/np.sqrt(252),size=n)
rdf['rf'] = 0.
pdf = 100*np.cumprod(1+rdf)
pdf.plot()
Explanation: Create Fake Index Data
End of explanation
selectTheseAlgo = bt.algos.SelectThese(['foo','bar'])
# algo to set the weights to 1/vol contributions from each asset
# with data over the last 3 months excluding yesterday
weighInvVolAlgo = bt.algos.WeighInvVol(
lookback=pd.DateOffset(months=3),
lag=pd.DateOffset(days=1)
)
# algo to rebalance the current weights to weights set in target.temp
rebalAlgo = bt.algos.Rebalance()
# a strategy that rebalances daily to 1/vol weights
strat = bt.Strategy(
'Target',
[
selectTheseAlgo,
weighInvVolAlgo,
rebalAlgo
]
)
# set integer_positions=False when positions are not required to be integers(round numbers)
backtest = bt.Backtest(
strat,
pdf,
integer_positions=False
)
res_target = bt.run(backtest)
res_target.get_security_weights().plot()
Explanation: Build and run Target Strategy
I will first run a strategy that rebalances every day.
Then I will use those weights as target to rebalance to whenever the PTE is too high.
End of explanation
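For reference, a minimal sketch (with assumed volatility numbers) of what the inverse-volatility weighting above computes at each date: weights proportional to 1/volatility, normalized to sum to one.
import numpy as np
vols = np.array([0.20, 0.05])           # assumed trailing vols of 'foo' and 'bar'
w = (1.0 / vols) / (1.0 / vols).sum()
print(w)                                # -> [0.2 0.8]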
# algo to fire whenever predicted tracking error is greater than 1%
wdf = res_target.get_security_weights()
PTE_rebalance_Algo = bt.algos.PTE_Rebalance(
0.01,
wdf,
lookback=pd.DateOffset(months=3),
lag=pd.DateOffset(days=1),
covar_method='standard',
annualization_factor=252
)
selectTheseAlgo = bt.algos.SelectThese(['foo','bar'])
# algo to set the weights to the target weights (wdf) saved from the
# daily-rebalanced Target strategy above
weighTargetAlgo = bt.algos.WeighTarget(
wdf
)
rebalAlgo = bt.algos.Rebalance()
# a strategy that rebalances to the target weights whenever the predicted tracking error exceeds the threshold
strat = bt.Strategy(
'PTE',
[
PTE_rebalance_Algo,
selectTheseAlgo,
weighTargetAlgo,
rebalAlgo
]
)
# set integer_positions=False when positions are not required to be integers(round numbers)
backtest = bt.Backtest(
strat,
pdf,
integer_positions=False
)
res_PTE = bt.run(backtest)
fig, ax = plt.subplots(nrows=1,ncols=1)
res_target.get_security_weights().plot(ax=ax)
realized_weights_df = res_PTE.get_security_weights()
realized_weights_df['PTE foo'] = realized_weights_df['foo']
realized_weights_df['PTE bar'] = realized_weights_df['bar']
realized_weights_df = realized_weights_df.loc[:,['PTE foo', 'PTE bar']]
realized_weights_df.plot(ax=ax)
ax.set_title('Target Weights vs PTE Weights')
ax.plot()
trans_df = pd.DataFrame(
index=res_target.prices.index,
columns=['Target','PTE']
)
transactions = res_target.get_transactions()
transactions = (transactions['quantity'] * transactions['price']).reset_index()
bar_mask = transactions.loc[:,'Security'] == 'bar'
foo_mask = transactions.loc[:,'Security'] == 'foo'
trans_df.loc[trans_df.index[4:],'Target'] = np.abs(transactions[bar_mask].iloc[:,2].values) + np.abs(transactions[foo_mask].iloc[:,2].values)
transactions = res_PTE.get_transactions()
transactions = (transactions['quantity'] * transactions['price']).reset_index()
bar_mask = transactions.loc[:,'Security'] == 'bar'
foo_mask = transactions.loc[:,'Security'] == 'foo'
trans_df.loc[transactions[bar_mask].iloc[:,0],'PTE'] = np.abs(transactions[bar_mask].iloc[:,2].values)
trans_df.loc[transactions[foo_mask].iloc[:,0],'PTE'] += np.abs(transactions[foo_mask].iloc[:,2].values)
trans_df = trans_df.fillna(0)
fig, ax = plt.subplots(nrows=1,ncols=1)
trans_df.cumsum().plot(ax=ax)
ax.set_title('Cumulative sum of notional traded')
ax.plot()
Explanation: Now use the PTE rebalance algo to trigger a rebalance whenever predicted tracking error is greater than 1%.
End of explanation
weights_target = res_target.get_security_weights()
rolling_cov_target = pdf.loc[:,weights_target.columns].pct_change().rolling(window=3*20).cov()*252
weights_PTE = res_PTE.get_security_weights().loc[:,weights_target.columns]
rolling_cov_PTE = pdf.loc[:,weights_target.columns].pct_change().rolling(window=3*20).cov()*252
trc_target = pd.DataFrame(
np.nan,
index = weights_target.index,
columns = weights_target.columns
)
trc_PTE = pd.DataFrame(
np.nan,
index = weights_PTE.index,
columns = [x + " PTE" for x in weights_PTE.columns]
)
for dt in pdf.index:
trc_target.loc[dt,:] = weights_target.loc[dt,:].values*(rolling_cov_target.loc[dt,:].values@weights_target.loc[dt,:].values)/np.sqrt(weights_target.loc[dt,:].values@rolling_cov_target.loc[dt,:].values@weights_target.loc[dt,:].values)
trc_PTE.loc[dt,:] = weights_PTE.loc[dt,:].values*(rolling_cov_PTE.loc[dt,:].values@weights_PTE.loc[dt,:].values)/np.sqrt(weights_PTE.loc[dt,:].values@rolling_cov_PTE.loc[dt,:].values@weights_PTE.loc[dt,:].values)
fig, ax = plt.subplots(nrows=1,ncols=1)
trc_target.plot(ax=ax)
trc_PTE.plot(ax=ax)
ax.set_title('Total Risk Contribution')
ax.plot()
Explanation: If we plot the total risk contribution of each asset class and divide by the total volatility, we can see that both strategies receive roughly similar volatility contributions from each of the two securities.
End of explanation
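For reference, a minimal sketch (with made-up numbers) of the risk-contribution computation used above: with weights w and annualized covariance S, asset i contributes w_i * (S w)_i / sqrt(w' S w), and the contributions sum to the total volatility.
import numpy as np
w = np.array([0.2, 0.8])                # assumed weights for 'foo' and 'bar'
S = np.array([[0.0400, 0.0010],         # assumed annualized covariance matrix
              [0.0010, 0.0025]])
total_vol = np.sqrt(w @ S @ w)
trc = w * (S @ w) / total_vol
print(total_vol, trc, trc.sum())        # trc.sum() equals total_vol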
fig, ax = plt.subplots(nrows=1,ncols=1)
trc_target.sum(axis=1).plot(ax=ax,label='Target')
trc_PTE.sum(axis=1).plot(ax=ax,label='PTE')
ax.legend()
ax.set_title('Total Risk')
ax.plot()
transactions = res_PTE.get_transactions()
transactions = (transactions['quantity'] * transactions['price']).reset_index()
bar_mask = transactions.loc[:,'Security'] == 'bar'
dates_of_PTE_transactions = transactions[bar_mask].iloc[:,0]
dates_of_PTE_transactions
fig, ax = plt.subplots(nrows=1,ncols=1)
np.sum(np.abs(trc_target.values - trc_PTE.values))
#.abs().sum(axis=1).plot()
ax.set_title('Total Risk')
ax.plot(
trc_target.index,
np.sum(np.abs(trc_target.values - trc_PTE.values),axis=1),
label='PTE'
)
for i,dt in enumerate(dates_of_PTE_transactions):
if i == 0:
ax.axvline(x=dt,color='red',label='PTE Transaction')
else:
ax.axvline(x=dt,color='red')
ax.legend()
Explanation: Looking at the total risk of the Target and PTE strategies, they are very similar.
End of explanation
<END_TASK> |
15,887 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
RoadRunner transit model example III - LDTk-based limb darkening
Author
Step1: Import the model
First, we import the RoadRunnerModel and LDTkLDModel and some simple transmission functions from LDTk.
Step2: Example 1
Step3: and given to the RoadRunnnerModel as any other limb darkening model.
Step4: after which the transit model evaluation goes as usual
Step5: Example 2
Step6: Thawing the model
After thawing, the model takes a random sample from the limb darkening profile sample set every time it is evaluated. We don't want this behaviour when fitting a model to observations, since this sort of randomness can easily confuse even the best optimiser, but is exactly what we want then doing MCMC for parameter posterior estimation. | <ASSISTANT_TASK:>
Python Code:
%pylab inline
rc('figure', figsize=(13,6))
def plot_lc(time, flux, c=None, ylim=(0.9865, 1.0025), ax=None, alpha=1):
if ax is None:
fig, ax = subplots()
else:
fig, ax = None, ax
ax.plot(time, flux, c=c, alpha=alpha)
ax.autoscale(axis='x', tight=True)
setp(ax, xlabel='Time [d]', ylabel='Flux', xlim=time[[0,-1]], ylim=ylim)
if fig is not None:
fig.tight_layout()
return ax
Explanation: RoadRunner transit model example III - LDTk-based limb darkening
Author: Hannu Parviainen<br>
Last modified: 16.9.2020
The LDTk limb darkening model, pytransit.LDTkLDModel, works as an example of a more complex limb darkening model that is best implemented as a subclass of pytransit.LDModel. The LDTk limb darkening model uses LDTk to create a set of stellar limb darkening profile samples given the stellar $T_\mathrm{Eff}$, $\log g$, and $z$ with their uncertainties, and uses the profiles directly to calculate the transit. The profiles are created from the PHOENIX-calculated specific intensity spectra by Husser (2013), and the model completely avoids approximating the limb darkening profile with an analytical function.
The model is parameter-free after the stellar parameters have been given. The model can be frozen for model optimisation, and thawed for MCMC posterior estimation. When frozen, the model returns the average limb darkening profile, interpolated at the given $\mu$ locations. When thawed, each model evaluation chooses a random limb darkening profile from the sample and uses interpolation to evaluate the model at the desired $\mu$ values.
End of explanation
from pytransit import RoadRunnerModel, LDTkLDModel
from ldtk import sdss_g, sdss_i, sdss_z
time = linspace(-0.05, 0.05, 1500)
Explanation: Import the model
First, we import the RoadRunnerModel and LDTkLDModel and some simple transmission functions from LDTk.
End of explanation
ldm = LDTkLDModel(teff=(5500, 150), logg=(4.5, 0.1), z=(0.0, 0.1), pbs=[sdss_i], frozen=True)
Explanation: Example 1: single passband
The LDTkLDModel is initialised by giving it the stellar parameters and passband transmission functions,
End of explanation
tm = RoadRunnerModel(ldm)
tm.set_data(time)
Explanation: and given to the RoadRunnnerModel as any other limb darkening model.
End of explanation
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1);
Explanation: after which the transit model evaluation goes as usual
End of explanation
ldm = LDTkLDModel([sdss_g, sdss_z], (5500, 150), (4.5, 0.1), (0.0, 0.1), frozen=True)
lcids = zeros(time.size, int)
lcids[time.size//2:] = 1
tm = RoadRunnerModel(ldm)
tm.set_data(time, lcids=lcids, pbids=[0,1])
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1, ylim=(0.986, 1.0025));
Explanation: Example 2: multiple passbands
End of explanation
ldm.frozen = False
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1);
for i in range(10):
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1, ax=ax, c='C0', alpha=0.25);
setp(ax, ylim=(0.986, 1.0025))
Explanation: Thawing the model
After thawing, the model draws a random sample from the limb darkening profile sample set every time it is evaluated. We don't want this behaviour when fitting a model to observations, since this sort of randomness can easily confuse even the best optimiser, but it is exactly what we want when doing MCMC for parameter posterior estimation.
End of explanation
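As a minimal sketch (not from the original notebook) of how the frozen flag is meant to be used, here is a hypothetical log-likelihood that treats the previously simulated flux1 as stand-in data and assumes an arbitrary white-noise variance; frozen evaluations are deterministic and suit optimizers, thawed evaluations are stochastic and suit MCMC samplers:
def lnlike(pv, frozen):
    ldm.frozen = frozen
    model_flux = tm.evaluate(k=pv[0], ldc=[None], t0=pv[1], p=1.0, a=4.2,
                             i=0.5*pi, e=0.0, w=0.0)
    return -0.5 * np.sum((flux1 - model_flux)**2 / 1e-6)  # assumed noise variance
print(lnlike([0.1, 0.0], frozen=True))   # deterministic: good for optimizers
print(lnlike([0.1, 0.0], frozen=False))  # stochastic: good for MCMC samplers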
<END_TASK> |
15,888 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2022 The TensorFlow Authors.
Step1: Proximities and Prototypes with Random Forests
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train a Random Forest model
The method relies on a pre-trained random forest model. First, we train a random forest model with TensorFlow Decision Forests library on the Adult binary classification dataset. The Adult dataset is well suited for this example as it contains columns that don't have a natural way to be compared.
Step3: Following are the first five examples of the training dataset. Notice that
different columns represent different quantities. For example, how would you compare
the distance between relationship and age?
Step4: A Random Forest is trained as follows
Step5: The performance of the Random Forest model is
Step6: This is an expected accuracy value for Random Forest models on this dataset. It indicates that the model is correctly trained.
We can also measure the accuracy of the model on the test datasets
Step7: Proximities
First, we inspect the number of trees in the model and the number of examples in the test datasets.
Step8: The method predict_get_leaves() returns the index of the active leaf for each example and each tree.
Step10: Here, leaves[i,j] is the index of the active leaf of the i-th
example in the j-th tree.
Note
Step11: Here, proximity[i,j] is the proximity in between the example i and j.
The proximity matrix
Step12: The proximity matrix has several interesting properties, notably, it is symmetrical, positive, and the diagonal elements are all 1.
Projection
Our first use of the proximity is to project the examples on the two dimensional plane.
If $\mathrm{prox} \in [0,1]$ is a proximity, $1 - \mathrm{prox}$ is a distance
between examples. Breiman proposes to compute the inner products of those distances, and to plot
the eigenvalues. See details
here.
Instead, we will use the
t-SNE
which is a more modern way to visualize high-dimensional data.
Note
Step13: The next plot shows a two-dimensional projection of the test example features. The color of the points
represent the label values. Note that the label values were not available to the model.
Step15: Observations
Step16: Instructions
Step19: Prototypes
Trying to make sense of an example by looking at all its neighbors is not always efficient. Instead, we could "group" similar examples to make this task easier. This is the underlying idea behind prototypes.
Prototypes are examples, not necessarily in the original dataset, that are representative of large trends in the dataset. Looking at prototypes is a solution to understand a dataset. For more details, see the chapter 8.7 of Interpretable Machine Learning by Molnar.
Prototypes can be computed in different ways, for example using a clustering algorithm. Instead, Breiman proposed a specific solution based on a simple iterative algorithm. The algorithm is as follow
Step20: Using the methods above, let's extract the examples for 10 prototypes.
Note
Step22: For each of those prototypes, we want to display the statistics of the feature values. In this example, we will look at the quartiles of the numerical features, and the most frequent values for the categorical features.
Step23: Now, let's look at our prototypes.
Note
Step24: Try to make sense of the prototypes.
Let's extract and plot the mean 2d t-SNE projection of the elements in those prototypes.
Step25: We see that the 10 prototypes cover around half of the domain. Clusters of examples without a prototype would be best explained with more prototypes.
In the example above, we extracted the prototypes automatically. However, we can also build prototypes around specific examples.
Let's create the prototype around the example #0. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
# Install TensorFlow Dececision Forests and the dependencies used in this colab.
!pip install tensorflow_decision_forests plotly wurlitzer -U -qq
import tensorflow_decision_forests as tfdf
import matplotlib.colors as mcolors
import math
import os
import numpy as np
import pandas as pd
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from plotly.offline import iplot
import plotly.graph_objs as go
Explanation: Proximities and Prototypes with Random Forests
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/decision_forests/tutorials/proximities_colab"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/decision-forests/blob/main/documentation/tutorials/proximities_colab.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/decision-forests/blob/main/documentation/tutorials/proximities_colab.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/decision-forests/documentation/tutorials/proximities_colab.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
Leo Breiman, the author of the random forest learning algorithm, proposed a method to
measure the proximity (also known as similarity) between two examples using a pre-trained Random Forest (RF) model. He qualifies this method as <i>"[...] one of the most useful tools in random forests."</i>. In this Notebook, we implement this method and show how to use it to interpret models.
This notebook is implemented using the TensorFlow Decision Forests library. This document is easier to understand if you are familiar with the content of the Beginner colab.
Proximities
A proximity (or a similarity) between two examples is a number
indicating how "close" those two examples are. Following is an example of similarity in between the 3 examples ${e_1, e_2, e_3}$:
$$
\mathrm{proxy}(e_1, e_2) = 0.1 \
\mathrm{proxy}(e_2, e_3) = 9.6 \
\mathrm{proxy}(e_3, e_1) = 4.1 \
$$
For convenience, the proximity between examples is represented in matrix form:
| | $e_1$ | $e_2$ | $e_3$ |
|---- |---- |---- |---- |
| $e_1$ | $\mathrm{proxy}(e_1, e_1)$ | $\mathrm{proxy}(e_1, e_2)$ | $\mathrm{proxy}(e_1, e_3)$ |
| $e_2$ | $\mathrm{proxy}(e_2, e_1)$ | $\mathrm{proxy}(e_2, e_2)$ | $\mathrm{proxy}(e_2, e_3)$ |
| $e_3$ | $\mathrm{proxy}(e_3, e_1)$ | $\mathrm{proxy}(e_3, e_2)$ | $\mathrm{proxy}(e_3, e_3)$ |
Proximities are used in multiple data analysis techniques, including clustering, dimensionality reductions or nearest neighbor analysis. For this reason, it is a great tool for models and predictions interpretation.
Unfortunately, measuring the proximity between two tabular examples is not straightforward as different columns might describe different quantities. For example, try to define the proximity in between the following examples.
species | weight | num_legs | age | sex
------- | ------ | -------- | ------- | ------
cat | 2 kg | 4 | 2 y | male
dog | 6 kg | 4 | 12 y | female
spider | 5 g | 8 | 3 weeks | female
To define the similarity between two rows in the table above, you need to specify how much a difference in weight compares to a difference in the number of legs, or in ages. In addition, relations might be non-linear or be conditionnal on other columns. For example, dogs live longer than spiders, so maybe, a one year difference for a spider should not count the same one year of age for a dog.
Instead of manually defining those relations, Breiman's proximity turns a random forest model (which we know how to train on a tabular dataset), into a proximity metric.
Proximities with random forests
A random forest is a collection of decision trees. The prediction of the random the aggregation of the predictions of the individual trees. The prediction of a decision tree is computed by routing an example from the root to forest is one of the leaves according to node conditions. The leaf reached
by the example $i$ in the tree $t$ is called its active leaf and noted $\mathrm{leaf}(i,t)$
Breiman defines the proximity between two examples as the ratio of shared active leafs between those two examples. Formally, the proximity between example $i$ and example $j$ is:
$$
\mathrm{prox}(i,j) = \mathrm{prox}(j,i) = \frac{1}{|\mathrm{Trees}|} \sum_{t \in \mathrm{Trees}} \left[ \mathrm{leaf}(i,t) = \mathrm{leaf}(j,t) \right]
$$
with $\mathrm{leaf}(j,t)$ the index of the active leaf for the example $j$ in
the tree $t$.
Informally, if two examples are often routed to the same leaves (i.e. the two examples have the same active leaves), those examples are similar.
Let's implement this proximity function and use it in some examples.
Setup
End of explanation
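Before the vectorized implementation later in this notebook, here is a direct (and deliberately slow) sketch of the formula above; the toy leaves matrix stands in for the one produced later with model.predict_get_leaves().
import numpy as np
def naive_proximity(leaves):
  n = leaves.shape[0]
  prox = np.zeros((n, n))
  for i in range(n):
    for j in range(n):
      # fraction of trees in which examples i and j share the active leaf
      prox[i, j] = np.mean(leaves[i, :] == leaves[j, :])
  return prox
# Toy example: 3 examples, 4 trees.
toy_leaves = np.array([[0, 1, 2, 0],
                       [0, 1, 3, 0],
                       [5, 6, 7, 8]])
print(naive_proximity(toy_leaves))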
# Download a copy of the adult dataset.
!wget -q https://raw.githubusercontent.com/google/yggdrasil-decision-forests/main/yggdrasil_decision_forests/test_data/dataset/adult_train.csv -O /tmp/adult_train.csv
!wget -q https://raw.githubusercontent.com/google/yggdrasil-decision-forests/main/yggdrasil_decision_forests/test_data/dataset/adult_test.csv -O /tmp/adult_test.csv
# Load the dataset in memory
train_df = pd.read_csv("/tmp/adult_train.csv")
test_df = pd.read_csv("/tmp/adult_test.csv")
# , and convert it into a TensorFlow dataset.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="income")
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df, label="income")
Explanation: Train a Random Forest model
The method relies on a pre-trained random forest model. First, we train a random forest model with TensorFlow Decision Forests library on the Adult binary classification dataset. The Adult dataset is well suited for this example as it contains columns that don't have a natural way to be compared.
End of explanation
# Print the first 5 examples.
train_df.head()
Explanation: Following are the first five examples of the training dataset. Notice that
different columns represent different quantities. For example, how would you compare
the distance between relationship and age?
End of explanation
# Train a Random Forest
model = tfdf.keras.RandomForestModel(num_trees=1000)
model.fit(train_ds)
Explanation: A Random Forest is trained as follows:
End of explanation
model_inspector = model.make_inspector()
out_of_bag_accuracy = model_inspector.evaluation().accuracy
print(f"Out-of-bag accuracy: {out_of_bag_accuracy:.4f}")
Explanation: The performance of the Random Forest model is:
End of explanation
# The test accuracy is measured on the test datasets.
model.compile(["accuracy"])
test_accuracy = model.evaluate(test_ds, return_dict=True, verbose=0)["accuracy"]
print(f"Test accuracy: {test_accuracy:.4f}")
Explanation: This is an expected accuracy value for Random Forest models on this dataset. It indicates that the model is correctly trained.
We can also measure the accuracy of the model on the test datasets:
End of explanation
print("The model contains", model_inspector.num_trees(), "trees.")
print("The test dataset contains", test_df.shape[0], "examples.")
Explanation: Proximities
First, we inspect the number of trees in the model and the number of examples in the test datasets.
End of explanation
leaves = model.predict_get_leaves(test_ds)
print("The leaf indices:\n", leaves)
print("The predicted leaves have shape", leaves.shape,
"(we expect [num_examples, num_trees]")
Explanation: The method predict_get_leaves() returns the index of the active leaf for each example and each tree.
End of explanation
def compute_proximity(leaves, step_size=100):
Computes the proximity between each pair of examples.
Args:
leaves: A matrix of shape [num_example, num_tree] where the value [i,j] is
the index of the leaf reached by example "i" in the tree "j".
step_size: Size of the block of examples for the computation of the
proximity. Does not impact the results.
Returns:
The example pair-wise proximity matrix of shape [n,n] with "n" the number of
examples.
example_idx = 0
num_examples = leaves.shape[0]
t_leaves = np.transpose(leaves)
proximities = []
# Instead of computing the proximity in between all the examples at the same
# time, we compute the similarity in blocks of "step_size" examples. This
  # makes the code more efficient with the numpy broadcast.
while example_idx < num_examples:
end_idx = min(example_idx + step_size, num_examples)
proximities.append(
np.mean(
leaves[..., np.newaxis] == t_leaves[:,
example_idx:end_idx][np.newaxis,
...],
axis=1))
example_idx = end_idx
return np.concatenate(proximities, axis=1)
proximity = compute_proximity(leaves)
print("The shape of proximity is", proximity.shape)
Explanation: Here, leaves[i,j] is the index of the active leaf of the i-th
example in the j-th tree.
Note: In this notebook, we won't need the actual leaf prediction values. However, they are available through the model_inspector.
Next, we implement the $\mathrm{prox}$ equation define earlier.
Note: This step is slow.
End of explanation
proximity
Explanation: Here, proximity[i,j] is the proximity between examples i and j.
The proximity matrix:
End of explanation
distance = 1 - proximity
t_sne = TSNE(
# Number of dimensions to display. 3d is also possible.
n_components=2,
# Control the shape of the projection. Higher values create more
# distinct but also more collapsed clusters. Can be in 5-50.
perplexity=20,
metric="precomputed",
init="random",
verbose=1,
square_distances=True,
learning_rate="auto").fit_transform(distance)
Explanation: The proximity matrix has several interesting properties, notably, it is symmetrical, positive, and the diagonal elements are all 1.
Projection
Our first use of the proximity is to project the examples on the two dimensional plane.
If $\mathrm{prox} \in [0,1]$ is a proximity, $1 - \mathrm{prox}$ is a distance
between examples. Breiman proposes to compute the inner products of those distances, and to plot
the eigenvalues. See details
here.
Instead, we will use the
t-SNE
which is a more modern way to visualize high-dimensional data.
Note: We use the t-SNE's Scikit-learn implementation.
End of explanation
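For completeness, a sketch of the eigenvalue route Breiman describes (classical multidimensional scaling on 1 - prox); it is not what this notebook uses, and it is run on a random subsample because the full eigendecomposition would be expensive.
import numpy as np
rng = np.random.RandomState(0)
idx = rng.choice(proximity.shape[0], 2000, replace=False)
d2 = (1.0 - proximity[np.ix_(idx, idx)]) ** 2   # squared distances of the subsample
n = d2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n             # centering matrix
B = -0.5 * J @ d2 @ J                           # double-centered inner products
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]                # largest eigenvalues first
mds_2d = eigvec[:, order[:2]] * np.sqrt(np.maximum(eigval[order[:2]], 0.0))
print(eigval[order[:5]])                        # leading eigenvalues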
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.grid(False)
# Color the points according to the label value.
colors = (test_df["income"] == ">50K").map(lambda x: ["orange", "green"][x])
ax.scatter(
t_sne[:, 0], t_sne[:, 1], c=colors, linewidths=0.5, marker="x", s=20)
Explanation: The next plot shows a two-dimensional projection of the test example features. The color of the points
represents the label values. Note that the label values were not available to the model.
End of explanation
# docs_infra: no_execute
# Note: Run the colab (click the "Run in Google Colab" link at the top) to see
# the interactive plot.
def interactive_plot(dataset, projections):
def label_fn(row):
HTML printer over each example.
return "<br>".join([f"<b>{k}:</b> {v}" for k, v in row.items()])
labels = list(dataset.apply(label_fn, axis=1).values)
iplot({
"data": [
go.Scatter(
x=projections[:, 0],
y=projections[:, 1],
text=labels,
mode="markers",
marker={
"color": colors,
"size": 3,
})
],
"layout": go.Layout(width=600, height=600, template="simple_white")
})
interactive_plot(test_df, t_sne)
Explanation: Observations:
There are clusters of points with similar colors. Those are examples that are easy for the model to classify.
There are multiple clusters with the same color. Those multiple clusters show examples with the same label, but for "different reasons" according to the model.
Clusters with mixed colors contain examples where the model performs poorly. Earlier, we measured the model's test accuracy at ~86%; the misclassified examples likely sit in these mixed clusters.
The previous plot is a static image. Let's turn it into an interactive plot and inspect the individual examples.
End of explanation
# Number of columns and rows in the multi-plot.
num_plot_cols = 5
num_plot_rows = math.ceil(test_df.shape[1] / num_plot_cols)
# Color palette for the categorical features.
palette = list(mcolors.TABLEAU_COLORS.values())
# Create the plot
plot_size_in = 3.5
fig, axs = plt.subplots(
num_plot_rows,
num_plot_cols,
figsize=(num_plot_cols * plot_size_in, num_plot_rows * plot_size_in))
# Hide the borders.
for row in axs:
for ax in row:
ax.set_axis_off()
for col_idx, col_name in enumerate(test_df):
ax = axs[col_idx // num_plot_cols, col_idx % num_plot_cols]
colors = test_df[col_name]
if colors.dtypes in [str, object]:
# Use the color palette on categorical features.
unique_values = list(colors.unique())
colors = colors.map(
lambda x: palette[unique_values.index(x) % len(palette)])
ax.set_title(col_name)
ax.scatter(t_sne[:, 0], t_sne[:, 1], c=colors.values, linewidths=0.5,
marker="x", s=5)
Explanation: Instructions: Put the mouse pointer over some examples, and try to make sense of them. Compare them to their neighbors.
Not seeing the interactive plot?: Run the colab with this link to see the interactive plot.
Instead of coloring the examples according to the label values, we can color the examples according to each feature's values:
End of explanation
def select_example(labels, distance_matrix, k):
Selects the example with the highest number of neighbors with the same class.
Usage example:
n = 5
select_example(
np.random.randint(0,2, size=n),
np.random.uniform(size=(n,n)),
2)
Returns:
The list of neighbors for the selected example. Includes the selected
example.
partition = np.argpartition(distance_matrix, k)[:,:k]
same_label = np.mean(np.equal(labels[partition], np.expand_dims(labels, axis=1)), axis=1)
selected_example = np.argmax(same_label)
return partition[selected_example, :]
def extract_prototype_examples(labels, distance_matrix, k, num_prototypes):
Extracts a list of examples in each prototype.
Usage example:
n = 50
print(extract_prototype_examples(
labels=np.random.randint(0, 2, size=n),
distance_matrix=np.random.uniform(size=(n, n)),
k=2,
num_prototypes=3))
Returns:
An array where E[i][j] is the index of the j-th examples of the i-th
prototype.
example_idxs = np.arange(len(labels))
prototypes = []
examples_per_prototype = []
for iter in range(num_prototypes):
print(f"Iter #{iter}")
# Select the example
neighbors = select_example(labels, distance_matrix, k)
# Index of the examples in the prototype
examples_per_prototype.append(list(example_idxs[neighbors]))
# Remove the selected examples
example_idxs = np.delete(example_idxs, neighbors)
labels = np.delete(labels, neighbors)
distance_matrix = np.delete(distance_matrix, neighbors, axis=0)
distance_matrix = np.delete(distance_matrix, neighbors, axis=1)
return examples_per_prototype
Explanation: Prototypes
Trying to make sense of an example by looking at all its neighbors is not always efficient. Instead, we could "group" similar examples to make this task easier. This is the underlying idea behind prototypes.
Prototypes are examples, not necessarily in the original dataset, that are representative of large trends in the dataset. Looking at prototypes is a solution to understand a dataset. For more details, see the chapter 8.7 of Interpretable Machine Learning by Molnar.
Prototypes can be computed in different ways, for example using a clustering algorithm. Instead, Breiman proposed a specific solution based on a simple iterative algorithm. The algorithm is as follows:
Select the example surrounded with the highest number of neighbors with the same class among its k nearest neighbors.
Create a prototype example using the median feature values of the selected example and its k neighbors.
Remove those k+1 examples
Repeat
Informally, prototypes are centers of clusters in the plots we created above.
Let's implement this algorithm and look at some prototypes.
First the method that selects the example in step 1.
End of explanation
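As a small sketch of step 2 (not used verbatim below, where quartiles and most-frequent values are reported instead), a single "median" prototype could be built from a selected example and its neighbors, numerical columns only:
import numpy as np
def median_prototype(dataset, example_idxs):
  rows = dataset.iloc[example_idxs]
  return rows.select_dtypes(include=[np.number]).median()
# e.g. median_prototype(test_df, examples_per_prototype[0]) once the prototypes
# below have been extracted.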
examples_per_prototype = extract_prototype_examples(test_df["income"].values, distance, k=20, num_prototypes=10)
print(f"Found examples for {len(examples_per_prototype)} prototypes.")
Explanation: Using the methods above, let's extract the examples for 10 prototypes.
Note: The parameter k controls the number of elements in a cluster. Changing its value will impact the prototypes.
End of explanation
def build_prototype(dataset):
  """Extracts the feature statistics of a prototype.
  For numerical features, returns the quantiles.
  For categorical features, returns the most frequent value.
  Usage example:
    n = 50
    print(build_prototype(
      pd.DataFrame({
        "f1": np.random.uniform(size=n),
        "f2": np.random.uniform(size=n),
        "f3": [f"v_{x}" for x in np.random.randint(0, 2, size=n)],
        "label": np.random.randint(0, 2, size=n)
      })))
  Return:
    A prototype as a dictionary of strings.
  """
prototype = {}
for col in dataset.columns:
col_values = dataset[col]
if col_values.dtypes in [str, object]:
# A categorical feature.
# Remove the missing values
col_values = [x for x in col_values if isinstance(x,str) or not math.isnan(x)]
# Frequency of each possible value.
frequency_item, frequency_count = np.unique(col_values, return_counts=True)
top_item_idx = np.argmax(frequency_count)
top_item_probability = frequency_count[top_item_idx] / np.sum(frequency_count)
# Print the most common item.
prototype[col] = f"{frequency_item[top_item_idx]} ({100*top_item_probability:.0f}%)"
else:
# A numerical feature.
quartiles = np.nanquantile(col_values.values, [0.25, 0.5, 0.75])
# Print the 3 quantiles.
prototype[col] = f"{quartiles[0]} {quartiles[1]} {quartiles[2]}"
return prototype
Explanation: For each of those prototypes, we want to display the statistics of the feature values. In this example, we will look at the quartiles of the numerical features, and the most frequent values for the categorical features.
End of explanation
# Extract the statistics of each prototype.
prototypes = []
for examples in examples_per_prototype:
  # Prototype statistics.
prototypes.append(build_prototype(test_df.iloc[examples, :]))
prototypes = pd.DataFrame(prototypes)
prototypes
Explanation: Now, let's look at our prototypes.
Note: The table shows the "25%-quantile median 75%-quantile" for each numerical feature.
End of explanation
# Extract the projection of each prototype.
prototypes_projection = []
for examples in examples_per_prototype:
# t-SNE for each prototype.
prototypes_projection.append(np.mean(t_sne[examples,:],axis=0))
prototypes_projection = np.stack(prototypes_projection)
# Plot the mean 2d t-SNE projection of the elements in the prototypes.
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.grid(False)
# Color the points according to the label value.
colors = (test_df["income"] == ">50K").map(lambda x: ["orange", "green"][x])
ax.scatter(
t_sne[:, 0], t_sne[:, 1], c=colors, linewidths=0.5, marker="x", s=20)
# Add the prototype indices.
for i in range(prototypes_projection.shape[0]):
ax.text(prototypes_projection[i, 0],
prototypes_projection[i, 1],
f"{i}",
fontdict={"size":18},
c="red")
Explanation: Try to make sense of the prototypes.
Let's extract and plot the mean 2d t-SNE projection of the elements in those prototypes.
End of explanation
example_idx = 0
k = 20
neighbors = np.argpartition(distance[example_idx, :], k)[:k]
print(f"The example #{example_idx} is:")
print("===============================")
print(test_df.iloc[example_idx, :])
print("")
print(f"The prototype around the example #{example_idx} is:")
print("============================================")
print(pd.Series(build_prototype(test_df.iloc[neighbors, :])))
Explanation: We see that the 10 prototypes cover around half of the domain. Clusters of examples without a prototype would be best explained with more prototypes.
In the example above, we extracted the prototypes automatically. However, we can also build prototypes around specific examples.
Let's create the prototype around the example #0.
End of explanation
<END_TASK> |
15,889 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Poster popularity by state
This notebook loads data of poster viewership at the SfN 2016 annual meeting, organized by the states that were affiliated with each poster.
We find that posters from states that contribute more posters tend, on average, to draw more viewers per poster.
Import libraries and load data
Step1: 1. Summarize data by state
Step2: 2. Poster popularity vs. prevalence
Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters.
Step3: 3. Permutation tests | <ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
import pandas as pd
# Load data
df = pd.DataFrame.from_csv('./posterviewers_by_state.csv')
key_N = 'Number of people'
Explanation: Poster popularity by state
This notebook loads data of poster viewership at the SfN 2016 annual meeting, organized by the states that were affiliated with each poster.
We find that posters from states that contribute more posters tend, on average, to draw more viewers per poster.
Import libraries and load data
End of explanation
# 0. Count number of posters from each state
# Calculate mean poster popularity
states = df['State'].unique()
dict_state_counts = {'State':states,'count':np.zeros(len(states),dtype=int),'popularity':np.zeros(len(states))}
for i, s in enumerate(states):
dict_state_counts['count'][i] = int(sum(df['State']==s))
dict_state_counts['popularity'][i] = np.round(np.mean(df[df['State']==s][key_N]),3)
df_counts = pd.DataFrame.from_dict(dict_state_counts)
# Visualize dataframe
# count = total number of posters counted affiliated with that country
# popularity = average number of viewers at a poster affiliated with that country
df_counts.head()
Explanation: 1. Summarize data by state
End of explanation
print sp.stats.spearmanr(np.log10(df_counts['count']),df_counts['popularity'])
plt.figure(figsize=(3,3))
plt.semilogx(df_counts['count'],df_counts['popularity'],'k.')
plt.xlabel('Number of posters\nin the state')
plt.ylabel('Average number of viewers per poster')
plt.ylim((-.1,3.6))
plt.xlim((.9,1000))
Explanation: 2. Poster popularity vs. prevalence
Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters.
End of explanation
# Simulate randomized data
Nperm = 100
N_posters = len(df)
rand_statepop = np.zeros((Nperm,len(states)),dtype=np.ndarray)
rand_statepopmean = np.zeros((Nperm,len(states)))
for i in range(Nperm):
# Random permutation of posters, organized by state
randperm_viewers = np.random.permutation(df[key_N].values)
for j, s in enumerate(states):
rand_statepop[i,j] = randperm_viewers[np.where(df['State']==s)[0]]
rand_statepopmean[i,j] = np.mean(randperm_viewers[np.where(df['State']==s)[0]])
# True data: Calculate all p-values for the difference between 1 state's popularity and the rest
min_N_posters = 10
states_big = states[np.where(df_counts['count']>=min_N_posters)[0]]
N_big = len(states_big)
t_true_all = np.zeros(N_big)
p_true_all = np.zeros(N_big)
for i, state in enumerate(states_big):
t_true_all[i], _ = sp.stats.ttest_ind(df[df['State']==state][key_N],df[df['State']!=state][key_N])
_, p_true_all[i] = sp.stats.mannwhitneyu(df[df['State']==state][key_N],df[df['State']!=state][key_N])
pmin_pop = np.min(p_true_all[np.where(t_true_all>0)[0]])
pmin_unpop = np.min(p_true_all[np.where(t_true_all<0)[0]])
print 'Most popular state: ', states_big[np.argmax(t_true_all)], '. p=', str(pmin_pop)
print 'Least popular state: ', states_big[np.argmin(t_true_all)], '. p=', str(pmin_unpop)
# Calculate minimum p-values for each permutation
# Calculate all p and t values
t_rand_all = np.zeros((Nperm,N_big))
p_rand_all = np.zeros((Nperm,N_big))
pmin_pop_rand = np.zeros(Nperm)
pmin_unpop_rand = np.zeros(Nperm)
for i in range(Nperm):
for j, state in enumerate(states_big):
idx_use = range(len(states_big))
idx_use.pop(j)
t_rand_all[i,j], _ = sp.stats.ttest_ind(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use]))
_, p_rand_all[i,j] = sp.stats.mannwhitneyu(rand_statepop[i,j],np.hstack(rand_statepop[i,idx_use]))
# Identify the greatest significance of a state being more popular than the rest
pmin_pop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]>0)[0]])
# Identify the greatest significance of a state being less popular than the rest
pmin_unpop_rand[i] = np.min(p_rand_all[i][np.where(t_rand_all[i]<0)[0]])
# Test if most popular and least popular countries are outside of expectation
print 'Chance of a state being more distinctly popular than Minnesota: '
print sum(i < pmin_pop for i in pmin_pop_rand) / float(len(pmin_pop_rand))
print 'Chance of a state being less distinctly popular than Connecticut: '
print sum(i < pmin_unpop for i in pmin_unpop_rand) / float(len(pmin_unpop_rand))
Explanation: 3. Permutation tests: difference in popularity across states
In this code, we test whether the relative popularity or unpopularity observed for any state is outside what is expected by chance.
Here, the most popular and least popular states are identified by a nonparametric statistical test comparing the number of viewers at posters from a given state against posters from all other states.
End of explanation
<END_TASK> |
15,890 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data Preparation for Linear Regression and Classification
From Command Line - Import CSV file (From Data Preparation Part 1) into MongoDB
mongoimport --db airbnb --type csv --file listings_31Mar.csv --headerline -c Rawdata
Step1: Drop columns that are not important
Step2: Convert string date to type date, then convert it to number of days since user became a host till date of analysis (4th april 2017)
Step3: convert host_response_rate from string to integer. if "na", convert to not a number (NaN)
Step4: for i in range(0,len(a))
Step5: a = test_superhost.astype("str")
Step6: data['host_is_superhost'] = data['host_is_superhost'].astype("str")
data['host_is_superhost'] = data['host_is_superhost'].str.replace('t', '1')
data['host_is_superhost'] = data['host_is_superhost'].str.replace('f', '0')
data['host_is_superhost'] = data.host_is_superhost.astype(int)
Step7: convert columns with string names to dummy variables | <ASSISTANT_TASK:>
Python Code:
from pymongo import MongoClient
import pandas as pd
from datetime import datetime
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.airbnb
cursor = db.Rawdata.find()
data = pd.DataFrame(list(cursor))
data.head(1)
data.columns
Explanation: Data Preparation for Linear Regression and Classification
From Command Line - Import CSV file (From Data Preparation Part 1) into MongoDB
mongoimport --db airbnb --type csv --file listings_31Mar.csv --headerline -c Rawdata
End of explanation
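As an aside (my sketch, not part of the original workflow), the same import can also be done from Python instead of the shell, assuming the CSV sits in the working directory:
import pandas as pd
from pymongo import MongoClient
# Hypothetical programmatic equivalent of the mongoimport command above.
listings = pd.read_csv("listings_31Mar.csv")
client = MongoClient("localhost", 27017)
client.airbnb.Rawdata.insert_many(listings.to_dict("records"))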
data = data.drop("listing_url",axis=1)
data = data.drop("scrape_id",axis=1)
data = data.drop("name",axis=1)
data = data.drop("notes",axis=1)
data = data.drop("access",axis=1)
data = data.drop("thumbnail_url",axis=1)
data = data.drop("medium_url",axis=1)
data = data.drop("picture_url",axis=1)
data = data.drop("xl_picture_url",axis=1)
data = data.drop("host_url",axis=1)
data = data.drop("host_thumbnail_url",axis=1)
data = data.drop("host_picture_url",axis=1)
data = data.drop("street",axis=1)
data = data.drop("neighbourhood",axis=1)
data = data.drop("neighbourhood_cleansed",axis=1)
data = data.drop("city",axis=1)
data = data.drop("state",axis=1)
data = data.drop("zipcode",axis=1)
data = data.drop("market",axis=1)
data = data.drop("smart_location",axis=1)
data = data.drop("country_code",axis=1)
data = data.drop("country",axis=1)
data = data.drop("is_location_exact",axis=1)
data = data.drop("property_type",axis=1)
data = data.drop("bed_type",axis=1)
data = data.drop("amenities",axis=1)
data = data.drop("square_feet",axis=1)
data = data.drop("weekly_price",axis=1)
data = data.drop("monthly_price",axis=1)
data = data.drop("availability_30",axis=1)
data = data.drop("availability_60",axis=1)
data = data.drop("availability_90",axis=1)
data = data.drop("calendar_last_scraped",axis=1)
data = data.drop("license",axis=1)
data = data.drop("jurisdiction_names",axis=1)
data = data.drop("first_review",axis=1)
data = data.drop("last_review",axis=1)
data = data.drop("Shampoo",axis=1)
data = data.drop("nearest_attr_lat",axis=1)
data = data.drop("nearest_attr_long",axis=1)
data = data.drop("Dryer",axis=1)
data = data.drop("Doorman",axis=1)
data = data.drop("Essentials",axis=1)
#data = data.drop("translation missing: en.hosting_amenity_50",axis=1)
data = data.drop("Washer",axis=1)
data = data.drop("Washer / Dryer",axis=1)
data = data.drop("First aid kit",axis=1)
data = data.drop("Smoke detector",axis=1)
#data = data.drop("translation missing: en.hosting_amenity_49",axis=1)
data = data.drop("Hangers",axis=1)
data = data.drop("Fire extinguisher",axis=1)
data = data.drop("Iron",axis=1)
data = data.drop("Carbon monoxide detector",axis=1)
data = data.drop("Wireless Internet",axis=1)
data = data.drop("Laptop friendly workspace",axis=1)
data = data.drop("Hot tub",axis=1)
data = data.drop("Dog(s)",axis=1)
data = data.drop("Cat(s)",axis=1)
data = data.drop("Buzzer/wireless intercom",axis=1)
data = data.drop("Hair dryer",axis=1)
data = data.drop("Safety card",axis=1)
data = data.drop("last_scraped",axis=1)
data = data.drop("house_rules",axis=1)
data = data.drop("interaction",axis=1)
data = data.drop("transit",axis=1)
data = data.drop("neighborhood_overview",axis=1)
data = data.drop("experiences_offered",axis=1)
data = data.drop("id",axis=1)
data = data.drop("summary",axis=1)
data = data.drop("space",axis=1)
data = data.drop("description",axis=1)
data = data.drop("host_id",axis=1)
data = data.drop("host_name",axis=1)
data = data.drop("host_about",axis=1)
data = data.drop("latitude",axis=1)
data = data.drop("longitude",axis=1)
data = data.drop("host_neighbourhood",axis=1)
data = data.drop("host_location",axis=1)
data = data.drop("calendar_updated",axis=1)
data = data.drop("host_listings_count",axis=1)
#data = data.drop("Unnamed: 0",axis=1)
data = data.drop("calculated_host_listings_count",axis=1)
data = data.drop("host_acceptance_rate",axis=1)
data.head(1)
data = data.drop("",axis=1)
data.dtypes
data.host_response_rate.tail()
Explanation: Drop columns that are not important
End of explanation
x = pd.to_datetime(data.host_since)
x.head()
today = '2016-04-01'
y = datetime.strptime('2017-04-01', '%Y-%m-%d')
x[16]
y-x
data["host_since_days"] = y-x
data = data.drop("host_since",axis=1)
data = data.drop("host_response_time",axis=1)
data.head(1)
Explanation: Convert the string date to a date type, then convert it to the number of days from when the user became a host until the date of analysis (4 April 2017)
End of explanation
test_host_response = data.host_response_rate
a = test_host_response.map(lambda x: str(x)[:-1])
for i in range(0,len(a)):
print(i)
if a[i] == "na":
continue
if a[i] == "":
continue
else:
a[i] = int(a[i])
a[i] = a[i]/100
Explanation: convert host_response_rate from string to integer. if "na", convert to not a number (NaN)
End of explanation
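As a side note (my sketch, not the author's code), the same cleaning can be done in a vectorized way, avoiding the explicit loop above:
# Hypothetical vectorized alternative: strip the trailing '%' and coerce
# non-numeric entries such as "na" to NaN, then rescale to a fraction.
rate = data["host_response_rate"].astype(str).str.rstrip("%")
rate = pd.to_numeric(rate, errors="coerce") / 100.0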
data["host_response_rate"] = a
test_superhost = data['host_is_superhost']
Explanation: for i in range(0,len(a)):
if a[i] == "na":
a[i] = float("nan")
End of explanation
a = test_superhost.str.replace('t', '1')
a = a.str.replace('f', '0')
test_superhost.isnull().sum()
Explanation: a = test_superhost.astype("str")
End of explanation
data['host_is_superhost'] = a
data = data.drop("host_verifications",axis=1)
data.head()
Explanation: data['host_is_superhost'] = data['host_is_superhost'].astype("str")
data['host_is_superhost'] = data['host_is_superhost'].str.replace('t', '1')
data['host_is_superhost'] = data['host_is_superhost'].str.replace('f', '0')
data['host_is_superhost'] = data.host_is_superhost.astype(int)
End of explanation
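The 't'/'f' flag columns are converted one at a time below; a small helper (my sketch, not in the original notebook) could do the same mapping in one place:
def tf_to_int(series):
    # Map Airbnb's 't'/'f' flags to 1/0; anything else becomes NaN.
    return series.map({"t": 1, "f": 0})
# e.g. data["host_is_superhost"] = tf_to_int(data["host_is_superhost"])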
df_dummies1= pd.get_dummies(data, prefix='neighbourhood', columns=['neighbourhood_group_cleansed'])
df_dummies2= pd.get_dummies(df_dummies1, prefix='roomtype', columns=['room_type'])
test_profilepic = df_dummies2['host_has_profile_pic']
a = test_profilepic.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies2["host_has_profile_pic"] = a
test_host_identity_verified = df_dummies2['host_identity_verified']
a = test_host_identity_verified.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies2["host_identity_verified"] = a
df_dummies2 = df_dummies2.drop("has_availability",axis=1)
df_dummies2 = df_dummies2.drop("requires_license",axis=1)
test_instant_bookable = df_dummies2['instant_bookable']
a = test_instant_bookable.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies2['instant_bookable'] = a
df_dummies2.head()
df_dummies3= pd.get_dummies(df_dummies2, prefix='cancellation_policy', columns=['cancellation_policy'])
test_instant_bookable = df_dummies3['require_guest_profile_picture']
a = test_instant_bookable.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies3['require_guest_profile_picture'] = a
df_dummies3["require_guest_phone_verification"].head()
df_dummies3["require_guest_phone_verification"].isnull().sum()
test_phone = df_dummies3['require_guest_phone_verification']
a = test_phone.str.replace('t', '1')
a = a.str.replace('f', '0')
df_dummies3['require_guest_phone_verification'] = a
df_dummies3.head()
df_dummies3.nearest_attr_rating.head()
df_dummies3.nearest_attr_rating.isnull().sum()
len(df_dummies3)
data.host_since_days[0]
from datetime import timedelta
data.host_since_days[0].days
data.host_since_days.head()
data.host_since_days[1]
data['host_since_days'] = data['host_since_days'].apply(lambda x: x.days if pd.isnull(x) == False else 0)
#pd.DataFrame.to_csv(df_dummies3, "preprocessed_data.csv")
Explanation: convert columns with string names to dummy variables
End of explanation
<END_TASK> |
15,891 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Day 7
Day 7.1
Approach 1
Step2: Test
Step4: This approach seems correct, but it creates huge expressions along the way that become harder and harder to parse, so the time to reach a final expression that wraps up all the computations is very long. Two ideas to pursue
Step5: Test 0
Step6: Test 1
Step7: Solution
Step9: Approach 3
Step10: Test
Step11: Although the approach passes the test, it does not end in reasonable time for the full input.
Approach 4
Step12: Test
Step13: Solution
Step14: Although this approach is more natural than defining a LazyWrapper in Python, it takes quite a lot of time to execute, so this is not a very cool solution after all.
Day 7.2 | <ASSISTANT_TASK:>
Python Code:
binary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}
operators = binary_command.values()
import csv
def translate(l):
return [binary_command[a] if a in binary_command else a for a in l]
def display(input_file):
    """produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], ' '.join(list(translate(line[:-2])))))
return dict(commands)
import re
def extract_variables(expr):
varbls = []
regex_pattern = '\s|\\)|\\('
l = re.split(regex_pattern, expr)
for a in l:
if (a not in operators) and (not a.isnumeric()) and (a != ''):
varbls.append(a)
return set(varbls)
def create_instance(wire):
exec_python = commands[wire]
pending = extract_variables(commands[wire])
count = 0
while pending and (count < 200):
s = pending.pop()
expr = commands[s]
exec_python = re.sub('({0})'.format(s), '( {0} )'.format(expr), exec_python)
pending = pending.union(extract_variables(exec_python))
count += 1
return wire + ' = ' + exec_python
def evaluate(var):
instance = create_instance(var)
exec(instance)
return np.uint16(locals()[var])
Explanation: Day 7
Day 7.1
Approach 1: Create a single expression by recursive substitution, then evaluate!
End of explanation
commands = display('inputs/input7.test.txt')
def test():
assert(evaluate('d') == 72)
assert(evaluate('e') == 507)
assert(evaluate('f') == 492)
assert(evaluate('g') == 114)
assert(evaluate('h') == 65412)
assert(evaluate('i') == 65079)
assert(evaluate('x') == 123)
assert(evaluate('y') == 456)
test()
Explanation: Test
End of explanation
import numpy as np
def RSHIFT(a, b):
result = np.uint16(a) >> int(b)
return int(result)
def LSHIFT(a, b):
result = np.uint16(a) << int(b)
return int(result)
def OR(a, b):
result = np.uint16(a) | np.uint16(b)
return int(result)
def AND(a, b):
result = np.uint16(a) & np.uint16(b)
return int(result)
def NOT(a):
result = ~ np.uint16(a)
return int(result)
import csv
def display(input_file):
    """produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], line[:-2]))
return dict(commands)
def evaluate(wire):
known = {}
while wire not in known:
if wire in known:
break
for k, v in commands.items():
if (len(v) == 1) and (v[0].isnumeric()) and (k not in known):
known[k] = int(v[0])
elif (len(v) == 1) and (v[0] in known) and (k not in known):
known[k] = known[v[0]]
elif ('AND' in v) and (v[0] in known) and (v[2] in known):
known[k] = AND(known[v[0]], known[v[2]])
elif ('AND' in v) and (v[0].isnumeric()) and (v[2] in known):
known[k] = AND(int(v[0]), known[v[2]])
elif ('AND' in v) and (v[0] in known) and (v[2].isnumeric()):
known[k] = AND(known[v[0]], int(v[2]))
elif ('OR' in v) and (v[0] in known) and (v[2] in known):
known[k] = OR(known[v[0]], known[v[2]])
elif ('OR' in v) and (v[0].isnumeric()) and (v[2] in known):
known[k] = OR(int(v[0]), known[v[2]])
elif ('OR' in v) and (v[0] in known) and (v[2].isnumeric()):
known[k] = OR(known[v[0]], int(v[2]))
elif ('LSHIFT' in v) and (v[0] in known):
known[k] = LSHIFT(known[v[0]], v[2])
elif ('RSHIFT' in v) and (v[0] in known):
known[k] = RSHIFT(known[v[0]], v[2])
elif ('NOT' in v) and (v[1] in known):
known[k] = NOT(known[v[1]])
return known[wire]
Explanation: This approach seems correct, but it creates huge expressions along the way that become harder and harder to parse, so the time to reach a final expression that wraps up all the computations is very long. Two ideas to pursue: i) concurrent evaluation of expressions; ii) define lazy variables/functions that collect all the dependencies of the circuit and start firing upon request.
Approach 2: Concurrent evaluation from known variables.
The solution provided hereto owes credit to this source: https://www.reddit.com/r/adventofcode/comments/5id6w0/2015_day_7_part_1_python_wrong_answer/
End of explanation
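As a variation (my sketch, not the original solution), the long if/elif chain in evaluate can be collapsed with an operator lookup table; it assumes the same commands dict and the bitwise helpers defined above:
def evaluate_compact(wire, commands):
    # Same fixed-point idea as evaluate(), but with a dispatch table.
    ops = {'AND': AND, 'OR': OR, 'LSHIFT': LSHIFT, 'RSHIFT': RSHIFT}
    known = {}
    def value(tok):
        # Literals resolve immediately; wires resolve once they are known.
        return int(tok) if tok.isnumeric() else known.get(tok)
    while wire not in known:
        for k, v in commands.items():
            if k in known:
                continue
            if len(v) == 1 and value(v[0]) is not None:
                known[k] = value(v[0])
            elif len(v) == 2 and value(v[1]) is not None:   # ['NOT', x]
                known[k] = NOT(value(v[1]))
            elif len(v) == 3 and value(v[0]) is not None and value(v[2]) is not None:
                known[k] = ops[v[1]](value(v[0]), value(v[2]))
    return known[wire]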
commands = display('inputs/input7.test1.txt')
commands
evaluate('a')
Explanation: Test 0
End of explanation
commands = display('inputs/input7.test2.txt')
commands
test()
Explanation: Test 1
End of explanation
commands = display('inputs/input7.txt')
evaluate('a')
Explanation: Solution
End of explanation
import csv
import numpy as np
def display(input_file):
    """produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], line[:-2]))
return dict(commands)
class LazyVar(object):
def __init__(self, func):
self.func = func
self.value = None
def __call__(self):
if self.value is None:
self.value = self.func()
return self.value
binary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}
def translate(l):
translated = []
for a in l:
if a in binary_command:
b = binary_command[a]
elif a.isnumeric():
b = 'np.uint16({})'.format(a)
else:
b = '{}.func()'.format('var_' + a)
translated.append(b)
return translated
Explanation: Approach 3: With Lazy Variable Wrapper (Python)
End of explanation
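A minimal illustration (mine) of the caching that LazyVar provides through __call__; note that the expressions generated by translate() above call .func() directly, which bypasses this cache and recomputes the value every time:
slow = LazyVar(lambda: sum(range(10**6)))
print(slow())   # first call computes and stores the value
print(slow())   # second call reuses the cached value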
commands = display('inputs/input7.test2.txt')
commands = display('inputs/input7.test2.txt')
for k, v in commands.items():
command_str = '{0} = LazyVar(lambda: {1})'.format('var_' + k, ''.join(translate(v)))
print(command_str)
exec(command_str)
def test():
assert(var_d.func() == 72)
assert(var_e.func() == 507)
assert(var_f.func() == 492)
assert(var_g.func() == 114)
assert(var_h.func() == 65412)
assert(var_i.func() == 65079)
assert(var_x.func() == 123)
assert(var_y.func() == 456)
test()
Explanation: Test
End of explanation
def rscript_command(var, l):
vocab = {'AND' : 'bitwAnd',
'OR' : 'bitwOr',
'LSHIFT' : 'bitwShiftL',
'RSHIFT' : 'bitwShiftR'}
if len(l) == 3:
func = vocab[l[1]]
arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'
arg2 = l[2] if l[2].isdigit() else 'var_' + l[2] + '()'
return 'var_{0} <- function(a={1}, b={2})'.format(var, arg1, arg2) + ' {' + '{0}(a,b)'.format(func) + '}'
elif len(l) == 2:
func = 'bitwNot'
arg1 = l[1] if l[1].isdigit() else 'var_' + l[1] + '()'
return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + '{0}(a)'.format(func) + '}'
else:
arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'
return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + 'a' + '}'
def generate_rscript(commands, target):
with open('day7_commands.R', 'wt') as f:
for k, v in commands.items():
f.write(rscript_command(k, v)+'\n')
f.write('var_' + target + '()')
Explanation: Although the approach passes the test, it does not end in reasonable time for the full input.
Approach 4: With Lazy Evaluation in R
The approach now is to exploit the lazy evaluation capabilities in R. So we leverage Python to create an R script that does the job.
End of explanation
commands = display('inputs/input7.test2.txt')
generate_rscript(commands, 'd')
! cat day7_commands.R
!Rscript day7_commands.R
Explanation: Test
End of explanation
commands = display('inputs/input7.txt')
generate_rscript(commands, 'a')
! cat day7_commands.R
!Rscript day7_commands.R
Explanation: Solution
End of explanation
commands = display('inputs/input7.txt')
commands['b'] = ['16076']
evaluate('a')
Explanation: Although this approach is more natural than defining a LazyWrapper in Python, it takes quite a lot of time to execute, so this is not a very cool solution after all.
Day 7.2
End of explanation
<END_TASK> |
15,892 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Head data is generated for a pumping test in a two-aquifer model. The well starts pumping at time $t=0$ with a discharge $Q=800$ m$^3$/d. The head is measured in an observation well 10 m from the pumping well. The thickness of the aquifer is 20 m. Questions
Step1: Model as semi-confined | <ASSISTANT_TASK:>
Python Code:
def generate_data():
# 2 layer model with some random error
ml = ModelMaq(kaq=[10, 20], z=[0, -20, -22, -42], c=[1000],
Saq=[0.0002, 0.0001], tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve()
t = np.logspace(-2, 1, 100)
h = ml.head(10, 0, t)
plt.figure()
r = 0.01 * np.random.randn(100)
n = np.zeros_like(r)
alpha = 0.8
for i in range(1, len(n)):
n[i] = 0.8 * n[i - 1] + r[i]
ho = h[0] + n
plt.plot(t, ho, '.')
data = np.zeros((len(ho), 2))
data[:, 0] = t
data[:, 1] = ho
#np.savetxt('pumpingtestdata.txt', data, fmt='%2.3f', header='time (d), head (m)')
return data
np.random.seed(11)
data = generate_data()
to = data[:, 0]
ho = data[:, 1]
def func(p, to=to, ho=ho, returnmodel=False):
k = p[0]
S = p[1]
ml = ModelMaq(kaq=k, z=[0, -20], Saq=S, tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
if returnmodel:
return ml
h = ml.head(10, 0, to)
return np.sum((h[0] - ho) ** 2)
from scipy.optimize import fmin
lsopt = fmin(func, [10, 1e-4])
print('optimal parameters:', lsopt)
print('rmse:', np.sqrt(func(lsopt) / len(ho)))
ml = func(lsopt, returnmodel=True)
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=10, pmin=0.1, pmax=1000)
cal.set_parameter(name='Saq0', initial=1e-4, pmin=1e-5, pmax=1e-3)
cal.series(name='obs1', x=10, y=0, layer=0, t=to, h=ho)
cal.fit(report=False)
print('rmse:', cal.rmse())
cal.parameters.style.set_precision(3)
Explanation: Head data is generated for a pumping test in a two-aquifer model. The well starts pumping at time $t=0$ with a discharge $Q=800$ m$^3$/d. The head is measured in an observation well 10 m from the pumping well. The thickness of the aquifer is 20 m. Questions:
Determine the optimal values of the hydraulic conductivity and specific storage coefficient of the aquifer when the aquifer is approximated as confined. Use a least squares approach and make use of the fmin function of scipy.optimize to find the optimal values. Plot the data with dots and the best-fit model in one graph. Print the optimal values of $k$ and $S_s$ to the screen as well as the root mean squared error of the residuals.
Repeat Question 1 but now approximate the aquifer as semi-confined. Plot the data with dots and the best-fit model in one graph. Print the optimal values of $k$, $S_s$, and $c$ to the screen, as well as the root mean squared error of the residuals. Is the semi-confined model a better fit than the confined model?
End of explanation
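For reference (my note), the quantity printed as rmse in the code above is the root mean squared error of the residuals,
$$ \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(h_{\rm model}(t_i) - h_{\rm obs}(t_i)\bigr)^2}, $$
which is exactly what np.sqrt(func(lsopt) / len(ho)) computes, since func returns the sum of squared residuals.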
def func2(p, to=to, ho=ho, returnmodel=False):
k = p[0]
S = p[1]
c = p[2]
ml = ModelMaq(kaq=k, z=[2, 0, -20], Saq=S, c=c, topboundary='semi',
tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
if returnmodel:
return ml
h = ml.head(10, 0, to)
return np.sum((h[0] - ho) ** 2)
lsopt2 = fmin(func2, [10, 1e-4, 1000])
print('optimal parameters:', lsopt2)
print('rmse:', np.sqrt(func2(lsopt2) / len(ho)))
ml = func2(lsopt2, returnmodel=True)
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
ml = ModelMaq(kaq=10, z=[2, 0, -20], Saq=1e-4, c=1000, topboundary='semi', tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=10)
cal.set_parameter(name='Saq0', initial=1e-4)
cal.set_parameter(name='c0', initial=1000)
cal.series(name='obs1', x=10, y=0, layer=0, t=to, h=ho)
cal.fit(report=False)
cal.parameters.style.set_precision(5)
cal.rmse(), ml.aq.kaq
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
Explanation: Model as semi-confined
End of explanation
<END_TASK> |
15,893 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
SVM - Kernel Transforms
One of the conditions in the Linear SVM is that the points should be linearly separable. However, this is not always possible in classification problems
Building Intuition
Step1: Non Linear Separable Points
Let's see an example of this in a two-dimensional space
Step2: Non-Linear Transformations
If the points are not linearly separable, then we can use some transformation to make them linearly separable. This is called a Non-Linear Transformation.
Here we can use the square transformation to see the output
$$ z = x_1^2 + x_2^2 $$
Step3: Kernel Transformations
The kernel function allows us to do this without needing to apply these transformations ourselves. The kernel can be any of the following | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (6, 6)
from ipywidgets import interact
Explanation: SVM - Kernel Transforms
One of the conditions in the Linear SVM is that the points should be linearly separable. However, this is not always possible in classification problems
Building Intuition
End of explanation
np.random.seed(1234)
def plot_points(p):
theta = np.random.uniform(0, 2*np.pi*2, p)
raidus = np.array(np.random.randn(p)*2 + 25)
circle = [[np.cos(t), np.sin(t)] for t in theta]
x0 = raidus.reshape(p,1) * circle
x1 = np.random.randn(p, 2)*4
X = np.r_[x0,x1]
y = [0] * p + [1] * p
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu, s=40)
#plt.xlim(-20,20)
#plt.ylim(-20,20)
plt.show()
plot_points(20)
Explanation: Non Linear Separable Points
Let's see an example of this in a two-dimensional space
End of explanation
def plot_points_transforms(p):
theta = np.random.uniform(0, 2*np.pi*2, p)
raidus = np.array(np.random.randn(p)*2 + 25)
circle = [[np.cos(t), np.sin(t)] for t in theta]
x0 = raidus.reshape(p,1) * circle
x1 = np.random.randn(p, 2)*4
X = np.r_[x0,x1]
y = [0] * p + [1] * p
z = np.square(X).sum(axis =1)
#plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu, s=40)
plt.scatter(X[:, 0], z, c=y, cmap=plt.cm.RdBu, s=40)
#plt.xlim(-20,20)
#plt.ylim(-20,20)
plt.show()
plot_points_transforms(30)
Explanation: Non-Linear Transformations
If the points are not linearly separable, then we can use some transformation to make them linearly separable. This is called a Non-Linear Transformation.
Here we can use the square transformation to see the output
$$ z = x_1^2 + x_2^2 $$
End of explanation
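A quick numeric check of this idea (my example with made-up points): a point near the origin gets a small $z$, a point on the outer ring gets a large one, so a single threshold on $z$ separates the two classes.
inner = np.array([0.5, -1.0])    # roughly where the inner blob lives
outer = np.array([24.0, 7.0])    # roughly on the ring of radius ~25
print(np.sum(inner**2), np.sum(outer**2))   # 1.25 vs 625.0 -> separable by one threshold on z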
from sklearn import svm
def plot_kernels(p, k):
theta = np.random.uniform(0, 2*np.pi*2, p)
raidus = np.array(np.random.randn(p)*2 + 25)
circle = [[np.cos(t), np.sin(t)] for t in theta]
x0 = raidus.reshape(p,1) * circle
x1 = np.random.randn(p, 2)*4
X = np.r_[x0,x1]
y = [0] * p + [1] * p
print(k)
lin_svc = svm.SVC(kernel='linear', C=1)
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=1)
poly_svc = svm.SVC(kernel='poly', degree=2, C=1)
if k == "linear":
clf = lin_svc
elif k == "rbf":
clf = rbf_svc
else:
clf = poly_svc
clf.fit(X,y)
clf.predict(X)
# plot the boundaries
x_min, x_max = -30, 30
y_min, y_max = -30, 30
step = 0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, step), np.arange(y_min, y_max, step))
xxyy = np.c_[xx.ravel(), yy.ravel()]
Z = clf.predict(xxyy)
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.viridis, alpha = 0.5)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu, s=40)
plt.xlim(-30,30)
plt.ylim(-30,30)
plt.show()
plot_kernels(30,"linear")
plot_kernels(30,"poly")
plot_kernels(30,"rbf")
Explanation: Kernel Transformations
The kernel function allows us to do this without needing to apply these transformations ourselves. The kernel can be any of the following:
- linear: $\langle x, x'\rangle.$
- polynomial: $(\gamma \langle x, x'\rangle + r)^d$. d is specified by keyword degree, r by coef0.
- rbf: $\exp(-\gamma |x-x'|^2)$. $\gamma$ is specified by keyword gamma, must be greater than 0.
- sigmoid $(\tanh(\gamma \langle x,x'\rangle + r))$, where r is specified by coef0.
End of explanation
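To tie the formulas to code, here is a small check (my sketch) that evaluates the RBF kernel by hand and via scikit-learn's pairwise helper:
from sklearn.metrics.pairwise import rbf_kernel
x = np.array([[1.0, 2.0]])
x_prime = np.array([[2.0, 0.0]])
gamma = 0.7
manual = np.exp(-gamma * np.sum((x - x_prime)**2))
print(manual, rbf_kernel(x, x_prime, gamma=gamma)[0, 0])   # the two values agree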
<END_TASK> |
15,894 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Young Star Magnetic Models
Convergence of magnetic models is improved with a new treatment of the peak magnetic field strength definition. Previously, it was defined as either $R_{\rm peak} = 0.5R_{\star}$ or $R_{\rm peak} = R_{\rm tach}$, where $R_{\rm tach}$ is the radial location of the interface region between the stellar radiation and convection zones (i.e., the tachocline). This caused problems for young star models as models start off fully convective but develop radiative cores as the central temperature increases throughout its gravitational contraction. Magnetic fields therefore jumped rapidly from a fully convective treatment to a partially convective treatment, leading to excessively large interior magnetic field strengths. To avoid this problem, the peak magnetic field strength is treated as either $R_{\rm peak} = 0.5R_{\star}$ or $R_{\rm peak} = R_{\rm tach}$, whichever is larger, in all cases.
Two small grids of magnetic models are computed with GS98 and GAS07 solar abundances. These may be incorporated into the Young Star manuscript, where we present models of young stars that have now been used in several publications (e.g., Malo et al. 2014; Herczeg & Hillenbrand 2015). However, these models are computed, specifically, at the request of I. Song, who wishes to incorporate magnetic models into an analysis. The tracks themselves will not be incorporated into the GitHub repo, as publishing the full grid would require too much disk space, but they are available upon request by creating an "issue".
Update
Step1: Magnetic Mass Tracks
We'll start with loading mass tracks from the GAS07 solar abundance subset. These adopt surface boundary conditions from the MARCS model atmosphere structures. While we typically recommend surface boundary conditions be attached at an optical depth where $\tau_{\rm ross} \ge 50$, the magnetic models are computed by fitting surface boundary conditions where $\tau_{\rm ross} \ge 10$. Magnetic fields largely affect the super-adiabatic layers near the stellar surface, with deeper field strengths playing a less critical role (Feiden & Chaboyer 2013, 2014). However, the motivation for attaching the boundary conditions at larger optical depths is to provide a better treatment of super-adiabatic layers where radiation and convection are both significant contributors to the total energy flux (Chabrier & Baraffe 1997), which is in opposition to our efforts of including the effects of magnetic fields.
We provide a compromise by fixing the surface boundary conditions at a higher layer in the star. This provides a sufficiently large super-adiabatic layer to give the magnetic field a reasonable influence, while still providing a reliable estimate of the surface conditions that help set the overall thermal structure of the star.
Step2: Magnetic Isochrones
Process the magnetic mass tracks into isochrones. Since mass tracks are computed with a relatively coarse mass resolution ($0.05 M_{\odot}$), spline interpolation is used to smooth the resulting isochrones with a finer mass resolution.
Below, a grid of isochrones is computed from 5 to 30 Myr in steps of 1 Myr.
Step3: Dartmouth & MARCS; Solar abundance
Step4: Interpolate isochrones onto a finer mass grid.
Step5: Magnetic isochrones are stored in the directory files/ and follow the format outlined in the two code snippets above. We can take a quick look at some of the properties of these isochrones and how they compare to standard stellar evolution isochrones (i.e., without a magnetic perturbation).
A tarball with all of the above computed isochrones can be found in files/dmestar_gas07_z+0.00_a+0.00_mag25kG.tgz.
Dartmouth & PHOENIX; Solar abundance
Step6: Interpolate onto a finer mass grid,
Step7: Magnetic isochrones are stored in the directory files/ and follow the format outlined in the two code snippets above. We can take a quick look at some of the properties of these isochrones and how they compare to standard stellar evolution isochrones (i.e., without a magnetic perturbation).
A tarball with all of the above computed isochrones can be found in files/dmestar_gs98_z+0.00_a+0.00_mag25kG.tgz.
Simple Diagnostic Plots
Here are some simple diagnostic figures to assess that isochrones look smooth and do not deviate too significantly from expectation (i.e., they're smooth and properties change monotonically). Plot a few isochrones
Step8: There looks to be some noise in the GS98 isochrones at the highest temperatures, which is likely related to the convergence issues with those above $0.90 M_{\odot}$. Nevertheless, the isochrones appear quite smooth.
Quick look at Li depletion curves. ~~(note | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Young Star Magnetic Models
Convergence of magnetic models is improved with a new treatment of the peak magnetic field strength definition. Previously, it was defined as either $R_{\rm peak} = 0.5R_{\star}$ or $R_{\rm peak} = R_{\rm tach}$, where $R_{\rm tach}$ is the radial location of the interface region between the stellar radiation and convection zones (i.e., the tachocline). This caused problems for young star models as models start off fully convective but develop radiative cores as the central temperature increases throughout its gravitational contraction. Magnetic fields therefore jumped rapidly from a fully convective treatment to a partially convective treatment, leading to excessively large interior magnetic field strengths. To avoid this problem, the peak magnetic field strength is treated as either $R_{\rm peak} = 0.5R_{\star}$ or $R_{\rm peak} = R_{\rm tach}$, whichever is larger, in all cases.
Two small grids of magnetic models are computed with GS98 and GAS07 solar abundances. These may be incorporated into the Young Star manuscript, where we present models of young stars that have now been used in several publications (e.g., Malo et al. 2014; Herczeg & Hillenbrand 2015). However, these models are computed, specifically, at the request of I. Song, who wishes to incorporate magnetic models into an analysis. The tracks themselves will not be incorporated into the GitHub repo, as publishing the full grid would require too much disk space, but they are available upon request by creating an "issue".
Update: raw magnetic mass tracks are contained in a tarball in the files/ directory with the extension _mtrks.tgz.
End of explanation
masses = np.arange(0.1, 0.96, 0.05) # list of masses
Explanation: Magnetic Mass Tracks
We'll start with loading mass tracks from the GAS07 solar abundance subset. These adopt surface boundary conditions from the MARCS model atmosphere structures. While we typically recommend surface boundary conditions be attached at an optical depth where $\tau_{\rm ross} \ge 50$, the magnetic models are computed by fitting surface boundary conditions where $\tau_{\rm ross} \ge 10$. Magnetic fields largely affect the super-adiabatic layers near the stellar surface, with deeper field strengths playing a less critical role (Feiden & Chaboyer 2013, 2014). However, the motivation for attaching the boundary conditions at larger optical depths is to provide a better treatment of super-adiabatic layers where radiation and convection are both significant contributors to the total energy flux (Chabrier & Baraffe 1997), which is in opposition to our efforts of including the effects of magnetic fields.
We provide a compromise by fixing the surface boundary conditions at a higher layer in the star. This provides a sufficiently large super-adiabatic layer to give the magnetic field a reasonable influence, while still providing a reliable estimate of the surface conditions that help set the overall thermal structure of the star.
End of explanation
from scipy.interpolate import interp1d
ages = np.arange(5.0e6, 3.1e7, 1.0e6) # ages requested
Explanation: Magnetic Isochrones
Process the magnetic mass tracks into isochrones. Since mass tracks are computed with a relatively coarse mass resolution ($0.05 M_{\odot}$), spline interpolation is used to smooth the resulting isochrones with a finer mass resolution.
Below, a grid of isochrones is computed from 5 to 30 Myr in steps of 1 Myr.
End of explanation
# open output file objects
output_files = [open('files/dmestar_{:07.1f}myr_gas07_z+0.00_a+0.00_mag25kG.iso'.format(age/1.0e6), 'w')
for age in ages]
trk_directory = '../../evolve/dmestar/trk/gas07/p000/a0/amlt2040/mag25kG'
for mass in masses:
trk_filename = 'm{:04.0f}_GAS07_p000_p0_y26_mlt2.040_mag25kG.trk'.format(mass*1000.)
try:
gas07_trk = np.genfromtxt('{0}/{1}'.format(trk_directory, trk_filename), usecols=(0, 1, 2, 3, 4, 8))
except IOError:
continue
# extract only relevant age chunk for easier interpolation
gas07_trk = np.array([time_step for time_step in gas07_trk if 1.0e6 <= time_step[0] <= 5.0e7])
# generate linear interpolation curve as a function of age
try:
icurve = interp1d(gas07_trk[:, 0], gas07_trk[:, 1:], kind='linear', axis=0)
except IndexError:
continue
# extract properties at the requested age
trk_props = icurve(ages)
i = 0
for props in trk_props:
s = '{:6.3f}'.format(mass)
for prop in props:
if np.isnan(prop) or prop < -12.0:
prop = -12.0
s += '{:14.6f}'.format(prop)
s += '\n'
output_files[i].write(s)
i += 1
#print "{:4.2f} Mo Track Processed.".format(mass)
# close output files
for f in output_files:
f.close()
Explanation: Dartmouth & MARCS; Solar abundance: Grevesse, Asplund, & Sauval 2007
End of explanation
fine_mass_grid = np.arange(0.1, 0.95, 0.02)
for age in ages:
iso_filename = 'files/dmestar_{:07.1f}myr_gas07_z+0.00_a+0.00_mag25kG.iso'.format(age/1.0e6)
isochrone = np.genfromtxt(iso_filename)
# generate interpolation curve
icurve = interp1d(isochrone[:,0], isochrone[:,1:], axis=0, kind='slinear')
# interpolate onto a finer mass grid
fine_isochrone = icurve(fine_mass_grid)
fine_isochrone = np.column_stack((fine_mass_grid, fine_isochrone))
# write header
header = 'Dartmouth Stellar Evolution Model: Quick Isochrone \n\n'
header += 'Age = {:7.1f} Myr [Fe/H] = {:+5.2f} [a/Fe] = {:+5.2f} \n\n'.format(age/1.e6, 0.0, 0.0)
header += '{:^14} {:^14} {:^14} {:^14} {:^14} {:^14}'.format('Mass', 'log(Teff)', 'log(g)', 'log(L/Lo)',
'log(R/Ro)', 'A(Li)')
# overwrite original file
np.savetxt(iso_filename, fine_isochrone, fmt='%14.6f', header=header)
Explanation: Interpolate isochrones onto a finer mass grid.
End of explanation
masses = np.arange(0.10, 0.86, 0.05) # higher masses did not converge (investigating)
# open output file objects
output_files = [open('files/dmestar_{:07.1f}myr_gs98_z+0.00_a+0.00_mag25kG.iso'.format(age/1.0e6), 'w')
for age in ages]
trk_directory = '../../evolve/dmestar/trk/gs98/p000/a0/amlt1884/mag25kG'
for mass in masses:
trk_filename = 'm{:04.0f}_GS98_p000_p0_y28_mlt1.884_mag25kG.trk'.format(mass*1000.)
try:
gs98_trk = np.genfromtxt('{0}/{1}'.format(trk_directory, trk_filename), usecols=(0, 1, 2, 3, 4, 8))
except IOError:
continue
# extract only relevant age chunk for easier interpolation
gs98_trk = np.array([time_step for time_step in gs98_trk if 1.0e6 <= time_step[0] <= 5.0e7])
# generate linear interpolation curve as a function of age
try:
icurve = interp1d(gs98_trk[:, 0], gs98_trk[:, 1:], kind='linear', axis=0)
except IndexError:
continue
# extract properties at the requested age
trk_props = icurve(ages)
i = 0
for props in trk_props:
s = '{:6.3f}'.format(mass)
for prop in props:
if np.isnan(prop) or prop < -12.0:
prop = -12.0
s += '{:14.6f}'.format(prop)
s += '\n'
output_files[i].write(s)
i += 1
#print "{:4.2f} Mo Track Processed.".format(mass)
# close output files
for f in output_files:
f.close()
Explanation: Magnetic isochrones are stored in the directory files/ and follow the format outlined in the two code snippets above. We can take a quick look at some of the properties of these isochrones and how they compare to standard stellar evolution isochrones (i.e., without a magnetic perturbation).
A tarball with all of the above computed isochrones can be found in files/dmestar_gas07_z+0.00_a+0.00_mag25kG.tgz.
Dartmouth & PHOENIX; Solar abundance: Grevesse & Sauval 1998
End of explanation
fine_mass_grid = np.arange(0.1, 0.85, 0.02)
for age in ages:
iso_filename = 'files/dmestar_{:07.1f}myr_gs98_z+0.00_a+0.00_mag25kG.iso'.format(age/1.0e6)
isochrone = np.genfromtxt(iso_filename)
# generate interpolation curves
icurve = interp1d(isochrone[:,0], isochrone[:,1:], axis=0, kind='slinear')
# interpolate onto a finer mass grid
fine_isochrone = icurve(fine_mass_grid)
fine_isochrone = np.column_stack((fine_mass_grid, fine_isochrone))
# write header
header = 'Dartmouth Stellar Evolution Model: Quick Isochrone \n\n'
header += 'Age = {:7.1f} Myr [Fe/H] = {:+5.2f} [a/Fe] = {:+5.2f} \n\n'.format(age/1.e6, 0.0, 0.0)
header += '{:^14} {:^14} {:^14} {:^14} {:^14} {:^14}'.format('Mass', 'log(Teff)', 'log(g)', 'log(L/Lo)',
'log(R/Ro)', 'A(Li)')
# overwrite original file
np.savetxt(iso_filename, fine_isochrone, fmt='%14.6f', header=header)
Explanation: Interpolate onto a finer mass grid,
End of explanation
# GS98 isochrones
gs98_05 = np.genfromtxt('files/dmestar_00005.0myr_gs98_z+0.00_a+0.00_mag25kG.iso')
gs98_12 = np.genfromtxt('files/dmestar_00012.0myr_gs98_z+0.00_a+0.00_mag25kG.iso')
gs98_30 = np.genfromtxt('files/dmestar_00030.0myr_gs98_z+0.00_a+0.00_mag25kG.iso')
# GAS07 isochrones
gas07_05 = np.genfromtxt('files/dmestar_00005.0myr_gas07_z+0.00_a+0.00_mag25kG.iso')
gas07_12 = np.genfromtxt('files/dmestar_00012.0myr_gas07_z+0.00_a+0.00_mag25kG.iso')
gas07_30 = np.genfromtxt('files/dmestar_00030.0myr_gas07_z+0.00_a+0.00_mag25kG.iso')
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
ax[0].set_title('GAS07 Series', fontsize=22.)
ax[1].set_title('GS98 Series', fontsize=22.)
for axis in ax:
axis.set_xlabel('Effective Temperature (K)', fontsize=20.)
axis.set_ylabel('$\\log (L / L_{\\odot})$', fontsize=20.)
axis.set_xlim(4500., 2500.)
axis.set_ylim(-2.5, 0.0)
axis.tick_params(which='major', axis='both', length=10., labelsize=16.)
# GAS07 series
ax[0].plot(10.0**gas07_05[:, 1], gas07_05[:, 3], '-', lw=2, color='#1e90ff')
ax[0].plot(10.0**gas07_12[:, 1], gas07_12[:, 3], '--', lw=2, color='#1e90ff')
ax[0].plot(10.0**gas07_30[:, 1], gas07_30[:, 3], '-.', lw=2, color='#1e90ff')
# GS98 series
ax[1].plot(10.0**gs98_05[:, 1], gs98_05[:, 3], '-', lw=2, color='#1e90ff')
ax[1].plot(10.0**gs98_12[:, 1], gs98_12[:, 3], '--', lw=2, color='#1e90ff')
ax[1].plot(10.0**gs98_30[:, 1], gs98_30[:, 3], '-.', lw=2, color='#1e90ff')
fig.tight_layout()
Explanation: Magnetic isochrones are stored in the directory files/ and follow the format outlined in the two code snippets above. We can take a quick look at some of the properties of these isochrones and how they compare to standard stellar evolution isochrones (i.e., without a magnetic perturbation).
A tarball with all of the above computed isochrones can be found in files/dmestar_gs98_z+0.00_a+0.00_mag25kG.tgz.
Simple Diagnostic Plots
Here are some simple diagnostic figures to assess that isochrones look smooth and do not deviate too significantly from expectation (i.e., they're smooth and properties change monotonically). Plot a few isochrones: 5 Myr, 12 Myr, and 30 Myr.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
ax[0].set_title('GAS07 Series', fontsize=22.)
ax[1].set_title('GS98 Series', fontsize=22.)
for axis in ax:
axis.set_xlabel('Effective Temperature (K)', fontsize=20.)
axis.set_ylabel('A(Li)', fontsize=20.)
axis.set_xlim(4500., 2500.)
axis.set_ylim(2.5, 3.5)
axis.tick_params(which='major', axis='both', length=10., labelsize=16.)
axis.plot([4500., 2500.], [3.30, 3.30], '--', lw=1, color="#555555")
# GAS07 series
ax[0].plot(10.0**gas07_05[:, 1], gas07_05[:, 5], '-', lw=2, color='#1e90ff')
ax[0].plot(10.0**gas07_12[:, 1], gas07_12[:, 5], '--', lw=2, color='#1e90ff')
ax[0].plot(10.0**gas07_30[:, 1], gas07_30[:, 5], '-.', lw=2, color='#1e90ff')
# GS98 series
ax[1].plot(10.0**gs98_05[:, 1], gs98_05[:, 5], '-', lw=2, color='#1e90ff')
ax[1].plot(10.0**gs98_12[:, 1], gs98_12[:, 5], '--', lw=2, color='#1e90ff')
ax[1].plot(10.0**gs98_30[:, 1], gs98_30[:, 5], '-.', lw=2, color='#1e90ff')
fig.tight_layout()
Explanation: There looks to be some noise in the GS98 isochrones at the highest temperatures, which is likely related to the convergence issues with those above $0.90 M_{\odot}$. Nevertheless, the isochrones appear quite smooth.
Quick look at Li depletion curves. ~~(note: due to issues with NaNs in the 28+ Myr isochrones, switching from 30 Myr to 27 Myr.)~~
End of explanation
<END_TASK> |
15,895 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Cumulative Sum Prediction
This is the fifth toy example from Jason Brownlee's Long Short Term Memory Networks with Python. It demonstrates the solution to a sequence-to-sequence (aka seq2seq) prediction problem. Per section 10.2 of the book
Step1: Prepare Data
Step2: Define Network
The input and output sequences have the same length. Our network follows the model built (using Keras) in the book. Unlike the typical encoder-decoder LSTM architecture that is used for most seq2seq problems, here we have a single LSTM followed by an FCN layer at each timestep of its output. Each FCN returns a binary 0/1 output, and these per-timestep outputs are concatenated to produce the predicted result.
Step3: Train Network
Step4: Evaluate Network | <ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
from sklearn.metrics import accuracy_score, confusion_matrix
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
import os
import shutil
%matplotlib inline
DATA_DIR = "../../data"
MODEL_FILE = os.path.join(DATA_DIR, "torch-10-cumsum-predict-{:d}.model")
TRAIN_SIZE = 7500
VAL_SIZE = 100
TEST_SIZE = 500
SEQ_LENGTH = 10
EMBED_SIZE = 1
BATCH_SIZE = 32
NUM_EPOCHS = 10
LEARNING_RATE = 1e-3
Explanation: Cumulative Sum Prediction
This is the fifth toy example from Jason Brownlee's Long Short Term Memory Networks with Python. It demonstrates the solution to a sequence-to-sequence (aka seq2seq) prediction problem. Per section 10.2 of the book:
The problem is defined as a sequence of random values between 0 and 1. This sequence is taken as input for the problem with each number provided once per time step. A binary label (0 or 1) is associated with each input. The output values are initially all 0. Once the cumulative sum of the input values in the sequence exceeds a threshold, then the output value flips from 0 to 1. A threshold of one quarter (1/4) of the sequence length is used, so for a sequence of length 10, the threshold is 2.5.
We will frame the problem to make the best use of the Bidirectional LSTM architecture.
The output sequence will be produced after the entire input sequence has been fed into the
model. Technically, this means this is a sequence-to-sequence prediction problem that requires
a many-to-many prediction model. It is also the case that the input and output sequences have
the same number of time steps (length).
End of explanation
def generate_sequence(seq_len):
xs = np.random.random(seq_len)
ys = np.array([0 if x < 2.5 else 1 for x in np.cumsum(xs).tolist()])
return xs, ys
X, Y = generate_sequence(SEQ_LENGTH)
print(X)
print(Y)
def generate_data(seq_len, num_seqs):
xseq, yseq = [], []
for i in range(num_seqs):
X, Y = generate_sequence(seq_len)
xseq.append(X)
yseq.append(Y)
return np.expand_dims(np.array(xseq), axis=2), np.array(yseq)
Xtrain, Ytrain = generate_data(SEQ_LENGTH, TRAIN_SIZE)
Xval, Yval = generate_data(SEQ_LENGTH, VAL_SIZE)
Xtest, Ytest = generate_data(SEQ_LENGTH, TEST_SIZE)
print(Xtrain.shape, Ytrain.shape, Xval.shape, Yval.shape, Xtest.shape, Ytest.shape)
Explanation: Prepare Data
End of explanation
class CumSumPredictor(nn.Module):
def __init__(self, seq_len, input_dim, hidden_dim, output_dim):
super(CumSumPredictor, self).__init__()
self.seq_len = seq_len
self.hidden_dim = hidden_dim
self.output_dim = output_dim
# network layers
self.enc_lstm = nn.LSTM(input_dim, hidden_dim, 1, batch_first=True,
bidirectional=True)
self.fcn = nn.Linear(hidden_dim * 2, output_dim) # bidirectional input
self.fcn_relu = nn.ReLU()
self.fcn_softmax = nn.Softmax()
def forward(self, x):
if torch.cuda.is_available():
h = (Variable(torch.randn(2, x.size(0), self.hidden_dim).cuda()),
Variable(torch.randn(2, x.size(0), self.hidden_dim).cuda()))
else:
h = (Variable(torch.randn(2, x.size(0), self.hidden_dim)),
Variable(torch.randn(2, x.size(0), self.hidden_dim)))
x, h = self.enc_lstm(x, h) # encoder LSTM
x_fcn = Variable(torch.zeros(x.size(0), self.seq_len, self.output_dim))
for i in range(self.seq_len): # decoder LSTM -> fcn for each timestep
x_fcn[:, i, :] = self.fcn_softmax(self.fcn_relu(self.fcn(x[:, i, :])))
x = x_fcn
return x
model = CumSumPredictor(SEQ_LENGTH, EMBED_SIZE, 50, 2)
if torch.cuda.is_available():
model.cuda()
print(model)
# size debugging
print("--- size debugging ---")
inp = Variable(torch.randn(BATCH_SIZE, SEQ_LENGTH, EMBED_SIZE))
outp = model(inp)
print(outp.size())
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
Explanation: Define Network
The input and output sequences have the same length. Our network follows the model built (using Keras) in the book. Unlike the typical encoder-decoder LSTM architecture that is used for most seq2seq problems, here we have a single LSTM followed by an FCN layer at each timestep of its output. Each FCN returns a binary 0/1 output, and these per-timestep outputs are concatenated to produce the predicted result.
End of explanation
def compute_accuracy(pred_var, true_var):
if torch.cuda.is_available():
ypred = pred_var.cpu().data.numpy()
ytrue = true_var.cpu().data.numpy()
else:
ypred = pred_var.data.numpy()
ytrue = true_var.data.numpy()
pred_nums, true_nums = [], []
for i in range(pred_var.size(0)): # for each row of output
pred_nums.append(int("".join([str(x) for x in ypred[i].tolist()]), 2))
true_nums.append(int("".join([str(x) for x in ytrue[i].tolist()]), 2))
return pred_nums, true_nums, accuracy_score(pred_nums, true_nums)
history = []
for epoch in range(NUM_EPOCHS):
num_batches = Xtrain.shape[0] // BATCH_SIZE
shuffled_indices = np.random.permutation(np.arange(Xtrain.shape[0]))
train_loss, train_acc = 0., 0.
for bid in range(num_batches):
# extract one batch of data
Xbatch_data = Xtrain[shuffled_indices[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]]
Ybatch_data = Ytrain[shuffled_indices[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
# initialize gradients
optimizer.zero_grad()
# forward
loss = 0.
Ybatch_ = model(Xbatch)
for i in range(Ybatch.size(1)):
loss += loss_fn(Ybatch_[:, i, :], Ybatch[:, i])
# backward
loss.backward()
train_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(2)
_, _, acc = compute_accuracy(ybatch_, Ybatch)
train_acc += acc
optimizer.step()
# compute training loss and accuracy
train_loss /= num_batches
train_acc /= num_batches
# compute validation loss and accuracy
val_loss, val_acc = 0., 0.
num_val_batches = Xval.shape[0] // BATCH_SIZE
for bid in range(num_val_batches):
# data
Xbatch_data = Xval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Ybatch_data = Yval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
loss = 0.
Ybatch_ = model(Xbatch)
for i in range(Ybatch.size(1)):
loss += loss_fn(Ybatch_[:, i, :], Ybatch[:, i])
val_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(2)
_, _, acc = compute_accuracy(ybatch_, Ybatch)
val_acc += acc
val_loss /= num_val_batches
val_acc /= num_val_batches
torch.save(model.state_dict(), MODEL_FILE.format(epoch+1))
print("Epoch {:2d}/{:d}: loss={:.3f}, acc={:.3f}, val_loss={:.3f}, val_acc={:.3f}"
.format((epoch+1), NUM_EPOCHS, train_loss, train_acc, val_loss, val_acc))
history.append((train_loss, val_loss, train_acc, val_acc))
losses = [x[0] for x in history]
val_losses = [x[1] for x in history]
accs = [x[2] for x in history]
val_accs = [x[3] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs, color="r", label="train")
plt.plot(val_accs, color="b", label="valid")
plt.legend(loc="best")
plt.subplot(212)
plt.title("Loss")
plt.plot(losses, color="r", label="train")
plt.plot(val_losses, color="b", label="valid")
plt.legend(loc="best")
plt.tight_layout()
plt.show()
Explanation: Train Network
End of explanation
saved_model = CumSumPredictor(SEQ_LENGTH, EMBED_SIZE, 50, 2)
saved_model.load_state_dict(torch.load(MODEL_FILE.format(NUM_EPOCHS)))
if torch.cuda.is_available():
saved_model.cuda()
ylabels, ypreds = [], []
num_test_batches = Xtest.shape[0] // BATCH_SIZE
for bid in range(num_test_batches):
Xbatch_data = Xtest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Ybatch_data = Ytest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
Ybatch_ = saved_model(Xbatch)
_, ybatch_ = Ybatch_.max(2)
pred_nums, true_nums, _ = compute_accuracy(ybatch_, Ybatch)
ylabels.extend(true_nums)
ypreds.extend(pred_nums)
print("Test accuracy: {:.3f}".format(accuracy_score(ylabels, ypreds)))
Xbatch_data = Xtest[0:10]
Ybatch_data = Ytest[0:10]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
Ybatch_ = saved_model(Xbatch)
_, ybatch_ = Ybatch_.max(2)
if torch.cuda.is_available():
ybatch__data = ybatch_.cpu().data.numpy()
else:
ybatch__data = ybatch_.data.numpy()
for i in range(Ybatch_data.shape[0]):
label = Ybatch_data[i]
pred = ybatch__data[i]
correct = "True" if np.array_equal(label, pred) else "False"
print("y={:s}, yhat={:s}, correct={:s}".format(str(label), str(pred), correct))
for i in range(NUM_EPOCHS):
os.remove(MODEL_FILE.format(i + 1))
Explanation: Evaluate Network
End of explanation
<END_TASK> |
15,896 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x8
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x8
conv2 = tf.layers.conv2d(maxpool1, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x16
conv3 = tf.layers.conv2d(maxpool2, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x32
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x32
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x32
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x16
conv6 = tf.layers.conv2d(upsample3, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. For convolutional layers, use tf.layers.conv2d. For example, you would write conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu) for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use tf.layers.max_pooling2d.
End of explanation
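# Standalone shape check of the upsample-then-convolve pattern discussed above (the
# sizes are arbitrary and this only adds a few throwaway ops to the default graph):
# the nearest-neighbour resize changes height and width only, and the convolution
# that follows sets the depth.
demo_in = tf.placeholder(tf.float32, (None, 4, 4, 8))
demo_up = tf.image.resize_nearest_neighbor(demo_in, (7, 7))            # depth stays at 8
demo_conv = tf.layers.conv2d(demo_up, 16, (3, 3), padding='same',
                             activation=tf.nn.relu)                    # depth becomes 16
print(demo_up.get_shape().as_list(), demo_conv.get_shape().as_list())  # [None, 7, 7, 8] [None, 7, 7, 16]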
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation
<END_TASK> |
15,897 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to Regression.
Author
Step1: 1. The regression problem
The goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_1, X_2, \ldots, X_N$ (that we will collect in a single vector $\bf X$).
Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.
<img src="figs/block_diagram.png", width=600>
The only information available to estimate the relation between the inputs and the target is a dataset $\mathcal D$ containing several observations of all variables.
$$\mathcal{D} = {{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$$
The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good prediction of the true value of the target, $s$.
<img src="figs/predictor.png", width=300>
2. Examples of regression problems.
The <a href=http
Step2: This dataset contains
Step3: observations of the target variable and
Step4: input variables.
3. Scatter plots
3.1. 2D scatter plots
When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>
Python methods plot and scatter from the matplotlib package can be used for these graphical representations.
Step5: 3.2. 3D Plots
With the addition of a third coordinate, plot and scatter can be used for 3D plotting.
Exercise 1
Step6: 4. Evaluating a regression task
In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are
Square error
Step7: The overall prediction performance is computed as the average of the loss over a set of samples
Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import numpy as np
import scipy.io # To read matlab files
import pandas as pd # To read data tables from csv files
# For plots and graphical results
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pylab
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
Explanation: Introduction to Regression.
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Notebook version: 1.1 (Sep 08, 2017)
Changes: v.1.0 - First version. Extracted from regression_intro_knn v.1.0.
v.1.1 - Compatibility with python 2 and python 3
Pending changes: test_helper does not work in python 3.
End of explanation
from sklearn import datasets
# Load the dataset. Select it by uncommenting the appropriate line
D_all = datasets.load_boston()
#D_all = datasets.load_diabetes()
# Extract data and data parameters.
X = D_all.data # Complete data matrix (including input and target variables)
S = D_all.target # Target variables
n_samples = X.shape[0] # Number of observations
n_vars = X.shape[1] # Number of variables (including input and target)
Explanation: 1. The regression problem
The goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_1, X_2, \ldots, X_N$ (that we will collect in a single vector $\bf X$).
Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.
<img src="figs/block_diagram.png", width=600>
The only information available to estimate the relation between the inputs and the target is a dataset $\mathcal D$ containing several observations of all variables.
$$\mathcal{D} = {{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$$
The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good prediction of the true value of the target, $s$.
<img src="figs/predictor.png", width=300>
2. Examples of regression problems.
The <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems.
<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken frome theses suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>).
<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes> Diabetes dataset</a>.
We can load these datasets as follows:
End of explanation
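# Optional sketch: the loaded Bunch object usually also carries metadata describing
# the input columns (the exact fields can vary between scikit-learn versions and
# datasets, so treat these attribute names as an assumption for the Boston data).
print(D_all.feature_names)  # names of the input variables
print(D_all.DESCR[:500])    # beginning of the bundled dataset description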
print(n_samples)
Explanation: This dataset contains
End of explanation
print(n_vars)
Explanation: observations of the target variable and
End of explanation
# Select a dataset
nrows = 4
ncols = 1 + (X.shape[1] - 1) // nrows  # integer division so plt.subplot receives an int in Python 3
# Some adjustment for the subplot.
pylab.subplots_adjust(hspace=0.2)
# Plot all variables
for idx in range(X.shape[1]):
ax = plt.subplot(nrows,ncols,idx+1)
ax.scatter(X[:,idx], S) # <-- This is the key command
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
plt.ylabel('Target')
Explanation: input variables.
3. Scatter plots
3.1. 2D scatter plots
When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>
Python methods plot and scatter from the matplotlib package can be used for these graphical representations.
End of explanation
# <SOL>
# </SOL>
Explanation: 3.2. 3D Plots
With the addition of a third coordinate, plot and scatter can be used for 3D plotting.
Exercise 1:
Select the diabetes dataset. Visualize the target versus components 2 and 4. (You can get more info about the <a href=http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter>scatter</a> command and an <a href=http://matplotlib.org/examples/mplot3d/scatter3d_demo.html>example of use</a> in the <a href=http://matplotlib.org/index.html> matplotlib</a> documentation)
End of explanation
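# One possible sketch for Exercise 1 (it assumes the diabetes dataset was selected
# above; the column indices 2 and 4 come from the exercise statement).
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 2], X[:, 4], S, s=20)
ax.set_xlabel('$x_2$')
ax.set_ylabel('$x_4$')
ax.set_zlabel('Target, $s$')
plt.show()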
# In this section we will plot together the square and absolute errors
grid = np.linspace(-3,3,num=100)
plt.plot(grid, grid**2, 'b-', label='Square error')
plt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')
plt.xlabel('Error')
plt.ylabel('Cost')
plt.legend(loc='best')
plt.show()
Explanation: 4. Evaluating a regression task
In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are
Square error: $l(s, \hat{s}) = (s - \hat{s})^2$
Absolute error: $l(s, \hat{s}) = |s - \hat{s}|$
Note that both the square and absolute errors are functions of the estimation error $e = s-{\hat s}$ alone. However, this is not necessarily the case for every loss. As an example, imagine a situation in which we would like the penalty to grow with the magnitude of the target variable. In such a case, the following cost would better fit our needs: $l(s,{\hat s}) = s^2 \left(s-{\hat s}\right)^2$.
End of explanation
# Load dataset in arrays X and S
df = pd.read_csv('datasets/x01.csv', sep=',', header=None)
X = df.values[:,0]
S = df.values[:,1]
# <SOL>
# </SOL>
if sys.version_info.major==2:
Test.assertTrue(np.isclose(R, 153781.943889), 'Incorrect value for the average square error')
Explanation: The overall prediction performance is computed as the average of the loss over a set of samples:
$${\bar R} = \frac{1}{K}\sum_{k=1}^K l\left(s^{(k)}, \hat{s}^{(k)}\right)$$
Exercise 2:
The dataset in file 'datasets/x01.csv', taken from <a href="http://people.sc.fsu.edu/~jburkardt/datasets/regression/x01.txt">here</a> records the average weight of the brain and body for a number of mammal species.
* Represent a scatter plot of the target variable versus the one-dimensional input.
* Plot, over the same plot, the prediction function given by $S = 1.2 X$
* Compute the square error rate for the given dataset.
End of explanation
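# A minimal sketch of one way to complete Exercise 2 (the predictor S = 1.2 X and the
# square error come from the exercise statement; axis labels are kept generic).
plt.scatter(X, S, label='data')
xgrid = np.linspace(min(X), max(X), num=100)
plt.plot(xgrid, 1.2 * xgrid, 'r-', label=r'$\hat{s} = 1.2x$')
plt.xlabel('Input, $x$')
plt.ylabel('Target, $s$')
plt.legend(loc='best')
plt.show()

R = np.mean((S - 1.2 * X)**2)  # average square error of the predictor s_hat = 1.2 x
print(R)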
<END_TASK> |
15,898 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Part 1
Step1: Next we establish a link with the database. We know that the database created by tweetharvester is called tweets_db and within it is a collection of tweets that goes by the name of the project, in this example
Step2: We now have an object, coll, that offers full access to the MongoDB API where we can analyse the data in the collected tweets. For instance, in our small example collection, we can count the number of tweets
Step3: Or we can count the number of tweets that are geolocated with a field containing the latitude and longitude of the user when they sent the tweet. We construct a MongoDB query that looks for a non-empty field called coordinates.
Step4: Or how many tweets had the hashtag #happy in them?
Step5: Pre-requisites for Analysis
In order to perform these analyses there are a few things one needs to know
Step6: This JSON data structure is documented on the Twitter API website where each field is described in detail. It is recommended that this description is studied in order to understand how to construct valid queries.
tweetharvest is faithful to the core structure of the tweets as described in that documentation, but with minor differences created for convenience
Step7: Establish a Link to the Dataset as a MongoDB Collection
Step8: Descriptive Statistics
Number of Tweets in Dataset
Step9: Number of Geolocated Tweets
Step10: Range of Creation Times for Tweets
Step11: Plot Tweets per Hour
Step12: More Complex Query
As an example of a more complex query, the following demonstrates how to extract all tweets that are not retweets, contain the hashtag #happy as well as at least one other hashtag, and that are written in English. These attributes are passed to the .find method as a dictionary, and the hashtags are then extracted.
The hashtags of the first ten tweets meeting this specification are then printed out.
Step13: Build a Network of Hashtags
We could use this method to produce a network of hashtags. The following illustrates this by
Step14: Generate list of all pairs of hashtags
Step15: Build graph with weighted edges between hashtags
Step16: Remove rarer edges
Finally we remove rare edges (defined here arbitrarily as edges that have a weighting of less than 25), then print a table of these edges sorted in descending order by weight.
Step17: View as Table
Step18: Plot the Network
Step19: Repeat for #sad
Step20: Graph is drawn with a spring layout to bring out more clearly the disconnected sub-graphs. | <ASSISTANT_TASK:>
Python Code:
import pymongo
Explanation: Part 1: tweetharvest Example Analysis
This is an example notebook demonstrating how to establish a connection to a database of tweets collected using tweetharvest. It presupposes that all the setup instructions have been completed (see README file for that repository) and that MongoDB server is running as described there. We start by importing core packages the PyMongo package, the official package to access MongoDB databases.
End of explanation
db = pymongo.MongoClient().tweets_db
coll = db.emotweets
coll
Explanation: Next we establish a link with the database. We know that the database created by tweetharvester is called tweets_db and within it is a collection of tweets that goes by the name of the project, in this example: emotweets.
End of explanation
coll.count()
Explanation: We now have an object, coll, that offers full access to the MongoDB API where we can analyse the data in the collected tweets. For instance, in our small example collection, we can count the number of tweets:
End of explanation
query = {'coordinates': {'$ne': None}}
coll.find(query).count()
Explanation: Or we can count the number of tweets that are geolocated with a field containing the latitude and longitude of the user when they sent the tweet. We construct a MongoDB query that looks for a non-empty field called coordinates.
End of explanation
query = {'hashtags': {'$in': ['happy']}}
coll.find(query).count()
Explanation: Or how many tweets had the hashtag #happy in them?
End of explanation
coll.find_one()
Explanation: Pre-requisites for Analysis
In order to perform these analyses there are a few things one needs to know:
At the risk of stating the obvious: how to code in Python (there is also an excellent tutorial). Please note that the current version of tweetharvest uses Python 2.7, and not Python 3.
How to perform mongoDB queries, including aggregation, counting, grouping of subsets of data. There is a most effective short introduction (The Little Book on MongoDB by Karl Seguin), as well as extremely rich documentation on the parent website.
How to use PyMongo to interface with the MongoDB API.
Apart from these skills, one needs to know how each status is stored in the database. Here is an easy way to look at the data structure of one tweet.
End of explanation
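# A taste of the aggregation queries mentioned above: group the harvested tweets by
# language and count them (with PyMongo 3.x, aggregate returns an iterable cursor).
pipeline = [
    {'$group': {'_id': '$lang', 'count': {'$sum': 1}}},
    {'$sort': {'count': -1}}
]
for doc in coll.aggregate(pipeline):
    print(doc)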
%matplotlib inline
import pymongo # in case we have run Part 1 above
import pandas as pd # for data manipulation and analysis
import matplotlib.pyplot as plt
Explanation: This JSON data structure is documented on the Twitter API website where each field is described in detail. It is recommended that this description is studied in order to understand how to construct valid queries.
tweetharvest is faithful to the core structure of the tweets as described in that documentation, but with minor differences created for convenience:
All date fields are stored as MongoDB Date objects and returned as Python datetime objects. This makes it easier to work on date ranges, sort by date, and do other date and time related manipulation.
A hashtags field is created for convenience. This contains a simple array of all the hashtags contained in a particular tweet and can be queried directly instead of looking for tags inside a dictionary, inside a list of other entities. It is included for ease of querying but may be ignored if one prefers.
Next Steps
This notebook establishes how you can connect to the database of tweets that you have harvested and how you can use the power of Python and MongoDB to access and analyse your collections. Good luck!
Part 2: tweetharvest Further Analysis
Assuming we need some more advanced work to be done on the dataset we have collected, below are some sample analyses to dip our toes in the water.
The examples below are a further illustration of using our dataset with the standard Python modules used in data science. The typical idiom is to query MongoDB to get a cursor on our dataset, import the results into an analytic tool such as Pandas, and then produce the analysis. The analyses below require that a few packages are installed on our system:
matplotlib: a python 2D plotting library (documentation)
pandas: "an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools" (documentation)
Important Note
The dataset used in this notebook is not published on the Github repository. If you want to experiment with your own data, you need to install the tweetharvest package, harvest some tweets to replicate the emotweets project embedded there, and then run the notebook. The intended use of this example notebook is simply as an illustration of the type of analysis one might want to do using your own tools.
End of explanation
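# Because created_at is stored as a native Date object, range queries can use Python
# datetime values directly; the one-day window below is an arbitrary example and
# reuses the coll handle from Part 1.
from datetime import datetime
date_query = {'created_at': {'$gte': datetime(2015, 6, 1), '$lt': datetime(2015, 6, 2)}}
print(coll.find(date_query).count())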
db = pymongo.MongoClient().tweets_db
COLL = db.emotweets
COLL
Explanation: Establish a Link to the Dataset as a MongoDB Collection
End of explanation
COLL.count()
def count_by_tag(coll, hashtag):
query = {'hashtags': {'$in': [hashtag]}}
count = coll.find(query).count()
return count
print 'Number of #happy tweets: {}'.format(count_by_tag(COLL, 'happy'))
print 'Number of #sad tweets: {}'.format(count_by_tag(COLL, 'sad'))
Explanation: Descriptive Statistics
Number of Tweets in Dataset
End of explanation
query = {'coordinates': {'$ne': None}}
COLL.find(query).count()
Explanation: Number of Geolocated Tweets
End of explanation
# return a cursor that iterates over all documents and returns the creation date
cursor = COLL.find({}, {'created_at': 1, '_id': 0})
# list all the creation times and convert to Pandas DataFrame
times = pd.DataFrame(list(cursor))
times = pd.to_datetime(times.created_at)
earliest_timestamp = min(times)
latest_timestamp = max(times)
print 'Creation time for EARLIEST tweet in dataset: {}'.format(earliest_timestamp)
print 'Creation time for LATEST tweet in dataset: {}'.format(latest_timestamp)
Explanation: Range of Creation Times for Tweets
End of explanation
query = {} # empty query means find all documents
# return just two columns, the date of creation and the id of each document
projection = {'created_at': 1}
df = pd.DataFrame(list(COLL.find(query, projection)))
times = pd.to_datetime(df.created_at)
df.set_index(times, inplace=True)
df.drop('created_at', axis=1, inplace=True)
tweets_all = df.resample('60Min', how='count')
tweets_all.plot(figsize=[12, 7], title='Number of Tweets per Hour', legend=None);
Explanation: Plot Tweets per Hour
End of explanation
query = { # find all documents that:
'hashtags': {'$in': ['happy']}, # contain #happy hashtag
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = COLL.find(query, projection)
for tags in cursor[:10]:
print tags['hashtags']
Explanation: More Complex Query
As an example of a more complex query, the following demonstrates how to extract all tweets that are not retweets, contain the hashtag #happy as well as at least one other hashtag, and that are written in English. These attributes are passed to the .find method as a dictionary, and the hashtags are then extracted.
The hashtags of the first ten tweets meeting this specification are then printed out.
End of explanation
from itertools import combinations
import networkx as nx
Explanation: Build a Network of Hashtags
We could use this method to produce a network of hashtags. The following illustrates this by:
creating a generator function that yields every possible combination of two hashtags from each tweet
adding these pairs of tags as edges in a NetworkX graph
deleting the node happy (since it is connected to all the others by definition)
deleting those edges that are below a threshold weight
plotting the result
In order to run this, we need to install the NetworkX package (pip install networkx, documentation) and import it as well as the combinations function from Python's standard library itertools module.
End of explanation
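# A tiny illustration of what the generator below yields for a single tweet, using a
# made-up hashtag list.
example_tags = ['happy', 'friday', 'weekend']
print(list(combinations(example_tags, 2)))
# [('happy', 'friday'), ('happy', 'weekend'), ('friday', 'weekend')]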
def gen_edges(coll, hashtag):
query = { # find all documents that:
'hashtags': {'$in': [hashtag]}, # contain hashtag of interest
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = coll.find(query, projection)
for tags in cursor:
hashtags = tags['hashtags']
for edge in combinations(hashtags, 2):
yield edge
Explanation: Generate list of all pairs of hashtags
End of explanation
def build_graph(coll, hashtag, remove_node=True):
g = nx.Graph()
for u,v in gen_edges(coll, hashtag):
if g.has_edge(u,v):
# add 1 to weight attribute of this edge
g[u][v]['weight'] = g[u][v]['weight'] + 1
else:
# create new edge of weight 1
g.add_edge(u, v, weight=1)
if remove_node:
# since hashtag is connected to every other node,
# it adds no information to this graph; remove it.
g.remove_node(hashtag)
return g
G = build_graph(COLL, 'happy')
Explanation: Build graph with weighted edges between hashtags
End of explanation
def trim_edges(g, weight=1):
# function from http://shop.oreilly.com/product/0636920020424.do
g2 = nx.Graph()
for u, v, edata in g.edges(data=True):
if edata['weight'] > weight:
g2.add_edge(u, v, edata)
return g2
Explanation: Remove rarer edges
Finally we remove rare edges (defined here arbitrarily as edges that have a weighting of less than 25), then print a table of these edges sorted in descending order by weight.
End of explanation
G2 = trim_edges(G, weight=25)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort(['weight'], ascending=False, inplace=True)
df
Explanation: View as Table
End of explanation
G3 = trim_edges(G, weight=35)
pos=nx.circular_layout(G3) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3.edges()]
weight_list = [edata['weight']/5.0 for u, v, edata in G3.edges(data=True)]
# edges
nx.draw_networkx_edges(G3, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3, pos, font_size=20,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(10, 10)
plt.axis('off');
Explanation: Plot the Network
End of explanation
G_SAD = build_graph(COLL, 'sad')
G2S = trim_edges(G_SAD, weight=5)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2S.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort(['weight'], ascending=False, inplace=True)
df
Explanation: Repeat for #sad
End of explanation
G3S = trim_edges(G_SAD, weight=5)
pos=nx.spring_layout(G3S) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3S, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3S.edges()]
weight_list = [edata['weight'] for u, v, edata in G3S.edges(data=True)]
# edges
nx.draw_networkx_edges(G3S, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3S, pos, font_size=12,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(13, 13)
plt.axis('off');
Explanation: Graph is drawn with a spring layout to bring out more clearly the disconnected sub-graphs.
End of explanation
<END_TASK> |
15,899 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ZeroPadding1D
[convolutional.ZeroPadding1D.0] padding 1 on 3x5 input
Step1: [convolutional.ZeroPadding1D.1] padding 3 on 4x4 input
Step2: [convolutional.ZeroPadding1D.2] padding (3,2) on 4x4 input
Step3: export for Keras.js tests | <ASSISTANT_TASK:>
Python Code:
data_in_shape = (3, 5)
L = ZeroPadding1D(padding=1)
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility); ZeroPadding1D has no weights to set
np.random.seed(240)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding1D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: ZeroPadding1D
[convolutional.ZeroPadding1D.0] padding 1 on 3x5 input
End of explanation
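# NumPy sketch (with a made-up array) of what ZeroPadding1D(padding=1) does to one
# sample: a row of zeros is added before and after the steps axis, so 3x5 becomes 5x5.
demo = np.arange(15).reshape(3, 5)
padded = np.pad(demo, ((1, 1), (0, 0)), mode='constant')
print(demo.shape, '->', padded.shape)  # (3, 5) -> (5, 5)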
data_in_shape = (4, 4)
L = ZeroPadding1D(padding=3)
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility); ZeroPadding1D has no weights to set
np.random.seed(241)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding1D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.ZeroPadding1D.1] padding 3 on 4x4 input
End of explanation
data_in_shape = (4, 4)
L = ZeroPadding1D(padding=(3,2))
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility); ZeroPadding1D has no weights to set
np.random.seed(242)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding1D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.ZeroPadding1D.2] padding (3,2) on 4x4 input
End of explanation
import os
filename = '../../../test/data/layers/convolutional/ZeroPadding1D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation
<END_TASK> |