Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 149-45.7k) | code_prompt (stringlengths 67-140k)
---|---|---|
100 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Http Response
Usage
To construct a response packet you have a variety of facilities available.
Previously we saw how to parse HTTP responses using HttpParser. Of course, we can also construct a response packet using the HttpParser class.
Step1: But this is a painful way to construct responses. Hence, other high-level abstractions are available.
For example, the following one-liner will give us the same response packet.
Step2: Notice how okResponse will always add a Content-Length header for you.
You can also customize other headers
Step3: Let's add some content to our response packet
Step4: Note how okResponse automatically added a Content-Length header for us.
Depending upon the --min-compression-length flag, okResponse will also perform compression for content.
For example, the default value for min compression length is 20.
Step5: You can pass a custom value for min_compression_length kwarg to okResponse.
Step6: Internally, okResponse uses build_http_response and hence you can also pass any argument accepted by build_http_response. For example, it supports a conn_close argument which will add a Connection: close header.
Step7: Chunked Encoding
You can also send chunked encoded responses.
Step8: If we omit the min_compression_length flag, the chunked content will get compressed | <ASSISTANT_TASK:>
Python Code:
from proxy.http.parser import HttpParser, httpParserTypes
from proxy.common.constants import HTTP_1_1
response = HttpParser(httpParserTypes.RESPONSE_PARSER)
response.code = b'200'
response.reason = b'OK'
response.version = HTTP_1_1
print(response.build_response())
Explanation: Http Response
Usage
To construct a response packet you have a variety of facilities available.
Previously we saw how to parse HTTP responses using HttpParser. Of course, we can also construct a response packet using the HttpParser class.
End of explanation
from proxy.http.responses import okResponse
print(okResponse().tobytes())
Explanation: But this is a painful way to construct responses. Hence, other high-level abstractions are available.
For example, the following one-liner will give us the same response packet.
End of explanation
response = okResponse(
headers={
b'X-Custom-Header': b'my value',
},
)
print(response.tobytes())
Explanation: Notice how okResponse will always add a Content-Length header for you.
You can also customize other headers
End of explanation
response = okResponse(
content=b'Hello World',
headers={
b'X-Custom-Header': b'my value',
},
)
print(response.tobytes())
Explanation: Let's add some content to our response packet
End of explanation
response = okResponse(
content=b'H' * 21,
headers={
b'X-Custom-Header': b'my value',
},
)
print(response.tobytes())
Explanation: Note how okResponse automatically added a Content-Length header for us.
Depending upon the --min-compression-length flag, okResponse will also perform compression for content.
For example, the default value for min compression length is 20.
End of explanation
response = okResponse(
content=b'H' * 21,
headers={
b'Host': b'jaxl.com',
},
min_compression_length=21,
)
print(response.tobytes())
Explanation: You can pass a custom value for min_compression_length kwarg to okResponse.
End of explanation
response = okResponse(
content=b'Hello World',
headers={
b'Host': b'jaxl.com',
},
conn_close=True,
)
print(response.tobytes())
Explanation: Internally, okResponse uses build_http_response and hence you can also pass any argument accepted by build_http_response. For example, it supports a conn_close argument which will add a Connection: close header. Simply pass conn_close=True.
End of explanation
from proxy.http.parser import ChunkParser
chunks = ChunkParser.to_chunks(b'Hello World', chunk_size=5)
response = okResponse(
content=chunks,
headers={
b'Transfer-Encoding': b'chunked',
},
# Avoid compressing chunks for demo purposes here
# Ideally you should omit this flag and send
# compressed chunks.
min_compression_length=len(chunks),
)
print(response.tobytes())
Explanation: Chunked Encoding
You can also send chunked encoded responses.
End of explanation
response = okResponse(
content=chunks,
headers={
b'Transfer-Encoding': b'chunked',
},
)
print(response.tobytes())
Explanation: If we omit the min_compression_length flag, the chunked content will get compressed.
End of explanation
<END_TASK> |
101 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Dictionaries
A dictionary is a datatype that contains a series of key-value pairs. It is similar to a list except that the indices of the values can be strings, tuples, etc., not just integers. It is also different in that it is unordered. You cannot expect to get the keys out in the same order you put them in.
To create a dictionary
Step1: To access a value, you index into it similarly to a list using square brackets.
value_of_key1 = my_dict['key1']
Step2: Trying to access a key not in the dictionary throws an error
Step3: To add an item to the dictionary, set a value at the new key
dict['new_key'] = value
Step4: To delete a key, use the del keyword
del dict['key to delete']
Step5: Rules on keys
Keys in dictionary must be unique. If you try to make a duplicate key, the data will be overwritten
Keys must be hashable. What this means is they must come from immutable, comparable values. You can use strings, numbers, tuples, frozensets, and (most) objects. You cannot use lists or dictionaries as keys.
Step6: TRY IT
Create a dictionary called vegetable_season with Eggplant-> July and Onion -> May
Step7: Dictionary Operators
The in operator returns a boolean for whether the key is in the dictionary or not.
key in dictionary
Step8: You can use this in an if statement
Step9: TRY IT
Check if 'broccoli' is in vegetable_season. If so, print 'Yum, little trees!'
Step10: Dictionaries and Loops
You can use a for in loop to loop through dictionaries
for key in dictionary
Step11: Dictionary Methods
You can use the keys, values, or items methods to return lists of keys, values, or key-value tuples respectively.
You can then use these for sorting or for looping
Step12: TRY IT
Loop through the sorted keys of the vegetable_season dictionary. For each key, print the month it is in season
Step13: More complex dictionaries
Dictionary keys and values can be almost anything. The keys must be hashable, which means they cannot change. That means that lists and dictionaries cannot be keys (but strings, tuples, and integers can).
Values can be just about anything, though.
Step14: Let's use this to create a more realistic fruit season dictionary
Step15: TRY IT
Add a key to true_fruit_season for 'watermelons'; the season is July, August, and September
Project
Step16: Bonus Material
Auto generating the dictionary for the acrostic | <ASSISTANT_TASK:>
Python Code:
fruit_season = {
'raspberry': 'May',
'apple' : 'September',
'peach' : 'July',
'grape' : 'August'
}
print(type(fruit_season))
print(fruit_season)
Explanation: Dictionaries
A dictionary is a datatype that contains a series of key-value pairs. It is similar to a list except that the indices of the values can be strings, tuples, etc., not just integers. It is also different in that it is unordered. You cannot expect to get the keys out in the same order you put them in.
To create a dictionary:
my_dict = { key1: value1, key2: value2 }
Creating an empty dictionary
my_dict = {}
my_dict = dict()
End of explanation
raspberry_season = fruit_season['raspberry']
print(raspberry_season)
Explanation: To access a value, you index into it similarly to a list using square brackets.
value_of_key1 = my_dict['key1']
End of explanation
print(fruit_season['mangos'])
Explanation: Trying to access a key not in the dictionary throws an error
End of explanation
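If you want to avoid the KeyError shown above, a common alternative is the get method, which returns a default value instead of raising. A small sketch (the 'unknown' default below is just for illustration):
```python
# get() returns None (or a supplied default) instead of raising a KeyError
print(fruit_season.get('mangos'))             # None
print(fruit_season.get('mangos', 'unknown'))  # unknown
```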
fruit_season['strawberry'] = 'May'
print(fruit_season)
fruit_season['strawberry']
Explanation: To add an item to the dictionary, set a value at the new key
dict['new_key'] = value
End of explanation
del fruit_season['strawberry']
print(fruit_season)
Explanation: To delete a key, use the del keyword
del dict['key to delete']
End of explanation
duplicate_fruit_season = {
'raspberry': 'May',
'raspberry': 'June',
}
print(duplicate_fruit_season)
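# Note: the dictionary below uses a list as a key on purpose -- running it raises TypeError: unhashable type: 'list'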
mutable_key = {
['watermelon', 'cantaloupe', 'honeydew']: 'July'
}
# The solution is to use a tuple instead
immutable_key = {
('watermelon', 'cantelope', 'honeydew'): 'July'
}
Explanation: Rules on keys
Keys in dictionary must be unique. If you try to make a duplicate key, the data will be overwritten
Keys must be hashable. What this means is they must come from immutable, comparable values. You can use strings, numbers, tuples, frozensets, and (most) objects. You cannot use lists or dictionaries as keys.
End of explanation
vegetable_season = {
'Eggplant': 'July',
'Onion': 'May'
}
print(vegetable_season)
Explanation: TRY IT
Create a dictionary called vegetable_season with Eggplant-> July and Onion -> May
End of explanation
print('raspberry' in fruit_season)
print('mangos' in fruit_season)
Explanation: Dictionary Operators
The in operator returns a boolean for whether the key is in the dictionary or not.
key in dictionary
End of explanation
if 'pineapple' in fruit_season:
print('Lets eat tropical fruit')
else:
print("Temperate fruit it is.")
Explanation: You can use this in an if statement
End of explanation
if 'broccoli' in vegetable_season:
print('Yum, little trees!')
else:
print('No little trees.')
Explanation: TRY IT
Check if 'broccoli' is in vegetable_season. If so, print 'Yum, little trees!'
End of explanation
for fruit in fruit_season:
print ("{0} is best in {1} (at least in Virginia)".format(fruit.title(), fruit_season[fruit]))
Explanation: Dictionaries and Loops
You can use a for in loop to loop through dictionaries
for key in dictionary:
print(key)
End of explanation
print(fruit_season.keys())
print(fruit_season.values())
print(fruit_season.items())
for key, value in fruit_season.items():
print ("In {0} eat a {1}".format(value, key))
print (sorted(fruit_season.keys()))
Explanation: Dictionary Methods
You can use the keys, values, or items methods to return lists of keys, values, or key-value tuples respectively.
You can then use these for sorting or for looping
End of explanation
for fruit in sorted(fruit_season.keys()):
print('In {0} {1} is in season.'.format(fruit_season[fruit], fruit))
Explanation: TRY IT
Loop through the sorted keys of the vegetable_season dictionary. For each key, print the month it is in season
End of explanation
my_complicated_dictionary = {
(1, 2, 3): 6,
'weevil': {
'e': 2,
'i': 1,
'l': 1,
'v': 1,
'w': 1,
},
9: [3, 3]
}
print (my_complicated_dictionary)
Explanation: More complex dictionaries
Dictionary keys and values can be almost anything. The keys must be hashable, which means they cannot change. That means that lists and dictionaries cannot be keys (but strings, tuples, and integers can).
Values can be just about anything, though.
End of explanation
true_fruit_season = {
'raspberry': ['May', 'June'],
'apple': ['September', 'October', 'November', 'December'],
'peach': ['July', 'August'],
'grape': ['August', 'September', 'October']
}
print (true_fruit_season)
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
for month in months:
print ('It is {0}'.format(month))
for fruit, season in true_fruit_season.items():
if month in season:
print ("\tEat {0}".format(fruit))
Explanation: Let's use this to create a more realistic fruit season dictionary
End of explanation
from random import randint #necessary for "challenge"
# step 1: create a dictionary with one adjective (or a list of adjectives) per letter of the alphabet
myadjectives = {
'A':['admirable','aggressive','agile','agitated','agonizing','agreeable'],
'B':'biodegradable',
'C':['cloudy','creative'],
'D':'deserted',
'E':'everlasting',
'F':'flamboyant',
'G':'grotesque',
'H':'humming',
'I':'imperfect',
'J':'joyful',
'K':'kosher',
'L':'lively',
'M':'modest',
'N':'nervous',
'O':'ornate',
'P':'playful',
'Q':'quick',
'R':['restless','relieved','remarkable','remorseful', 'remote'],
'S':'strong',
'T':'tiny',
'U':'ugly',
'V':'vital',
'W':['wobbly','well-made'],
'X':'oops!',
'Y':'youthful',
'Z':'zesty'
}
type(myadjectives['C'])
# step 2: create funtion acrostic, takes name as an argument
def acrostic (name):
# step 3: capitalize name
capName = name.upper()
# step 4
for letter in capName:
current_adj_list = myadjectives[letter]
if type(current_adj_list) == list:
current_adj = current_adj_list[randint(0,len(current_adj_list)-1)]
else:
current_adj = current_adj_list
print("{0} - {1}".format(letter, current_adj))
acrostic('Lilly')
Explanation: TRY IT
Add a key to true_fruit_season for 'watermelons'; the season is July, August, and September
Project: Acrostic
Create an acrostic poem generator.
You will create a function that takes a name and generates an acrostic poem
Create a dictionary that has each of the capital letters as keys and an adjective that start with the letter as the value and store in variable named adjectives. (Reference: http://www.enchantedlearning.com/wordlist/adjectives.shtml)
Create a function called "acrostic", which takes one parameter "name" as argument.
In the acrostic function capitalize the name (use the upper method)
For each letter in the name
Get the adjective corresponding to that letter (from our dictionary) and store it in a variable called current_adj
Print out Letter-current_adj
Challenge: instead of just one adjective, have each letter's value be a list of adjectives. Use the random module to select a random adjective instead of always selecting the same one.
End of explanation
# If you have a list of adjectives
my_dict = {}
# Imaging this is the full alphabet
for i in ['A', 'B', 'C']:
my_dict[i] = []
for i in ['Adoreable', 'Acceptable', 'Bad', 'Cute', 'Basic', 'Dumb','Active']:
first_char = i[0]
if first_char in my_dict:
my_dict[first_char].append(i)
print (my_dict)
# Generating from a file
my_dict = {}
for i in ['A', 'B', 'C']:
my_dict[i] = []
# adjectives.txt has one adjective per line
with open('adjectives.txt') as fh:
for line in fh:
word = line.rstrip().title()
first_char = word[0]
if first_char in my_dict:
my_dict[first_char].append(word)
print (my_dict['A'])
Explanation: Bonus Material
Auto generating the dictionary for the acrostic:
End of explanation
<END_TASK> |
102 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Drive LEDs with the Raspberry Pi GPIO pins
This notebook will walk you through using the Raspberry Pi General Purpose Input/Output (GPIO) pins to make a LED light burn.
The GPIO pins are the 40 (numbered) pins that electronic components can be connected to.
Before actually using them, we have to agree with our Raspberry Pi how to address the pins and we do that with the setmode function.
IPython Instructions
Step1: BCM is the numbering that is engraved on the Raspberry Pi case we use and that you can also find back on the printed Pinout schema (BCM stands for Broadcom, the company that produces the Raspberry Pi chip).
Watch out 1
Step2: With all GPIO settings done, we can put pin GPIO18 to work.
To do this, we import the time library, so we can use its time related functionality
Step3: Note | <ASSISTANT_TASK:>
Python Code:
#load GPIO library
import RPi.GPIO as GPIO
#Set BCM (Broadcom) mode for the pin numbering
GPIO.setmode(GPIO.BCM)
Explanation: Drive LEDs with the Raspberry Pi GPIO pins
This notebook will walk you through using the Raspberry Pi General Purpose Input/Output (GPIO) pins to make a LED light burn.
The GPIO pins are the 40 (numbered) pins that electronic components can be connected to.
Before actually using them, we have to agree with our Raspberry Pi how to address the pins and we do that with the setmode function.
IPython Instructions:
Place your cursor in the cell below and press Shift+Enter or click the Play button in the toolbar above to execute the code in the cell.
Shift + Enter: Execute the cell and jump to the next one
Ctrl + Enter: Execute the cell, but stay in the current one
Alt + Enter: Execute the cell and create a new one below the current one
As long as a [*] appears to the left of the cell, it is still running. As soon as the code ends, a number appears and any output generated by the code is printed under the cell.
End of explanation
# If we assign the name 'PIN' to the pin number we intend to use, we can reuse it later
# yet still change easily in one place
PIN = 18
# set pin as output
GPIO.setup(PIN, GPIO.OUT)
Explanation: BCM is the numbering that is engraved on the Raspberry Pi case we use and that you can also find back on the printed Pinout schema (BCM stands for Broadcom, the company that produces the Raspberry Pi chip).
Watch out 1: an LED is a diode, which means it is important to send current through it in the correct direction. So the difference between the long and short ends of the LED connectors is important.
Watch out 2: if left to their own devices, LEDs will consume more current than is good for them and will subsequently burn out. If it's a bad day, the Raspberry Pi might get damaged because of it... To prevent this, we need to add a resistor (in series!)
Connect the long end of the LED with GPIO18 on the Pi.
Place a low value resistor (220-360 Ohm) in series with the LED.
Illustration:
<img src="LED01.png" height="300"/>
Then all that is left in our setup is to tell the Raspberry Pi that we intend to use GPIO18 as output, so we can change the voltage on that pin. This way we can send current on the pin to turn on the LED.
End of explanation
import time
# Repeat forever
while True:
# turn off pin 18
GPIO.output(PIN, 0)
# wait for half a second
time.sleep(.5)
# turn on pin 18
GPIO.output(PIN, 1)
# wait for half a second
time.sleep(.5)
#... and again ...
Explanation: With all GPIO settings done, we can put pin GPIO18 to work.
To do this, we import the time library, so we can use its time related functionality: time.sleep(x) is a function that tells the computer to wait for a number of seconds before continuing with the next instruction.
End of explanation
#reset the GPIO
PIN = 18
GPIO.cleanup()
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.OUT)
# Create PWM object and set its frequency in Hz (cycles per second)
led = GPIO.PWM(PIN, 60)
# Start PWM signal
led.start(0)
try:
while True:
# increase duty cycle by 1%
for dc in range(0, 101, 1):
led.ChangeDutyCycle(dc)
time.sleep(0.05)
# and down again ...
for dc in range(100, -1, -1):
led.ChangeDutyCycle(dc)
time.sleep(0.05)
except KeyboardInterrupt:
pass
led.stop()
GPIO.cleanup()
Explanation: Note: To stop the running code, you can:
click the stop button in the toolbar above
choose Kernel > Interrupt from the dropdown menu
use a keyboard shortcut: type i twice (only while the cell is in "command mode", indicated by a grey border).
Pulse Width Modulation
So we can now send binary commands ("on" and "off"), but the GPIO library also allows us to simulate an analog signal, also called PWM (Pulse Width Modulation). A PWM signal consists of a quick succession of on and off signals where the ratio between the total on and the total off time indicates the pseudo-analog level between 0 and 1.
E.g. 75% of the time "on" == a duty cycle of 75% == an analog signal of 0.75 (or 75% of the max amplitude)
In the case of an LED, this will manifest itself as a stronger or weaker light being emitted from the LED. In different motors this can be the rotation speed or the angle to which the axis moves (or is held).
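As a small worked example, using the 60 Hz PWM frequency configured in the code above:
```python
# At 60 Hz one PWM cycle lasts 1000 ms / 60, roughly 16.7 ms,
# so a 75% duty cycle keeps the pin high for about 12.5 ms of every cycle.
freq_hz = 60
duty_cycle_pct = 75
period_ms = 1000.0 / freq_hz
on_time_ms = period_ms * duty_cycle_pct / 100.0
print(round(on_time_ms, 2))  # 12.5
```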
End of explanation
<END_TASK> |
103 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
On-the-fly aluminum potential
For the scheme/workflow of the on-the-fly training, please refer to Vandermause et al..
In our FLARE python package, we provide two ways for the on-the-fly training
Step1: Some Explanation about the parameters
kernels
Step2: Step 3
Step3: Some Explanation about the parameters
dt
Step4: After OTF training is finished, we can check log file al_otf_qe.out for all the information dumped. This output file can be parsed using our otf_parser.py module, which will be introduced in the after-training-tutorial
Step 5 (Optional) | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from flare.gp import GaussianProcess
# make gp model
hyps = np.array([0.1, 1, 0.01])
hyp_labels = ['Signal Std', 'Length Scale', 'Noise Std']
cutoffs = {'threebody':3.9}
gp = \
GaussianProcess(kernels=['threebody'],
hyps=hyps,
cutoffs=cutoffs,
hyp_labels=hyp_labels,
maxiter=50)
Explanation: On-the-fly aluminum potential
For the scheme/workflow of the on-the-fly training, please refer to Vandermause et al..
In our FLARE python package, we provide two ways for the on-the-fly training:
1. Use OTF module and our MD engine (only supports NVE) $\to$ this tutorial
2. Use ASE Atoms with our ASE interface with OTF $\to$ see ase-tutorial
Here we give an example of running OTF (on-the-fly) training with QE (Quantum Espresso) and NVE ensemble.
Step 1: Set up a GP Model
Let’s start up with the GP model with three-body kernel function. (See kernels.py (single component) or mc_simple.py (multi-component) for more options.)
End of explanation
import os
# set up DFT calculator
qe_input = './pwscf.in' # quantum espresso input file
dft_loc = os.environ.get('PWSCF_COMMAND')
Explanation: Some Explanation about the parameters
kernels: set to be the name list of kernel functions to use
Currently we have the choices of twobody, threebody and manybody kernel functions.
If multiple kernels are listed, the resulting kernel is simply the summation of all listed kernels.
hyps: the array of hyperparameters, whose names are shown in hyp_labels.
For two-body kernel function, an array of length 3 is needed, hyps=[sigma_2, ls_2, sigma_n];
For three-body, hyps=[sigma_3, ls_3, sigma_n];
For twobody plus threebody, hyps=[sigma_2, ls_2, sigma_3, ls_3, sigma_n].
For twobody, threebody plus manybody, hyps=[sigma_2, ls_2, sigma_3, ls_3, sigma_m, ls_m, sigma_n].
cutoffs: a dictionary consisting of the corresponding cutoff values for each kernel. Usually we set a larger one for two-body, and a smaller one for threebody and manybody
maxiter: set to constrain the number of steps in training hyperparameters.
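As a short illustration of the conventions above (the numeric values below are placeholders, not recommended settings):
```python
# two-body + three-body kernels -> hyps = [sigma_2, ls_2, sigma_3, ls_3, sigma_n]
hyps_2plus3 = np.array([0.1, 1.0, 0.1, 1.0, 0.01])
cutoffs_2plus3 = {'twobody': 7.0, 'threebody': 3.9}  # larger two-body cutoff, smaller three-body cutoff
```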
Note
See GaussianProcess for complete description of arguments of GaussianProcess class.
See AdvancedHyperparametersSetUp for more complicated hyper-parameters set up.
Step 2: Set up DFT Calculator
The next step is to set up the DFT calculator; here we use QE (Quantum Espresso). Suppose we've prepared a QE input file in the current directory ./pwscf.in, and have set the environment variable PWSCF_COMMAND to the location of our QE executable pw.x. Then we specify the input file and executable by qe_input and dft_loc.
End of explanation
# set up OTF parameters
dt = 0.001 # timestep (ps)
number_of_steps = 100 # number of steps
std_tolerance_factor = 1
max_atoms_added = 2
freeze_hyps = 3
otf = OTF(qe_input, dt, number_of_steps, gp, dft_loc,
std_tolerance_factor,
init_atoms=[0],
calculate_energy=True,
output_name='al_otf_qe',
freeze_hyps=freeze_hyps,
skip=5,
max_atoms_added=max_atoms_added,
write_model=3)
Explanation: Step 3: Set up OTF MD Training Engine
Then we can set up our On-The-Fly (OTF) MD engine for training and simulation.
End of explanation
# run OTF MD
otf.run()
Explanation: Some Explanation about the parameters
dt: the time step in unit of ps
number_of_steps: the number of steps that the MD is run
std_tolerance_factor: the uncertainty threshold = std_tolerance_factor x hyps[-1]. In OTF training, when GP predicts uncertainty above the uncertainty threshold, it will call DFT
max_atoms_added: constrain the number of atoms added to the training set after each DFT call
freeze_hyps: stop training hyperparameters and fix them from the freeze_hyps th step. Usually set to a small number, because for large dataset the training will take long.
init_atoms: list of atoms to be added in the first DFT call. Because there's no uncertainty predicted in the initial DFT call, there's no selection rule to pick the "maximally uncertain" atoms for the training set, so we have to specify which atoms to pick up with this variable.
calculate_energy: if True, the local energy on each atom will be calculated
output_name: the name of the logfile
skip: record/dump the information every skip steps.
write_model: identify the frequency of dumping the GP model during the training.
1: default, dump GP model only when the OTF is complete
2: dump GP model every time after training
3: dump GP model at every step
(We recommend using 3 if you might resume the OTF training, as instructed by the next section.)
Step 4: Launch the OTF Training
Finally, let’s run it!
End of explanation
new_otf = OTF.from_checkpoint(log_name + "_checkpt.json")
new_otf.run()
Explanation: After OTF training is finished, we can check log file al_otf_qe.out for all the information dumped. This output file can be parsed using our otf_parser.py module, which will be introduced in the after-training-tutorial
Step 5 (Optional): Resume Interrupted Training
At the end of each OTF training step, a checkpoint file named <output_name>_checkpt.json will be dumped; in the above example, al_otf_qe_checkpt.json. This file saves the state at the current OTF MD step.
If you've set write_model=3, then there will be another file saving the trained GP model, <output_name>_gp.json, which will be loaded when restarting OTF.
Then, use OTF.from_checkpoint(<output_name>_checkpt.json) to load the OTF state, and resume the training by run().
End of explanation
<END_TASK> |
104 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test you model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | <ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
"""Receive inputs x and weights w"""
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
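For reference, here is a minimal sketch of the idea (written as a standalone helper so it is not mistaken for the graded cs231n/layers.py code): flatten each input into a row vector, multiply by the weight matrix, and add the bias.
```python
def affine_forward_sketch(x, w, b):
    # x has shape (N, d_1, ..., d_k); flatten each example into a row of length D
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)
    return out, cache
```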
Once you are done you can test your implementation by running the following:
End of explanation
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
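A hedged sketch of the backward computation (again a standalone helper, not the assignment file itself); each gradient follows from the chain rule applied to out = x_flat.dot(w) + b:
```python
def affine_backward_sketch(dout, cache):
    x, w, b = cache
    x_flat = x.reshape(x.shape[0], -1)
    dx = dout.dot(w.T).reshape(x.shape)  # gradient w.r.t. inputs, restored to the input shape
    dw = x_flat.T.dot(dout)              # gradient w.r.t. weights
    db = dout.sum(axis=0)                # gradient w.r.t. biases
    return dx, dw, db
```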
End of explanation
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
End of explanation
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
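Both ReLU passes are essentially one-liners; a small sketch of the pair (illustrative helpers, assuming numpy is imported as np as in the setup cell above):
```python
def relu_forward_sketch(x):
    out = np.maximum(0, x)  # zero out negative activations
    return out, x           # cache the input for the backward pass

def relu_backward_sketch(dout, cache):
    x = cache
    return dout * (x > 0)   # pass gradients through only where the input was positive
```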
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
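Conceptually, the two-layer model just chains the modular pieces above. A rough sketch of the forward/backward flow inside the loss computation (regularization terms omitted; the function name and signature are illustrative, not the class API):
```python
def two_layer_loss_sketch(X, y, W1, b1, W2, b2):
    h, cache1 = affine_relu_forward(X, W1, b1)       # hidden layer: affine + ReLU
    scores, cache2 = affine_forward(h, W2, b2)       # output layer: affine only
    loss, dscores = softmax_loss(scores, y)          # data loss (regularization omitted)
    dh, dW2, db2 = affine_backward(dscores, cache2)  # backprop through the output layer
    dX, dW1, db1 = affine_relu_backward(dh, cache1)  # backprop through the hidden layer
    return loss, {'W1': dW1, 'b1': db1, 'W2': dW2, 'b2': db2}
```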
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs. training the five-layer net?
Answer:
Tuning the learning rate is more effective for training the three-layer net, while tuning the weight initialization scale matters more for the five-layer net.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
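For intuition, the body of the momentum update amounts to just two lines; a sketch of the standard rule (the assignment's function also has to read and store these values in its config dict):
```python
def sgd_momentum_sketch(w, dw, v, lr=1e-3, mu=0.9):
    v = mu * v - lr * dw  # exponentially-averaged "velocity"
    w = w + v             # step along the velocity rather than the raw gradient
    return w, v
```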
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
End of explanation
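For reference, here is a compact sketch of what the two standard rules compute (parameter names and defaults below are the usual ones and are illustrative; the assignment's functions read them from a config dict):
```python
def rmsprop_sketch(w, dw, cache, lr=1e-2, decay=0.99, eps=1e-8):
    cache = decay * cache + (1 - decay) * dw**2  # running average of squared gradients
    w = w - lr * dw / (np.sqrt(cache) + eps)     # per-parameter scaled step
    return w, cache

def adam_sketch(w, dw, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    t += 1
    m = beta1 * m + (1 - beta1) * dw             # first-moment estimate
    v = beta2 * v + (1 - beta2) * dw**2          # second-moment estimate
    m_hat = m / (1 - beta1**t)                   # bias corrections for the zero-initialized moments
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v, t
```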
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
model = FullyConnectedNet([100, 100, 100, 100], weight_scale=1e-2)
solver = Solver(model, data,
num_epochs=10, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 1e-3
},
print_every=100,
verbose=True)
solver.train()
best_model = model
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation
<END_TASK> |
105 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Deploying and Making Predictions with a Trained Model
Learning Objectives
- Deploy a model on Google CMLE
- Make online and batch predictions with a deployed model
Introduction
In this notebook, we will deploy the model we trained to predict birthweight and we will use that deployed model to make predictions using our cloud-hosted machine learning model. Cloud ML Engine provides two ways to get predictions from trained models; i.e., online prediction and batch prediction; and we do both in this notebook.
Have a look at this blog post on Online vs Batch Prediction to see the trade-offs of both approaches.
As usual we start by setting our environment variables to reference our Project and Bucket.
Step1: Deploy trained model
Next we'll deploy the trained model to act as a REST web service using a simple gcloud call. To start, we'll check if our model and version already exists and if so, we'll delete them.
Step2: We'll now deploy our model. This will take a few minutes. Once the cell below completes, you should be able to see your newly deployed model in the 'Models' portion of the AI Platform section of the GCP console.
Let's have a look at the contents of the exporter bucket to see which model binaries we have. We can deploy a model by specifying any of these locations. To make sure we grab the model trained from the most recent training job, we'll use tail -1
Step3: Exercise 1
After completing the TODOs in the code cell below, we will be able to deploy our saved model to the cloud and make predictions. There are two TODOs below.
- For the first TODO, write a gcloud command to create a model called babyweight.
- In the second TODO, write a gcloud to create a version called ml_on_gcp.
Look up the Cloud AI-Platform documentation to remind your self how to create these commands. You'll need to provide use the MODEL_NAME, MODEL_VERSION, MODEL_LOCATION, REGION and TFVERSION provided for you.
Step4: Use the deployed model to make online predictions
To make online predictions, we'll send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses are the order of the instances.
Exercise 2
In the cell below we'll make online predictions with the model we just deployed. In order to do that, we need to set up the right token and api to create the correct request post at the bottom. Complete the TODOs below. You will need to
- Specify the correct MODEL_NAME and MODEL_VERSION we want to use for prediction
- Use GoogleCredentials library to create an access token
- Create a variable called api which specifies the Google prediciton API using the Project, model name, and model version
- Add an addtional instance to prediction with the following properties
'is_male'
Step5: Use model for batch prediction
Batch prediction is commonly used when you want to make thousands to millions of predictions at a time. To perform batch prediction we'll create a file with one instance per line and submit the entire prediction job through a gcloud command.
To illustrate this, let's create a file inputs.json which has two instances on which we want to predict.
Step6: Exercise 3
In the cells below we'll write the inputs.json file we just created to our Cloud storage bucket, then submit a batch prediction job to the cloud pointing at that file. We'll also need to specify the output location in GCS where we'd like the final predicitons to be deposited. In the TODOs below, you will need to
- Use gsutil to copy the inputs.json file to the location specified by INPUT
- Use gsutil to clear out the directory specified by OUTPUT. This will ensure the only thing is that location are our predictions
- Complete the glcoud command to submit a batch prediction job.
- Specify the values of all the arguments for the gcloud command
Have a look at the documentation for submitting batch predictions via gcloud to remind yourself of the format.
Step7: Check the ML Engine jobs submitted to the GCP console to make sure the prediction job has completed, then let's have a look at the results of our predictions. | <ASSISTANT_TASK:>
Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/babyweight/trained_model/; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical model if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight/trained_model
fi
Explanation: Deploying and Making Predictions with a Trained Model
Learning Objectives
- Deploy a model on Google CMLE
- Make online and batch predictions with a deployed model
Introduction
In this notebook, we will deploy the model we trained to predict birthweight and we will use that deployed model to make predictions using our cloud-hosted machine learning model. Cloud ML Engine provides two ways to get predictions from trained models; i.e., online prediction and batch prediction; and we do both in this notebook.
Have a look at this blog post on Online vs Batch Prediction to see the trade-offs of both approaches.
As usual we start by setting our environment variables to reference our Project and Bucket.
End of explanation
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
# Check to see if the model and version already exist,
# if so, delete them to deploy anew
if gcloud ai-platform models list | grep "$MODEL_NAME \+ $MODEL_VERSION"; then
echo "Deleting the version '$MODEL_VERSION' of model '$MODEL_NAME'"
yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model=$MODEL_NAME
echo "Deleting the model '$MODEL_NAME'"
yes |gcloud ai-platform models delete ${MODEL_NAME}
else
echo "The model '$MODEL_NAME' with version '$MODEL_VERSION' does not exist."
fi
Explanation: Deploy trained model
Next we'll deploy the trained model to act as a REST web service using a simple gcloud call. To start, we'll check if our model and version already exists and if so, we'll delete them.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/
Explanation: We'll now deploy our model. This will take a few minutes. Once the cell below completes, you should be able to see your newly deployed model in the 'Models' portion of the AI Platform section of the GCP console.
Let's have a look at the contents of the exporter bucket to see which model binaries we have. We can deploy a model by specifying any of these locations. To make sure we grab the model trained from the most recent training job, we'll use tail -1
End of explanation
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)
echo "Deploying the model '$MODEL_NAME', version '$MODEL_VERSION' from $MODEL_LOCATION"
echo "... this will take a few minutes"
gcloud # TODO: Your code goes here
gcloud # TODO: Your code goes here
Explanation: Exercise 1
After completing the TODOs in the code cell below, we will be able to deploy our saved model to the cloud and make predictions. There are two TODOs below.
- For the first TODO, write a gcloud command to create a model called babyweight.
- In the second TODO, write a gcloud to create a version called ml_on_gcp.
Look up the Cloud AI-Platform documentation to remind yourself how to create these commands. You'll need to use the MODEL_NAME, MODEL_VERSION, MODEL_LOCATION, REGION and TFVERSION provided for you.
End of explanation
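One possible way to fill in the two TODOs above is sketched below. Treat it as a reference outline rather than the canonical answer, and double-check the flag spellings against the current gcloud ai-platform documentation before running it.
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)
# Create the model resource, then create a version pointing at the exported SavedModel.
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
    --model ${MODEL_NAME} \
    --origin ${MODEL_LOCATION} \
    --runtime-version ${TFVERSION}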
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = # TODO: Your code goes here
MODEL_VERSION = # TODO: Your code goes here
token = # TODO: Your code goes here
api = # TODO: Your code goes here
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
# TODO: Your code goes here
]
}
response = # TODO: Your code goes here
print(response.content)
Explanation: Use the deployed model to make online predictions
To make online predictions, we'll send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.
Exercise 2
In the cell below we'll make online predictions with the model we just deployed. In order to do that, we need to set up the right token and api to create the correct request post at the bottom. Complete the TODOs below. You will need to
- Specify the correct MODEL_NAME and MODEL_VERSION we want to use for prediction
- Use GoogleCredentials library to create an access token
- Create a variable called api which specifies the Google prediction API using the Project, model name, and model version
- Add an additional instance to prediction with the following properties
'is_male': 'Unknown',
'mother_age': 29.0,
'plurality': 'Multiple(2+)',
'gestation_weeks': 38
- Create a variable called response which will post a request to our model API to make a prediction
End of explanation
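A sketch of one way to complete the TODOs above is shown below. The credential call and the REST endpoint format follow the usual oauth2client / Cloud ML Engine online-prediction pattern, but verify them against the current documentation before relying on them.
MODEL_NAME = "babyweight"
MODEL_VERSION = "ml_on_gcp"
# Access token from the application-default credentials.
token = GoogleCredentials.get_application_default().get_access_token().access_token
# Online-prediction REST endpoint for this project/model/version.
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict".format(
    PROJECT, MODEL_NAME, MODEL_VERSION)
# The extra instance requested in the exercise; it can be appended to data["instances"]
# or listed directly inside the data dictionary above.
extra_instance = {
    "is_male": "Unknown",
    "mother_age": 29.0,
    "plurality": "Multiple(2+)",
    "gestation_weeks": 38
}
# ...and the request itself:
response = requests.post(api, json=data, headers=headers)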
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
Explanation: Use model for batch prediction
Batch prediction is commonly used when you want to make thousands to millions of predictions at a time. To perform batch prediction we'll create a file with one instance per line and submit the entire prediction job through a gcloud command.
To illustrate this, let's create a file inputs.json which has two instances on which we want to predict.
End of explanation
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil # TODO: Your code goes here
gsutil # TODO: Your code goes here
gcloud ai-platform # TODO: Your code goes here
--data-format= # TODO: Your code goes here
--region= # TODO: Your code goes here
--input-paths= # TODO: Your code goes here
--output-path= # TODO: Your code goes here
--model= # TODO: Your code goes here
--version= # TODO: Your code goes here
Explanation: Exercise 3
In the cells below we'll write the inputs.json file we just created to our Cloud storage bucket, then submit a batch prediction job to the cloud pointing at that file. We'll also need to specify the output location in GCS where we'd like the final predictions to be deposited. In the TODOs below, you will need to
- Use gsutil to copy the inputs.json file to the location specified by INPUT
- Use gsutil to clear out the directory specified by OUTPUT. This will ensure the only things in that location are our predictions
- Complete the gcloud command to submit a batch prediction job.
- Specify the values of all the arguments for the gcloud command
Have a look at the documentation for submitting batch predictions via gcloud to remind yourself of the format.
End of explanation
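For reference, a possible completion of the batch-prediction cell above might look like the following. The job name is made up here, and the exact flag values (for example TEXT vs. text for --data-format) should be confirmed against the gcloud documentation.
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babyweight_batchpred_$(date -u +%y%m%d_%H%M%S) \
    --data-format=TEXT \
    --region=${REGION} \
    --input-paths=$INPUT \
    --output-path=$OUTPUT \
    --model=babyweight \
    --version=ml_on_gcp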
!gsutil ls gs://$BUCKET/babyweight/batchpred/outputs
!gsutil cat gs://$BUCKET/babyweight/batchpred/outputs/prediction.results*
Explanation: Check the ML Engine jobs submitted to the GCP console to make sure the prediction job has completed, then let's have a look at the results of our predictions.
End of explanation
<END_TASK> |
106 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Visualize Glass Brain
Step1: 1. Upload all statistical maps into the data folder
The data folder can be found in the same folder as this notebook. Just drag and drop your NIfTI file into the data folder and press the upload button.
2. Specify relevant visualization parameters
Step2: 3. Run the visualization script
Step3: 4. Look at your data | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
Explanation: Visualize Glass Brain
End of explanation
stats_file = '../test_data/ALL_N95_Mean_cope2_thresh_zstat1.nii.gz'
view = 'ortho'
colormap = 'RdBu_r'
threshold = '2.3'
black_bg = False  # assumed value; the original assignment was cut off and this flag is not passed to the script below
Explanation: 1. Upload all statistical maps into the data folder
The data folder can be found in the same folder as this notebook. Just drag and drop your NIfTI file into the data folder and press the upload button.
2. Specify relevant visualization parameters
End of explanation
%run ../scripts/mni_glass_brain.py --cbar --display_mode $view --cmap $colormap --thr_abs $threshold $stats_file
Explanation: 3. Run the visualization script
End of explanation
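If you prefer to stay inside the notebook instead of shelling out to the script, a roughly equivalent call using nilearn directly might look like this (mni_glass_brain.py is assumed to wrap nilearn's glass-brain plotting, so the exact options it applies may differ):
from nilearn import plotting
display = plotting.plot_glass_brain(stats_file,
                                    display_mode=view,
                                    cmap=colormap,
                                    threshold=float(threshold),
                                    colorbar=True)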
from IPython.display import Image, display
from glob import glob as gg
outputs = gg('../test_data/*ortho.png')
for o in outputs:
a = Image(filename=o)
display(a)
from nilearn import plotting  # plotting is not imported above; nilearn is assumed here
plotting.plot_glass_brain??
Explanation: 4. Look at your data
End of explanation
<END_TASK> |
107 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Synchronous RVI (sRVI) on the 3-loop problem
Step1: For $\alpha=1$, sRVI does not converge on the (periodic) 3-loop problem.
Synchronous DVI (sDVI) on the 3-loop problem | <ASSISTANT_TASK:>
Python Code:
alphas = [1.0, 0.999, 0.99, 0.9, 0.7, 0.5, 0.3, 0.1, 0.01, 0.001]
max_iters = 50000
epsilon = 0.001
init_v = np.zeros(env.num_states())
init_r_bar_scalar = 0
convergence_flags = np.zeros(alphas.__len__())
for i, alpha in enumerate(alphas):
alg = RVI_Evaluation(env, init_v, alpha, ref_idx=0)
print(f'RVI Evaluation starting alpha:{alpha}', end=' ')
convergence = run_alg(alg, 'exec_sync', max_iters, epsilon)
print(f'Converged? {convergence}')
convergence_flags[i] = convergence
plt.figure(figsize=(15, 15))
plt.yticks(np.arange(alphas.__len__()), alphas)
plt.ylabel(r'$\alpha$', rotation=0, labelpad=20)
results = np.array([convergence_flags]).reshape(-1, 1)
plt.imshow(results, cmap='viridis', interpolation='nearest')
plt.colorbar()
plt.clim(0, 1)
plt.title('RVI Evaluation')
plt.show()
plt.close()
Explanation: Synchronous RVI (sRVI) on the 3-loop problem
End of explanation
betas = [1.0, 0.999, 0.99, 0.9, 0.7, 0.5, 0.3, 0.1, 0.01, 0.001]
convergence_flags = np.zeros((alphas.__len__(), betas.__len__()))
for i, alpha in enumerate(alphas):
for j, beta in enumerate(betas):
alg = DVI_Evaluation(env, init_v, init_r_bar_scalar, alpha, beta)
print(f'DVI Evaluation starting alpha:{alpha} beta:{beta}', end=' ')
convergence = run_alg(alg, 'exec_sync', max_iters, epsilon)
print(f'Converged? {convergence}')
convergence_flags[i][j] = convergence
plt.figure(figsize=(15, 15))
plt.yticks(np.arange(alphas.__len__()), alphas)
plt.ylabel(r'$\alpha$', rotation=0, labelpad=20)
plt.xlabel(r'$\beta$')
plt.xticks(np.arange(betas.__len__()), betas)
plt.imshow(convergence_flags, cmap='viridis', interpolation='nearest')
plt.colorbar()
plt.clim(0, 1)
plt.title('DVI Evaluation')
plt.show()
plt.close()
Explanation: For $\alpha=1$, sRVI does not converge on the (periodic) 3-loop problem.
Synchronous DVI (sDVI) on the 3-loop problem
End of explanation
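For readers who want to connect the plots to the algorithms, the synchronous evaluation updates have roughly the following shape. This is a schematic reconstruction only; the actual RVI_Evaluation and DVI_Evaluation classes are defined elsewhere in this project and may differ in details.
# Schematic, not the project's actual implementation.
# sRVI evaluation: subtract the value of a fixed reference state each sweep.
#   v(s) <- v(s) + alpha * (r(s) + sum_s' p(s'|s) * v(s') - v(s_ref) - v(s))
# sDVI evaluation: maintain an explicit average-reward estimate r_bar with its own step size.
#   delta(s) = r(s) - r_bar + sum_s' p(s'|s) * v(s') - v(s)
#   v(s)  <- v(s) + alpha * delta(s)
#   r_bar <- r_bar + beta * mean_over_s(delta(s))
# The separate beta step on r_bar is the intuition for why sDVI can converge at alpha = 1,
# where sRVI does not on this periodic problem.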
<END_TASK> |
108 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Transformée de Fourier
Step1: TODO
Step2: \begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(0) ~ dt \
& = & \frac{1}{\pi} \int^{\pi}{-\pi} \cos(t) ~ dt \
& = & \frac{1}{\pi} \times 8\pi\
& = & 8\
\end{eqnarray}
Step3: \begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(t) ~ dt \
& = & \frac{1}{\pi} \times \pi \
& = & 1 \
\end{eqnarray}
Step4: \begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \sin(t) ~ dt \
& = & \frac{1}{\pi} \times 2\pi \
& = & 2 \
\end{eqnarray}
Step5: \begin{eqnarray}
a_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(2t) ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
Step6: \begin{eqnarray}
b_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \sin(2t) ~ dt \
& = & \frac{1}{\pi} \times -3\pi \
& = & -3 \
\end{eqnarray}
Les coefficients $a_3$, $b_3$, $a_4$, $b_4$, etc. sont tous nuls.
Vérification
On a bien
Step7: \begin{eqnarray}
c_0 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{0} ~ dt \
& = & \frac{1}{\pi} \times 8\pi\
& = & 8\
\end{eqnarray}
Step8: \begin{eqnarray}
c_1 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-it} ~ dt \
& = & \frac{1}{\pi} \times \pi - 2i\pi\
& = & 1 - 2i\
\end{eqnarray}
Step9: \begin{eqnarray}
c_{-1} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{it} ~ dt \
& = & \frac{1}{\pi} \times \pi + 2i\pi\
& = & 1 + 2i\
\end{eqnarray}
Step10: \begin{eqnarray}
c_2 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-2it} ~ dt \
& = & \frac{1}{\pi} \times 3i\pi\
& = & 3i\
\end{eqnarray}
Step11: \begin{eqnarray}
c_{-2} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{2it} ~ dt \
& = & \frac{1}{\pi} \times -3i\pi\
& = & -3i\
\end{eqnarray}
Step12: \begin{eqnarray}
c_3 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-3it} ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0\
\end{eqnarray}
Step13: \begin{eqnarray}
c_{-3} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{3it} ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
Les coefficients $c_4$, $b_-4$, $a_5$, $b_-5$, etc. sont tous nuls.
Vérification
On a bien | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (6, 6)
import math
import cmath # math functions for complex numbers
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
from ipywidgets import interact
import sympy as sp
# See: http://docs.sympy.org/latest/tutorial/printing.html
sp.init_printing()
t = sp.symbols("t")
Explanation: Transformée de Fourier
End of explanation
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(0), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
Explanation: TODO:
* https://cdn.uclouvain.be/public/Exports%20reddot/fsa/documents/Travail_Complet.pdf <- semble très intéressant !
* Exemples en notation exponentielle
* Cas d'un signal 2D
* Discrete Fourier Transform + algo pour implémentation concrète (du calcul des coefs) en Python
* FFT
The following notations come from the following book: Toutes les mathématiques et les bases de l'informatique, Horst Stöcker (Dunod, 2002)
Cas simple: fonction périodique de période $2\pi$
Définitions
Série de Fourier
$$
f(t) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [a_n \cos(n t) + b_n \sin(n t)]
$$
Calcul des coefficients de Fourier
\begin{eqnarray}
\forall n \in \mathbb{N}, ~~~~~ a_n & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
\forall n \in \mathbb{N}^, ~~~~~ b_n & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \
\end{eqnarray*}
Cf. section \ref{sec:exemples} pour des exemples de calculs de coefficients de Fourier.
Rappel: $\mathbb{N}^*$ est l'ensemble $\mathbb{N}$ privé de $0$.
Condition de Dirichlet
...
rem: si on avait que des cos (ou que des sin) dans la série de fourier, elle ne pourrait coder définir que des fonctions qui s'annulent tous les pi/2...
Questions / notes / remarques
Que représente le terme $\frac{a_0}{2}$ dans la définition de la série de Fourier ?
...
Que représente le facteur $\frac{1}{\pi}$ dans les coeffients de Fourier ?
...
Cas général: fonction périodique de période $T$
Définition
Série de Fourier
$$
f(t) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [a_n \cos(\omega n t) + b_n \sin(\omega n t)]
$$
Calcul des coefficients de Fourier
\begin{eqnarray}
\forall n \in \mathbb{N}, ~~~~~ a_n & = & \frac{2}{T} \int^{T/2}_{-T/2} f(t) \cos(\omega n t) ~ dt \
\forall n \in \mathbb{N}^, ~~~~~ b_n & = & \frac{2}{T} \int^{T/2}_{-T/2} f(t) \sin(\omega n t) ~ dt \
\end{eqnarray*}
Représentation spectrale
TODO: rendre plus claire la définition suivante...
Série de Fourier
$$
f(t) = \frac{a_0}{2} + \sum^{\infty}_{n=1} [A_n \sin(\omega n t + \phi_n)]
$$
avec
$$A_n = \sqrt{a^2_n + b^2_n} \quad \text{ et } \quad \tan \phi_n = \frac{a_n}{b_n}$$
rem:
ne fait qu'écrire autrement les coefs (passage d'une notation carthésienne à une notation polaire)...
* $a_1$ et $b_1$ en $A_1$ et $\phi_1$,
* $a_2$ et $b_2$ en $A_2$ et $\phi_2$,
* ...
Pour chaque $n$, les coefs $a_n$ et $b_n$ peuvent être vus comme un point dans le plan.
Ces points peuvent aussi être définis avec une notation polaire (c'est ce qui est fait ici): $A_n$ et $\phi_n$.
Qu'est-ce que ça apporte d'un point de vue pratique ? Il serait intéressant de visualiser la différence entre ces deux espaces de coefficients... Le filtrage d'un signal est peut-être plus facile dans le 2e ?
Représentation complexe (notation exponentielle)
TODO: rendre plus claire la définition suivante...
Définitions pour une fonction $2\pi$ periodique
Série de Fourier
$$
f(t) = \sum^{\infty}_{n=\color{red}{-\infty}} c_n e^{i n t}
$$
Calcul des coefficients de Fourier
$$
c_n = \frac{1}{2\pi} \int^{\pi}_{-\pi} f(t) e^{-i n t} dt
$$
Définitions pour une fonction $T$ periodique
Série de Fourier
$$
f(t) = \sum^{\infty}_{n=\color{red}{-\infty}} c_n e^{i\omega n t}
$$
Calcul des coefficients de Fourier
$$
c_n = \frac{1}{T} \int^{T/2}_{-T/2} f(t) e^{-i \omega n t} dt
$$
Relation entre les coefficients $a_n$, $b_n$ et $c_n$
$$
c_n =
\left{
\begin{align}
\frac{a_0}{2} & \quad n = 0 \
\
\frac{a_n - ib_n}{2} & \quad n > 0 \
\
\frac{a_{-n} - ib_{-n}}{2} & \quad n < 0
\end{align}
\right.
$$
ou
$$
\left.
\begin{align}
a_n & = c_n + c_{-n} \
b_n & = i(c_n - c_{-n})
\end{align}
\right}
n > 0
$$
Les valeurs $\omega_n = \omega n$ sont appelées le spectre de $f(t)$.
rem:
ne fait que regrouper les coefs
* $a_1$ et $b_1$ dans un nombre complexe $z_1$,
* $a_2$ et $b_2$ dans un nombre complexe $z_2$,
* ...
Pour chaque $n$, les coefs $a_n$ et $b_n$ peuvent être vus comme un point dans le plan et donc comme un nombre complexe.
Du coup, ces points peuvent aussi être définis avec une notation exponentielle.
motivation: "ça simplifie les calculs et les notations"
Conclusion
TODO...
Synthèse
Définitions à connaitre:
* Série de Fourier
* Coefficients de Fourier
* Condition de Dirichlet
Annexes
Rappels de maths
TODO: add plots...
\begin{eqnarray}
\forall c \in \mathbb{N}^, \int^{\pi}{-\pi} c ~ dt & = & c \times 2\pi \
\forall c \in \mathbb{N}^, \int^{\pi}_{-\pi} c \cos(t) ~ dt & = & 0 \
\forall c \in \mathbb{N}^, \int^{\pi}{-\pi} c \sin(t) ~ dt & = & 0 \
\end{eqnarray*}
$$
\forall m \in \mathbb{N}^, n \in \mathbb{N}^,
\int^{\pi}_{-\pi} \cos(n ~ t) \cos(m ~ t) ~ dt =
\left{
\begin{array}{l l}
\pi & \quad \text{si $n = m$}\
0 & \quad \text{si $n \neq m$}
\end{array} \right.
$$
Par exemple:
\begin{eqnarray}
\int^{\pi}{-\pi} \cos(t) \cos(t) ~ dt & = & \pi \
\int^{\pi}{-\pi} \sin(t) \sin(t) ~ dt & = & \pi \
\int^{\pi}{-\pi} \cos(t) \sin(t) ~ dt & = & 0 \
\int^{\pi}{-\pi} \cos(t) \cos(2t) ~ dt & = & 0 \
\int^{\pi}{-\pi} \cos(t) \sin(2t) ~ dt & = & 0 \
\int^{\pi}{-\pi} \cos(2t) \cos(2t) ~ dt & = & \pi \
\int^{\pi}_{-\pi} \sin(2t) \sin(2t) ~ dt & = & \pi \
\end{eqnarray}
Exemples détaillés
Calcule des coefficients de Fourier pour la fonction $f(t) = 3$
\begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}{-\pi} 3 \cos(0) ~ dt \
& = & \frac{1}{\pi} \int^{\pi}{-\pi} 3 ~ dt \
& = & \frac{1}{\pi} \times 3 \times 2 \pi \
& = & 6 \
\end{eqnarray}
\begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} 3 \cos(t) ~ dt \
& = & 0 \
\end{eqnarray}
\begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} 3 \sin(t) ~ dt \
& = & 0 \
\end{eqnarray}
De même, les coefficients $a_3$, $b_3$, $a_4$, $b_4$, etc. sont tous nuls.
Vérification
On a bien:
\begin{eqnarray}
f(t) & = & \frac{a_0}{2} + \sum^{\infty}_{n=1} (a_n \cos(n t) + b_n \sin(n t)) \
& = & \frac{6}{2} + 0 \times \cos(t) + 0 \times \sin(t) + 0 \times \cos(2t) + 0 \times \sin(2t) + ... \
& = & 3 \
\end{eqnarray}
Calcule des coefficients de Fourier pour la fonction $f(t) = \cos(t)$
TODO: add plots...
\begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}{-\pi} \cos(t) \cos(0) ~ dt \
& = & \frac{1}{\pi} \int^{\pi}{-\pi} \cos(t) ~ dt \
& = & 0 \
\end{eqnarray}
\begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \cos(t) ~ dt \
& = & \frac{1}{\pi} \times \pi \
& = & 1 \
\end{eqnarray}
\begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \sin(t) ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
\begin{eqnarray}
a_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \cos(2t) ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
\begin{eqnarray}
b_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} \cos(t) \sin(2t) ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
De même, les coefficients $a_3$, $b_3$, $a_4$, $b_4$, etc. sont tous nuls.
Vérification
On a bien:
\begin{eqnarray}
f(t) & = & \frac{a_0}{2} + \sum^{\infty}_{n=1} (a_n \cos(n t) + b_n \sin(n t)) \
& = & \frac{0}{2} + 1 \times \cos(t) + 0 \times \sin(t) + 0 \times \cos(2t) + 0 \times \sin(2t) + ... \
& = & \cos(t) \
\end{eqnarray}
Calcule des coefficients de Fourier pour la fonction $f(t) = 3 \cos(t)$
\begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
& = & ... \
\end{eqnarray}
\begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
& = & ... \
\end{eqnarray}
\begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \
& = & ... \
\end{eqnarray}
Calcule des coefficients de Fourier pour la fonction $f(t) = \cos(t) + 3$
\begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
& = & ... \
\end{eqnarray}
\begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
& = & ... \
\end{eqnarray}
\begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \
& = & ... \
\end{eqnarray}
Calcule des coefficients de Fourier pour la fonction $f(t) = \cos(t + 3)$
TODO: ???
\begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
& = & ... \
\end{eqnarray}
\begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \cos(n t) ~ dt \
& = & ... \
\end{eqnarray}
\begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} f(t) \sin(n t) ~ dt \
& = & ... \
\end{eqnarray}
Calcule des coefficients de Fourier pour la fonction $f(t) = \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4$
TODO: add plots...
End of explanation
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
Explanation: \begin{eqnarray}
a_0 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(0) ~ dt \
& = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) ~ dt \
& = & \frac{1}{\pi} \times 8\pi\
& = & 8\
\end{eqnarray}
End of explanation
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.sin(t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
Explanation: \begin{eqnarray}
a_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(t) ~ dt \
& = & \frac{1}{\pi} \times \pi \
& = & 1 \
\end{eqnarray}
End of explanation
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.cos(2*t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
Explanation: \begin{eqnarray}
b_1 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \sin(t) ~ dt \
& = & \frac{1}{\pi} \times 2\pi \
& = & 2 \
\end{eqnarray}
End of explanation
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.sin(2*t), (t, -sp.pi, sp.pi))
sp.Eq(integ, integ.doit())
Explanation: \begin{eqnarray}
a_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \cos(2t) ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
End of explanation
sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4
n = 0
sp.plot(sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4, (t, -sp.pi, sp.pi));
import mpmath
n = 2
mpmath.cplot(lambda t: mpmath.exp(-mpmath.j * n * t), points=100000)
n = 0
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
b_2 & = & \frac{1}{\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) \sin(2t) ~ dt \
& = & \frac{1}{\pi} \times -3\pi \
& = & -3 \
\end{eqnarray}
Les coefficients $a_3$, $b_3$, $a_4$, $b_4$, etc. sont tous nuls.
Vérification
On a bien:
\begin{eqnarray}
f(t) & = & \frac{a_0}{2} + \sum^{\infty}_{n=1} (a_n \cos(n t) + b_n \sin(n t)) \
& = & \frac{8}{2} + 1 \times \cos(t) + 2 \times \sin(t) + 0 \times \cos(2t) + (-3) \times \sin(2t) + \dots \
& = & \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4 \
\end{eqnarray}
Calcule des coefficients de Fourier pour la fonction $f(t) = \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4$ en utilisant la notation exponentielle
TODO: add plots...
TODO
$$
c_n = \frac{1}{2\pi} \int^{\pi}_{-\pi} f(t) e^{-i n t} dt
$$
End of explanation
n = 1
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
c_0 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{0} ~ dt \
& = & \frac{1}{\pi} \times 8\pi\
& = & 8\
\end{eqnarray}
End of explanation
n = -1
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
c_1 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-it} ~ dt \
& = & \frac{1}{\pi} \times \pi - 2i\pi\
& = & 1 - 2i\
\end{eqnarray}
End of explanation
n = 2
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
c_{-1} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{it} ~ dt \
& = & \frac{1}{\pi} \times \pi + 2i\pi\
& = & 1 + 2i\
\end{eqnarray}
End of explanation
n = -2
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
c_2 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-2it} ~ dt \
& = & \frac{1}{\pi} \times 3i\pi\
& = & 3i\
\end{eqnarray}
End of explanation
n = 3
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
c_{-2} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{2it} ~ dt \
& = & \frac{1}{\pi} \times -3i\pi\
& = & -3i\
\end{eqnarray}
End of explanation
n = -3
integ = sp.Integral((sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2*t) + 4) * sp.exp(-sp.I * n * t), (t, -sp.pi, sp.pi))
integ_res = integ.doit()
sp.Eq(integ, integ_res)
sp.simplify(integ_res / sp.pi)
Explanation: \begin{eqnarray}
c_3 & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{-3it} ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0\
\end{eqnarray}
End of explanation
eq1 = sp.simplify(8 * sp.exp(0) \
+ 1 - 2 * sp.I * sp.exp(-sp.I * t) \
+ 1 + 2 * sp.I * sp.exp( sp.I * t) \
+ 3 * sp.I * sp.exp(-2* sp.I * t) \
- 3 * sp.I * sp.exp( 2* sp.I * t) )
eq1
sp.simplify(eq1 - sp.cos(t) + 2 * sp.sin(t) - 3 * sp.sin(2 * t) + 4)
Explanation: \begin{eqnarray}
c_{-3} & = & \frac{1}{2\pi} \int^{\pi}_{-\pi} (\cos(t) + 2 \sin(t) - 3 \sin(2t) + 4) e^{3it} ~ dt \
& = & \frac{1}{\pi} \times 0 \
& = & 0 \
\end{eqnarray}
Les coefficients $c_4$, $c_{-4}$, $c_5$, $c_{-5}$, etc. sont tous nuls.
Vérification
On a bien:
\begin{eqnarray}
f(t) & = & \sum^{\infty}_{n=-\infty} c_n e^{i n t} \
& = & \underbrace{8 \times e^{0}} + \underbrace{1-2i \times e^{-it}} + \underbrace{1+2i \times e^{it}} + \underbrace{3i \times e^{-2it}} + \underbrace{-3i \times e^{2it}} + \dots \
& = & TODO !!!!!!! \
& = & \cos(t) + 2 \sin(t) - 3 \sin(2t) + 4 \
\end{eqnarray}
End of explanation
<END_TASK> |
109 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Create Some Fake Entires That demonstrates Plotting
In the constructor, give a title, number of total datasets that you want to plot side by side, a list of methods for which you are plotting results. logit_y will logit transforms the y axis for emphasis on low and high end part of the metric. eval_type simply is for naming purposes, and will appear in y-axis titles.
In the following block we just create fake SNR and metrics just fo demonstration purposes.
If you want to compute SNR of a templates (np.ndarray of shape (# time samples, # channels, # units)) just call main_channels(templates).
Step1: Generate SNR vs Metric
Change save_to to a file path to save it to file.
Step2: Generate the curve plots
Similar to the other plot, give path to file in save_to for saving to file. | <ASSISTANT_TASK:>
Python Code:
plot = ChristmasPlot('Fake', n_dataset=3, methods=['yass', 'kilosort', 'spyking circus'], logit_y=True, eval_type="Accuracy")
for method in plot.methods:
for i in range(plot.n_dataset):
x = (np.random.rand(30) - 0.5) * 10
y = 1 / (1 + np.exp(-x + np.random.rand()))
plot.add_metric(x, y, dataset_number=i, method_name=method)
Explanation: Create Some Fake Entries That Demonstrate Plotting
In the constructor, give a title, the number of total datasets that you want to plot side by side, and a list of methods for which you are plotting results. logit_y will logit-transform the y axis for emphasis on the low and high ends of the metric. eval_type is simply for naming purposes, and will appear in y-axis titles.
In the following block we just create fake SNR values and metrics for demonstration purposes.
If you want to compute the SNR of templates (an np.ndarray of shape (# time samples, # channels, # units)), just call main_channels(templates).
End of explanation
plot.generate_snr_metric_plot(save_to=None)
Explanation: Generate SNR vs Metric
Change save_to to a file path to save it to file.
End of explanation
plot.generate_curve_plots(save_to=None)
Explanation: Generate the curve plots
Similar to the other plot, give path to file in save_to for saving to file.
End of explanation
<END_TASK> |
110 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to Tethne
Step1: Once again, read() accepts a string containing a path to either a single DfR dataset, or a directory containing several. Here, "DfR dataset" refers to the folder containing the file "citations.xml", and the contents of that folder.
This will take considerably more time than loading a WoS dataset. The reason is that Tethne automatically detects and parses all of the wordcount data.
Step2: Combining DfR and WoS data
We can combine our datasets using the merge() function. First, we load our WoS data in a separate Corpus
Step3: Both of these datasets are for the Journal of the History of Biology. But note that the WoS and DfR corpora have different numbers of Papers
Step4: Then import merge() from tethne.readers
Step5: We then create a new Corpus by passing both Corpus objects to merge(). If there is conflicting information in the two corpora, the first Corpus gets priority.
Step6: merge() has combined data where possible, and discarded any duplicates in the original datasets.
Step7: FeatureSets
Our wordcount data are represented by a FeatureSet. A FeatureSet is a description of how certain sets of elements are distributed across a Corpus. This is kind of like an inversion of an index. For example, we might be interested in which words (elements) are found in which Papers. We can think of authors as a FeatureSet, too.
All of the available FeatureSets are available in the features attribute (a dictionary) of our Corpus. We can see the available FeatureSets by inspecting its
Step8: Note that citations and authors are also FeatureSets. In fact, the majority of network-building functions in Tethne operate on FeatureSets -- including the coauthors() and bibliographic_coupling() functions that we used in the WoS notebook.
Each FeatureSet has several attributes. The features attribute contains the distribution data itself. These data themselves are (element, value) tuples. In this case, the elements are words, and the values are wordcounts.
Step9: The index contains our "vocabulary"
Step10: We can use the feature_distribution() method of our Corpus to look at the distribution of words over time. In the example below I used MatPlotLib to visualize the distribution.
Step11: If we add the argument mode='documentCounts', we get the number of documents in which 'evolutionary' occurs.
Step12: Note that we can look how documents themselves are distributed using the distribution() method.
Step13: So, putting these together, we can normalize our feature_distribution() data to get a sense of the relative use of the word 'evolution'.
Step14: Topic Modeling with DfR wordcounts
Latent Dirichlet Allocation is a popular approach to discovering latent "topics" in large corpora. Many digital humanists use a software package called MALLET to fit LDA to text data. Tethne uses MALLET to fit LDA topic models.
Before we use LDA, however, we need to do some preprocessing. "Preprocessing" refers to anything that we do to filter or transform our FeatureSet prior to analysis.
Pre-processing
Two important preprocessing steps are
Step15: We then need to define what elements to keep, and what elements to discard. We will use a function that will evaluate whether or not a word is in our stoplist. The function should take three arguments
Step16: We apply the stoplist using the transform() method. FeatureSets are not modified in place; instead, a new FeatureSet is generated that reflects the specified changes. We'll call the new FeatureSet 'wordcounts_filtered'.
Step17: There should be significantly fewer words in our new "wordcounts_filtered" FeatureSet.
Step18: The LDA topic model
Tethne provides a class called LDAModel. You should be able to import it directly from the tethne package
Step19: Now we'll create a new LDAModel for our Corpus. The featureset_name parameter tells the LDAModel which FeatureSet we want to use. We'll use our filtered wordcounts.
Step20: Next we'll fit the model. We need to tell MALLET how many topics to fit (the hyperparameter Z), and how many iterations (max_iter) to perform. This step may take a little while, depending on the size of your corpus.
Step21: You can inspect the inferred topics using the model's print_topics() method. By default, this will print the top ten words for each topic.
Step22: We can also look at the representation of a topic over time using the topic_over_time() method. In the example below we'll print the first five of the topics on the same plot.
Step23: Generating networks from topic models
The features module in the tethne.networks subpackage contains some useful methods for visualizing topic models as networks. You can import it just like the authors or papers modules.
Step24: The terms function generates a network of words connected on the basis of shared affinity with a topic. If two words i and j are both associated with a topic z with $\Phi(i|z) >= 0.01$ and $\Phi(j|z) >= 0.01$, then an edge is drawn between them.
Step25: | <ASSISTANT_TASK:>
Python Code:
from tethne.readers import dfr
Explanation: Introduction to Tethne: Working with data from the Web of Science
Now that we have the basics down, in this notebook we'll begin working with data from the JSTOR Data-for-Research (DfR) portal.
The JSTOR DfR portal gives researchers access to
bibliographic data and N-grams for the entire JSTOR database.
Tethne can use DfR data to generate coauthorship networks, and to improve
metadata for Web of Science records. Tethne is also able to use
N-gram counts to add information to networks, and can interface with MALLET to perform LDA topic modeling.
Methods in Digital & Computational Humanities
This notebook is part of a cluster of learning resources developed by the Laubichler Lab and the Digital Innovation Group at Arizona State University as part of an initiative for digital and computational humanities (d+cH). For more information, see our evolving online methods course at https://diging.atlassian.net/wiki/display/DCH.
Getting Help
Development of the Tethne project is led by Erick Peirson. To get help, first check our issue tracking system on GitHub. There, you can search for questions and problems reported by other users, or ask a question of your own. You can also reach Erick via e-mail at erick.peirson@asu.edu.
Getting bibliographic data from JSTOR Data-for-Research
For the purpose of this tutorial, you can use the sample dataset from https://www.dropbox.com/s/q2jy87pmy9r6bsa/tethne_workshop_data.zip?dl=0.
Access the DfR portal at http://dfr.jstor.org/ If you don't already have an
account, you will need to create a new account.
After you've logged in, perform a search using whatever criteria you please.
When you have achieved the result that you desire, create a new dataset request.
Under the "Dataset Request" menu in the upper-right corner of the page, click
"Submit new request".
On the Download Options page, select your desired Data Type. If you do
not intend to make use of the contents of the papers themselves, then "Citations
Only" is sufficient. Otherwise, choose word counts, bigrams, etc.
Output Format should be set to XML.
Give your request a title, and set the maximum number of articles. Note that
the maximum documents allowed per request is 1,000. Setting Maximum Articles
to a value less than the number of search results will yield a random sample of
your results.
Your request should now appear in your list of Data Requests. When your
request is ready (hours to days later), you will receive an e-mail with a
download link. When downloading from the Data Requests list, be sure to use
the link in the full dataset column.
When your dataset download is complete, unzip it. The contents should look
something like those shown below.
citations.XML contains bibliographic data in XML format. The bigrams,
trigrams, wordcounts folders contain N-gram counts for each document.
If you were to open one of the XML files in the wordcounts folder, say, you would see some XML that looks like this:
```
<?xml version="1.0" encoding="UTF-8"?>
<article id="10.2307/4330482" >
<wordcount weight="21" > of </wordcount>
<wordcount weight="16" > the </wordcount>
<wordcount weight="10" > university </wordcount>
<wordcount weight="10" > a </wordcount>
<wordcount weight="9" > s </wordcount>
<wordcount weight="9" > d </wordcount>
<wordcount weight="9" > harvard </wordcount>
<wordcount weight="8" > m </wordcount>
<wordcount weight="7" > and </wordcount>
<wordcount weight="6" > u </wordcount>
<wordcount weight="6" > press </wordcount>
<wordcount weight="5" > cambridge </wordcount>
<wordcount weight="5" > massachusetts </wordcount>
<wordcount weight="5" > journal </wordcount>
<wordcount weight="4" > by </wordcount>
...
<wordcount weight="1" > stephen </wordcount>
<wordcount weight="1" > liver </wordcount>
<wordcount weight="1" > committee </wordcount>
<wordcount weight="1" > school </wordcount>
<wordcount weight="1" > lewontin </wordcount>
<wordcount weight="1" > canguilhem </wordcount>
<wordcount weight="1" > assistant </wordcount>
<wordcount weight="1" > jay </wordcount>
<wordcount weight="1" > state </wordcount>
<wordcount weight="1" > morgan </wordcount>
<wordcount weight="1" > advertising </wordcount>
<wordcount weight="1" > animal </wordcount>
<wordcount weight="1" > is </wordcount>
<wordcount weight="1" > species </wordcount>
<wordcount weight="1" > claude </wordcount>
<wordcount weight="1" > review </wordcount>
<wordcount weight="1" > hunt </wordcount>
<wordcount weight="1" > founder </wordcount>
</article>
```
Each word is represented by a <wordcount></wordcount> tag. The "weight" attribute gives the number of times that the word occurs in the document, and the word itself is between the tags. We'll come back to this in just a moment.
Parsing DfR datasets
Just as for WoS data, there is a module in tethne.readers for working with DfR data. We can import it with:
End of explanation
dfr_corpus = dfr.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/DfR')
Explanation: Once again, read() accepts a string containing a path to either a single DfR dataset, or a directory containing several. Here, "DfR dataset" refers to the folder containing the file "citations.xml", and the contents of that folder.
This will take considerably more time than loading a WoS dataset. The reason is that Tethne automatically detects and parses all of the wordcount data.
End of explanation
from tethne.readers import wos
wos_corpus = wos.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/wos')
Explanation: Combining DfR and WoS data
We can combine our datasets using the merge() function. First, we load our WoS data in a separate Corpus:
End of explanation
len(dfr_corpus), len(wos_corpus)
Explanation: Both of these datasets are for the Journal of the History of Biology. But note that the WoS and DfR corpora have different numbers of Papers:
End of explanation
from tethne.readers import merge
Explanation: Then import merge() from tethne.readers:
End of explanation
corpus = merge(dfr_corpus, wos_corpus)
Explanation: We then create a new Corpus by passing both Corpus objects to merge(). If there is conflicting information in the two corpora, the first Corpus gets priority.
End of explanation
len(corpus)
Explanation: merge() has combined data where possible, and discarded any duplicates in the original datasets.
End of explanation
corpus.features
Explanation: FeatureSets
Our wordcount data are represented by a FeatureSet. A FeatureSet is a description of how certain sets of elements are distributed across a Corpus. This is kind of like an inversion of an index. For example, we might be interested in which words (elements) are found in which Papers. We can think of authors as a FeatureSet, too.
All of the available FeatureSets are available in the features attribute (a dictionary) of our Corpus. We can see the available FeatureSets by inspecting its:
End of explanation
corpus.features['wordcounts'].features.items()[0] # Just show data for the first Paper.
Explanation: Note that citations and authors are also FeatureSets. In fact, the majority of network-building functions in Tethne operate on FeatureSets -- including the coauthors() and bibliographic_coupling() functions that we used in the WoS notebook.
Each FeatureSet has several attributes. The features attribute contains the distribution data itself. These data themselves are (element, value) tuples. In this case, the elements are words, and the values are wordcounts.
End of explanation
print 'There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index)
Explanation: The index contains our "vocabulary":
End of explanation
plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary')) # <-- The action.
plt.ylabel('Frequency of the word ``evolutionary`` in this Corpus')
plt.xlabel('Publication Date')
plt.show()
Explanation: We can use the feature_distribution() method of our Corpus to look at the distribution of words over time. In the example below I used MatPlotLib to visualize the distribution.
End of explanation
plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts')) # <-- The action.
plt.ylabel('Documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
Explanation: If we add the argument mode='documentCounts', we get the number of documents in which 'evolutionary' occurs.
End of explanation
plt.figure(figsize=(10, 5))
plt.bar(*corpus.distribution()) # <-- The action.
plt.ylabel('Number of Documents')
plt.xlabel('Publication Date')
plt.show()
Explanation: Note that we can look how documents themselves are distributed using the distribution() method.
End of explanation
dates, N_evolution = corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts')
dates, N = corpus.distribution()
normalized_frequency = [f/N[i] for i, f in enumerate(N_evolution)]
plt.figure(figsize=(10, 5))
plt.bar(dates, normalized_frequency) # <-- The action.
plt.ylabel('Proportion of documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
Explanation: So, putting these together, we can normalize our feature_distribution() data to get a sense of the relative use of the word 'evolution'.
End of explanation
from nltk.corpus import stopwords
stoplist = stopwords.words()
Explanation: Topic Modeling with DfR wordcounts
Latent Dirichlet Allocation is a popular approach to discovering latent "topics" in large corpora. Many digital humanists use a software package called MALLET to fit LDA to text data. Tethne uses MALLET to fit LDA topic models.
Before we use LDA, however, we need to do some preprocessing. "Preprocessing" refers to anything that we do to filter or transform our FeatureSet prior to analysis.
Pre-processing
Two important preprocessing steps are:
1. Removing "stopwords" -- common words like "the", "and", "but", "for", that don't yield much insight into the contents of documents.
2. Removing words that are too common or too rare. These include typos or OCR artifacts.
We can do both of these by using the transform() method on our FeatureSet.
First, we need a stoplist. NLTK provides a great stoplist.
End of explanation
def apply_stoplist(f, v, c, dc):
if f in stoplist or dc > 500 or dc < 3 or len(f) < 4:
return None # Discard the element.
return v
Explanation: We then need to define what elements to keep, and what elements to discard. We will use a function that will evaluate whether or not a word is in our stoplist. The function should take three arguments:
f -- the feature itself (the word)
v -- the number of instances of that feature in a specific document
c -- the number of instances of that feature in the whole FeatureSet
dc -- the number of documents that contain that feature
This function will be applied to each word in each document. If it returns 0 or None, the word will be excluded. Otherwise, it should return a numeric value (in this case, the count for that document).
In addition to applying the stoplist, we'll also exclude any word that occurs in more than 500 of the documents and less than 3 documents, and is less than 4 characters in length.
End of explanation
corpus.features['wordcounts_filtered'] = corpus.features['wordcounts'].transform(apply_stoplist)
Explanation: We apply the stoplist using the transform() method. FeatureSets are not modified in place; instead, a new FeatureSet is generated that reflects the specified changes. We'll call the new FeatureSet 'wordcounts_filtered'.
End of explanation
print 'There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index)
print 'There are %i words in the wordcounts_filtered featureset' % len(corpus.features['wordcounts_filtered'].index)
Explanation: There should be significantly fewer words in our new "wordcounts_filtered" FeatureSet.
End of explanation
from tethne import LDAModel
Explanation: The LDA topic model
Tethne provides a class called LDAModel. You should be able to import it directly from the tethne package:
End of explanation
model = LDAModel(corpus, featureset_name='wordcounts_filtered')
Explanation: Now we'll create a new LDAModel for our Corpus. The featureset_name parameter tells the LDAModel which FeatureSet we want to use. We'll use our filtered wordcounts.
End of explanation
model.fit(Z=50, max_iter=500)
Explanation: Next we'll fit the model. We need to tell MALLET how many topics to fit (the hyperparameter Z), and how many iterations (max_iter) to perform. This step may take a little while, depending on the size of your corpus.
End of explanation
model.print_topics()
Explanation: You can inspect the inferred topics using the model's print_topics() method. By default, this will print the top ten words for each topic.
End of explanation
plt.figure(figsize=(15, 5))
for k in xrange(5): # Generates numbers k in [0, 4].
x, y = model.topic_over_time(k) # Gets topic number k.
plt.plot(x, y, label='topic {0}'.format(k), lw=2, alpha=0.7)
plt.legend(loc='best')
plt.show()
Explanation: We can also look at the representation of a topic over time using the topic_over_time() method. In the example below we'll print the first five of the topics on the same plot.
End of explanation
from tethne.networks import topics
Explanation: Generating networks from topic models
The features module in the tethne.networks subpackage contains some useful methods for visualizing topic models as networks. You can import it just like the authors or papers modules.
End of explanation
termGraph = topics.terms(model, threshold=0.01)
termGraph.order(), termGraph.size()
termGraph.name = ''
from tethne.writers.graph import to_graphml
to_graphml(termGraph, '/Users/erickpeirson/Desktop/topic_terms.graphml')
Explanation: The terms function generates a network of words connected on the basis of shared affinity with a topic. If two words i and j are both associated with a topic z with $\Phi(i|z) >= 0.01$ and $\Phi(j|z) >= 0.01$, then an edge is drawn between them.
End of explanation
topicCoupling = topics.topic_coupling(model, threshold=0.2)
print '%i nodes and %i edges' % (topicCoupling.order(), topicCoupling.size())
to_graphml(topicCoupling, '/Users/erickpeirson/Desktop/lda_topicCoupling.graphml')
Explanation:
End of explanation
<END_TASK> |
111 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Title
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Create Logistic Regression
Step4: Train Logistic Regression
Step5: Create Previously Unseen Observation
Step6: Predict Class Of Observation
Step7: View Predicted Probabilities | <ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
Explanation: Title: Logistic Regression
Slug: logistic_regression
Summary: How to train a logistic regression in scikit-learn.
Date: 2017-09-21 12:00
Category: Machine Learning
Tags: Logistic Regression
Authors: Chris Albon
Despite having "regression" in its name, a logistic regression is actually a widely used binary classifier (i.e. the target vector can only take two values). In a logistic regression, a linear model (e.g. $\beta_{0}+\beta_{1}x$) is included in a logistic (also called sigmoid) function, ${\frac {1}{1+e^{-z}}}$, such that:
$$P(y_i=1 \mid X)={\frac {1}{1+e^{-(\beta_{0}+\beta_{1}x)}}}$$
where $P(y_i=1 \mid X)$ is the probability of the $i$th observation's target value, $y_i$, being class 1, $X$ is the training data, $\beta_0$ and $\beta_1$ are the parameters to be learned, and $e$ is Euler's number.
Preliminaries
End of explanation
# Load data with only two classes
iris = datasets.load_iris()
X = iris.data[:100,:]
y = iris.target[:100]
Explanation: Load Iris Flower Dataset
End of explanation
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
Explanation: Standardize Features
End of explanation
# Create logistic regression object
clf = LogisticRegression(random_state=0)
Explanation: Create Logistic Regression
End of explanation
# Train model
model = clf.fit(X_std, y)
Explanation: Train Logistic Regression
End of explanation
# Create new observation
new_observation = [[.5, .5, .5, .5]]
Explanation: Create Previously Unseen Observation
End of explanation
# Predict class
model.predict(new_observation)
Explanation: Predict Class Of Observation
End of explanation
# View predicted probabilities
model.predict_proba(new_observation)
Explanation: View Predicted Probabilities
End of explanation
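# A small added sketch to make the output explicit: the two columns returned by
# predict_proba correspond, in order, to the classes stored in model.classes_.
list(zip(model.classes_, model.predict_proba(new_observation)[0]))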
<END_TASK> |
112 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
CitiBike Ridership Trends - MBA Data project 2017
Sarah Love
Tefiro Kituuka Serunjogi
This project endeavors to understand usership trends amongst Citi Bike riders in New York City.
CitiBike collects cumulative data about its riders, including the number of rentals each day, the total distance per ride (measured as distance between pick-up station and drop-off station), and the number of long-term rentals on any given day. Our analysis examines trends among the number of rentals by day and by month, as well as average distances. With these trends, we can better understand how and when people use CitiBike. This information has the potential to make a huge impact on CitiBike's advertising and marketing campaigns, as well as its internal operations.
Step1: First, we must import the data from CitiBike's website. The data is accessed through the 'Get the data' link at the bottom left corner of the following page http
Step2: We see that this data has much more information than we need. For example, it includes total annual membership, which we do not need for this analysis. Thus, we have removed this column to prioritize the data that will most impact daily and monthly usership.
Step3: In order to manipulate and sort the data based on day of the week and month, we must convert the date information from a .csv format to python datetime format.
Step4: Now that Python recognizes the data in the Date column as calendar dates, we can add a column to classify each data point by day of the week and by month. This will ultimately allow us to compare usage on Monday vs. Tuesday, e.g., or January vs. February.
Step5: In order to get a sense for how much data we are working with, we need to pull the size and shape. This is relevant to see how many data points we have.
Step6: We now have all the useful data columns, but the index column needs to be replaced. We want to analyze this data by date, so we need to make the date column the index.
Step7: Next, we will retitle each column so that it's easier to understand what we're looking at.
Step8: To begin our analysis, we will add a column that shows the average mileage per trip each day. This can be done using a formula that divides the total miles for each day by the number of corresponding trips for each day, to derive an average trip length for each day.
Step9: To finalize the daily average comparisons, we need to create individual dataframes for each day of the week.
Step10: Now that we have individual dataframes for each day of the week, we can create larger dataframes for week days and weekends.
Step11: Now that we have these dataframes compiled, we can start to pull some insights. For instance, we can calculate the average number of miles a rider travels on a weekday vs. a weekend.
Step12: From this comparison, we can see that riders typically travel 50% farther on weekend trips than weekdays.
We will build on this insight in later graphs.
Step13: Based on the averages calculated above, we can plot how far riders travel on weekend rides vs. weekday rides.
This comparison shows a clear trend that riders travel 50% farther on weekend rides than on weekday rides. This makes sense to us, since the motivation for renting a CitiBike would be very different on a weekday (likely for a commute) than on a weekend (likely to visit a place of interest).
Step14: Another interesting comparison is between months. We would like to examine and compare the total number of miles traveled by CitiBike users in January, February, and March. A higher number of miles traveled in a given month would indicate more rentals and/or more miles traveled per use. Either way, there is a trend toward heavier bike usage.
Our hypothesis is that riders will ride their bikes more in the beginning of the year, because New Year’s resolutions will push people to ride a bike to work instead of taking the train or a cab. We also need to factor in the poor weather during this time of year, which may deter bike riders, but we think that there will be a spike in January and then a downward trend month to month.
Step15: Based on the analysis, total miles traveled was actually highest in February, disproving our original hypothesis. One theory for why this may be the case is that riders are on vacation in the beginning of February and, therefore, are not commuting to work. Alternatively, blizzards and poor weather may have kept them on the train and in cabs, or working from home. Finally, it could be the case that February had more opportunities for bike rides (perhaps this was popular on Valentine’s Day weekend as couples sought out activities to do together), or CitiBike ran a promotion for part of the month to encourage bike rentals.
Though we can’t claim this as a long-term trend to be expected every February, we recommend that CitiBike work to convert this spike in miles traveled to other months of the year. If this spike represents a flurry of one-time users, CitiBike has an opportunity to convert those users to longer-term users. For instance, they could offer 2 weeks of unlimited use free with the first rental, in order to demonstrate the benefit of CitiBike to users who enjoy riding bikes. This may help create a slightly stickier service for consumers, and would translate a spike in interest into longer term business benefits.
We want to get a bit more granular in our analysis and look into which days are most popular for CitiBike in New York City. In order to do this, we will first create individual data frames for each day of the week in order to average usage.
Step16: We now use the values above to create a dataframe that we can use to plot daily average miles by the day of the week.
Step17: The analysis shows a steady downward trend throughout the week – riders are much more likely to rent a CitiBike on Monday than on Thursday. This is fairly logical. It’s easy to imagine that riders are energized on Monday after a relaxing weekend, and tired, busy, and distracted later in the week. This means that they have more energy and are more willing to ride a bike to work at the beginning of the week. Only those who are going short distances want to ride a bike.
However, there is a considerable spike on Fridays, up to more than 2.5 miles travelled by the average CitiBike rider. This could represent a number of things, but it is certainly related to the impending weekend. Riders are energized by the end of the week, and may bike more at the end of the day because they have the free time. Alternatively, riders may be working from home and going for longer rides during a break in the middle of the day.
Step18: We already know that weekend riders travel much farther than weekday riders, but we anticipated that there would be some difference in Saturday vs. Sunday usage. Instead, data on average miles per trip was almost identical between the two days. Sunday is marginally higher, but it is fair to conclude that weekend travel distance is split evenly across Saturday and Sunday. | <ASSISTANT_TASK:>
Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib as mpl # graphics package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for pandas
import csv # package for converting csv
from collections import defaultdict # will be used to convert dates
import seaborn as sns # advanced graphics
import urllib.request # package to read url
%matplotlib inline
Explanation: CitiBike Ridership Trends - MBA Data project 2017
Sarah Love
Tefiro Kituuka Serunjogi
This project endeavors to understand usership trends amongst Citi Bike riders in New York City.
CitiBike collects cumulative data about its riders, including the number of rentals each day, the total distance per ride (measured as distance between pick-up station and drop-off station), and the number of long-term rentals on any given day. Our analysis examines trends among the number of rentals by day and by month, as well as average distances. With these trends, we can better understand how and when people use CitiBike. This information has the potential to make a huge impact on CitiBike's advertising and marketing campaigns, as well as its internal operations.
End of explanation
url = "data:application/octet-stream;charset=utf-8,Date%2CTrips%20over%20the%20past%2024-hours%20(midnight%20to%2011%3A59pm)%2CMiles%20traveled%20today%20(midnight%20to%2011%3A59%20pm)%2CTotal%20Annual%20Members%20(All%20Time)%2C24-Hour%20Passes%20Purchased%20(midnight%20to%2011%3A59%20pm)%2C3-Day%20Passes%20Purchased%20(midnight%20to%2011%3A59%20pm)%0A1%2F1%2F17%2C16009%2C50746%2C206623%2C1161%2C16%0A1%2F2%2F17%2C8918%2C21394%2C206672%2C127%2C10%0A1%2F3%2F17%2C14187%2C25058%2C206724%2C40%2C12%0A1%2F4%2F17%2C34006%2C69445%2C206774%2C448%2C23%0A1%2F5%2F17%2C28382%2C52401%2C206840%2C197%2C9%0A1%2F6%2F17%2C24173%2C48939%2C206873%2C152%2C11%0A1%2F7%2F17%2C4425%2C23556%2C206884%2C17%2C4%0A1%2F8%2F17%2C6416%2C22756%2C206897%2C21%2C2%0A1%2F9%2F17%2C15852%2C35069%2C206922%2C28%2C2%0A1%2F10%2F17%2C23218%2C46074%2C206943%2C61%2C10%0A1%2F11%2F17%2C32400%2C62766%2C206967%2C195%2C22%0A1%2F12%2F17%2C39766%2C84313%2C207003%2C435%2C22%0A1%2F13%2F17%2C33602%2C75661%2C207039%2C361%2C22%0A1%2F14%2F17%2C13819%2C43090%2C207067%2C227%2C21%0A1%2F15%2F17%2C17383%2C50738%2C207100%2C454%2C13%0A1%2F16%2F17%2C24106%2C51613%2C207141%2C410%2C23%0A1%2F17%2F17%2C18643%2C40692%2C207176%2C61%2C10%0A1%2F18%2F17%2C30847%2C57814%2C207202%2C134%2C12%0A1%2F19%2F17%2C36543%2C71690%2C207239%2C351%2C17%0A1%2F20%2F17%2C26736%2C55837%2C207263%2C172%2C14%0A1%2F21%2F17%2C26610%2C64662%2C207310%2C901%2C24%0A1%2F22%2F17%2C20523%2C49368%2C207340%2C500%2C12%0A1%2F23%2F17%2C15816%2C37551%2C207370%2C47%2C15%0A1%2F24%2F17%2C13165%2C35067%2C207385%2C24%2C3%0A1%2F25%2F17%2C36918%2C70769%2C207423%2C274%2C20%0A1%2F26%2F17%2C32991%2C63675%2C207470%2C205%2C19%0A1%2F27%2F17%2C32385%2C70104%2C207508%2C211%2C18%0A1%2F28%2F17%2C21300%2C53951%2C207534%2C381%2C14%0A1%2F29%2F17%2C22522%2C62773%2C207588%2C540%2C19%0A1%2F30%2F17%2C29607%2C59796%2C207641%2C157%2C23%0A1%2F31%2F17%2C25163%2C48787%2C207671%2C54%2C13%0A2%2F1%2F17%2C35198%2C77344%2C207712%2C180%2C11%0A2%2F2%2F17%2C34511%2C70595%2C207756%2C193%2C21%0A2%2F3%2F17%2C29622%2C64375%2C207784%2C169%2C25%0A2%2F4%2F17%2C19204%2C45120%2C207819%2C366%2C8%0A2%2F5%2F17%2C20059%2C50532%2C207845%2C310%2C15%0A2%2F6%2F17%2C33405%2C72546%2C207884%2C272%2C33%0A2%2F7%2F17%2C23047%2C57685%2C207914%2C48%2C13%0A2%2F8%2F17%2C40416%2C90384%2C207962%2C448%2C26%0A2%2F9%2F17%2C0%2C0%2C207969%2C0%2C0%0A2%2F10%2F17%2C1995%2C8308%2C207981%2C11%2C0%0A2%2F11%2F17%2C8958%2C26272%2C207988%2C159%2C3%0A2%2F12%2F17%2C5800%2C26468%2C208005%2C24%2C3%0A2%2F13%2F17%2C23851%2C58682%2C208028%2C68%2C14%0A2%2F14%2F17%2C29656%2C77900%2C208050%2C152%2C11%0A2%2F15%2F17%2C32046%2C72348%2C208087%2C151%2C23%0A2%2F16%2F17%2C29116%2C63479%2C208112%2C98%2C9%0A2%2F17%2F17%2C28730%2C67472%2C208148%2C222%2C27%0A2%2F18%2F17%2C30569%2C99111%2C208205%2C1896%2C79%0A2%2F19%2F17%2C36962%2C125000%2C208290%2C3450%2C49%0A2%2F20%2F17%2C32286%2C91400%2C208371%2C1355%2C46%0A2%2F21%2F17%2C32637%2C84708%2C208444%2C292%2C37%0A2%2F22%2F17%2C38111%2C90486%2C208519%2C399%2C25%0A2%2F23%2F17%2C42335%2C98392%2C208580%2C752%2C60%0A2%2F24%2F17%2C46946%2C117022%2C208685%2C1400%2C69%0A2%2F25%2F17%2C33342%2C104309%2C208803%2C1870%2C38%0A2%2F26%2F17%2C24713%2C75196%2C208880%2C771%2C21%0A2%2F27%2F17%2C35975%2C79961%2C208946%2C348%2C28%0A2%2F28%2F17%2C41719%2C92669%2C209027%2C443%2C43%0A3%2F1%2F2017%2C40487%2C85690%2C209136%2C363%2C33%0A3%2F2%2F2017%2C35627%2C73635%2C209214%2C287%2C24%0A3%2F3%2F2017%2C31042%2C68544%2C209274%2C261%2C29%0A3%2F4%2F2017%2C15645%2C43108%2C209307%2C230%2C10%0A3%2F5%2F2017%2C15919%2C50924%2C209353%2C275%2C12%0A3%2F6%2F2017%2C32456%2C72517%2C209438%2C301%2C
25%0A3%2F7%2F2017%2C29605%2C63538%2C209486%2C166%2C21%0A3%2F8%2F2017%2C43339%2C101349%2C209554%2C621%2C36%0A3%2F9%2F2017%2C45070%2C99193%2C209625%2C688%2C22%0A3%2F10%2F2017%2C18394%2C41929%2C209654%2C79%2C13%0A3%2F11%2F2017%2C15185%2C48379%2C209676%2C154%2C21%0A3%2F12%2F2017%2C13437%2C38128%2C209693%2C149%2C5%0A3%2F13%2F2017%2C27343%2C58708%2C209721%2C180%2C11%0A3%2F14%2F2017%2C0%2C0%2C209738%2C0%2C0%0A3%2F15%2F2017%2C0%2C0%2C209757%2C0%2C0%0A3%2F16%2F2017%2C0%2C0%2C209769%2C0%2C0%0A3%2F17%2F2017%2C7096%2C25327%2C209791%2C132%2C12%0A3%2F18%2F2017%2C4105%2C17356%2C209813%2C79%2C6%0A3%2F19%2F2017%2C10550%2C33344%2C209845%2C304%2C10%0A3%2F20%2F2017%2C27285%2C60121%2C209898%2C266%2C19%0A3%2F21%2F2017%2C36732%2C77937%2C209966%2C420%2C36%0A3%2F22%2F2017%2C26805%2C55434%2C210014%2C120%2C8%0A3%2F23%2F2017%2C29881%2C74320%2C210053%2C246%2C17%0A3%2F24%2F2017%2C34009%2C82641%2C210094%2C436%2C27%0A3%2F25%2F2017%2C29645%2C86293%2C210161%2C1540%2C44%0A3%2F26%2F2017%2C19893%2C55139%2C210212%2C483%2C17%0A3%2F27%2F2017%2C26288%2C59584%2C210279%2C293%2C29%0A3%2F28%2F2017%2C21463%2C45798%2C210338%2C75%2C9%0A3%2F29%2F2017%2C42398%2C91378%2C210444%2C762%2C86%0A3%2F30%2F2017%2C39732%2C90917%2C210504%2C601%2C47%0A3%2F31%2F2017%2C6943%2C23459%2C210549%2C19%2C4"
data_file = urllib.request.urlopen(url) # this code allows python to access the information directly from the source website
CitiBike = pd.read_csv(data_file)
print ('Variable dtypes:\n', CitiBike.dtypes)
CitiBike.head()
Explanation: First, we must import the data from CitiBike's website. The data is accessed through the 'Get the data' link at the bottom left corner of the following page: http://datawrapper.dwcdn.net/33zqP/6/. This data is updated in near-real-time. When we ran our analysis, the available data covered January 1, 2017, through March 31, 2017. Due to the character length of the link address for the data file, it is not readable directly by a .read_csv() function in Python, so we use the urllib.request functionality as shown below to access the source website directly through Python.
End of explanation
CitiBike.drop(CitiBike.columns[[3,4,5]], axis = 1, inplace = True)
CitiBike.head()
Explanation: We see that this data has much more information than we need. For example, it includes total annual membership, which we do not need for this analysis. Thus, we have removed this column to prioritize the data that will most impact daily and monthly usership.
End of explanation
CitiBike['Date'] = pd.to_datetime(CitiBike['Date'])
CitiBike.head ()
CitiBike.dtypes
Explanation: In order to manipulate and sort the data based on day of the week and month, we must convert the date information from a .csv format to python datetime format.
End of explanation
CitiBike['Day of Week'] = CitiBike['Date'].dt.weekday_name
CitiBike.head()
CitiBike['Month'] = CitiBike['Date'].dt.month
CitiBike.head()
Explanation: Now that Python recognizes the data in the Date column as calendar dates, we can add a column to classify each data point by day of the week and by month. This will ultimately allow us to compare usage on Monday vs. Tuesday, e.g., or January vs. February.
End of explanation
print ("The number of rows and columns are ", CitiBike.shape, "respectively")
Explanation: In order to get a sense for how much data we are working with, we need to pull the size and shape. This is relevant to see how many data points we have.
End of explanation
CitiBike = CitiBike.set_index ('Date')
CitiBike.head()
Explanation: We now have all the useful data columns, but the index column needs to be replaced. We want to analyze this data by date, so we need to make the date column the index.
End of explanation
titles = ['Total Trips', 'Total Miles', 'Day of Week', 'Month']
CitiBike.columns = titles
CitiBike.head()
Explanation: Next, we will retitle each column so that it's easier to understand what we're looking at.
End of explanation
CitiBike['Average Miles per Trip'] = CitiBike['Total Miles'] / CitiBike['Total Trips']
CitiBike.head()
CitiBike.shape
Explanation: To begin our analysis, we will add a column that shows the average mileage per trip each day. This can be done using a formula that divides the total miles for each day by the number of corresponding trips for each day, to derive an average trip length for each day.
End of explanation
CitiBike [CitiBike['Day of Week'] == 'Sunday']
CitiBike [CitiBike['Day of Week'] == 'Monday']
CitiBike [CitiBike['Day of Week'] == 'Tuesday']
CitiBike [CitiBike['Day of Week'] == 'Wednesday']
CitiBike [CitiBike['Day of Week'] == 'Thursday']
CitiBike [CitiBike['Day of Week'] == 'Friday']
CitiBike [CitiBike['Day of Week'] == 'Saturday']
CitiBike [CitiBike['Month'] == 1]
CitiBike [CitiBike['Month'] == 2].head ()
Explanation: To finalize the daily average comparisons, we need to create individual dataframes for each day of the week.
End of explanation
Weekend_List = ['Saturday', 'Sunday']
Weekday_List = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
CitiBike [CitiBike ['Day of Week'].isin (Weekend_List)].head ()
CitiBike [CitiBike ['Day of Week'].isin (Weekday_List)].head ()
Explanation: Now that we have individual dataframes for each day of the week, we can create larger dataframes for week days and weekends.
End of explanation
Weekend_Chart = CitiBike [CitiBike ['Day of Week'].isin (Weekend_List)]
Weekend_Average = Weekend_Chart[['Average Miles per Trip']].mean ()
Weekday_Chart = CitiBike [CitiBike ['Day of Week'].isin (Weekday_List)] # use all weekday rows (no .head()) so the average covers the full period
Weekday_Average = Weekday_Chart[['Average Miles per Trip']].mean ()
print ("The average miles riders cover on the weekend are", Weekend_Average)
print ("The average miles riders cover on weekdays are", Weekday_Average)
Explanation: Now that we have these dataframes compiled, we can start to pull some insights. For instance, we can calculate the average number of miles a rider travels on a weekday vs. a weekend.
End of explanation
Average_Mileage = pd.DataFrame ({'Weekdays' : Weekday_Average, 'Weekends' : Weekend_Average})
Average_Mileage = Average_Mileage [['Weekdays', 'Weekends']]
print (Average_Mileage)
Explanation: From this comparison, we can see that riders typically travel 50% farther on weekend trips than weekdays.
We will build on this insight in later graphs.
End of explanation
fig, ax = plt.subplots(1)
Average_Mileage.plot(ax=ax, kind = 'bar', title = 'Average Miles on weekends vs. Weekdays Q1')
ax.legend(['Weekdays', 'Weekends'], loc = 'best')
ax.set_ylabel('Miles')
ax.set_ylim (0,3.5)
Explanation: Based on the averages calculated above, we can plot how far riders travel on weekend rides vs. weekday rides.
This comparison shows a clear trend that riders travel 50% farther on weekend rides than on weekday rides. This makes sense to us, since the motivation for renting a CitiBike would be very different on a weekday (likely for a commute) than on a weekend (likely to visit a place of interest).
End of explanation
January_Miles = CitiBike [CitiBike['Month'] == 1]
January_Miles_Total = January_Miles [['Total Miles']].sum ()
February_Miles = CitiBike [CitiBike['Month'] == 2]
February_Miles_Total = February_Miles [['Total Miles']].sum ()
March_Miles = CitiBike [CitiBike['Month'] == 3]
March_Miles_Total = March_Miles [['Total Miles']].sum ()
print (January_Miles_Total)
print (February_Miles_Total)
print (March_Miles_Total)
Total_Mileage = pd.DataFrame ({'January' : January_Miles_Total,
'February' : February_Miles_Total,
'March' : March_Miles_Total})
Total_Mileage = Total_Mileage[['January', 'February', 'March']]
print (Total_Mileage)
fig, ax = plt.subplots(1)
Total_Mileage.plot(ax=ax, kind = 'bar', title = 'Total Miles Covered per Month Q1')
ax.legend(['JAN', 'FEB', 'MAR'], loc='best')
ax.set_xlabel('Month')
ax.set_ylabel('Total Miles')
ax.set_ylim (0,2100000)
Explanation: Another interesting comparison is between months. We would like to examine and compare the total number of miles traveled by CitiBike users in January, February, and March. A higher number of miles traveled in a given month would indicate more rentals and/or more miles traveled per use. Either way, there is a trend toward heavier bike usage.
Our hypothesis is that riders will ride their bikes more in the beginning of the year, because New Year’s resolutions will push people to ride a bike to work instead of taking the train or a cab. We also need to factor in the poor weather during this time of year, which may deter bike riders, but we think that there will be a spike in January and then a downward trend month to month.
End of explanation
# Monday
Monday_Data = CitiBike [CitiBike['Day of Week'] == 'Monday']
Monday_Miles = Monday_Data[['Average Miles per Trip']].mean ()
# Tuesday
Tuesday_Data = CitiBike [CitiBike['Day of Week'] == 'Tuesday']
Tuesday_Miles = Tuesday_Data[['Average Miles per Trip']].mean ()
# Wednesday
Wednesday_Data = CitiBike [CitiBike['Day of Week'] == 'Wednesday']
Wednesday_Miles = Wednesday_Data[['Average Miles per Trip']].mean ()
# Thursday
Thursday_Data = CitiBike [CitiBike['Day of Week'] == 'Thursday']
Thursday_Miles = Thursday_Data[['Average Miles per Trip']].mean ()
# Friday
Friday_Data = CitiBike [CitiBike['Day of Week'] == 'Friday']
Friday_Miles = Friday_Data[['Average Miles per Trip']].mean ()
# Saturday
Saturday_Data = CitiBike [CitiBike['Day of Week'] == 'Saturday']
Saturday_Miles = Saturday_Data[['Average Miles per Trip']].mean ()
# Sunday
Sunday_Data = CitiBike [CitiBike['Day of Week'] == 'Sunday']
Sunday_Miles = Sunday_Data[['Average Miles per Trip']].mean ()
print (Monday_Miles) # to confirm that code is working as intended and returning desired results
Explanation: Based on the analysis, total miles traveled was actually highest in February, disproving our original hypothesis. One theory for why this may be the case is that riders are on vacation in the beginning of February and, therefore, are not commuting to work. Alternatively, blizzards and poor weather may have kept them on the train and in cabs, or working from home. Finally, it could be the case that February had more opportunities for bike rides (perhaps this was popular on Valentine’s Day weekend as couples sought out activities to do together), or CitiBike ran a promotion for part of the month to encourage bike rentals.
Though we can’t claim this as a long-term trend to be expected every February, we recommend that CitiBike work to convert this spike in miles traveled to other months of the year. If this spike represents a flurry of one-time users, CitiBike has an opportunity to convert those users to longer-term users. For instance, they could offer 2 weeks of unlimited use free with the first rental, in order to demonstrate the benefit of CitiBike to users who enjoy riding bikes. This may help create a slightly stickier service for consumers, and would translate a spike in interest into longer term business benefits.
We want to get a bit more granular in our analysis and look into which days are most popular for CitiBike in New York City. In order to do this, we will first create individual data frames for each day of the week in order to average usage.
End of explanation
Weekday_Daily_Mileage = pd.DataFrame ({'Monday' : Monday_Miles,
'Tuesday' : Tuesday_Miles,
'Wednesday' : Wednesday_Miles,
'Thursday' : Thursday_Miles,
'Friday' : Friday_Miles})
Weekend_Daily_Mileage = pd.DataFrame ({'Saturday' : Saturday_Miles,
'Sunday' : Sunday_Miles})
Weekday_Daily_Mileage = Weekday_Daily_Mileage[['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']]
Weekend_Daily_Mileage = Weekend_Daily_Mileage[['Saturday', 'Sunday']]
print (Weekday_Daily_Mileage)
print (Weekend_Daily_Mileage)
Weekday_Daily_Mileage.head ()
Explanation: We now use the values above to create a dataframe that we can use to plot daily average miles by the day of the week.
End of explanation
fig, ax = plt.subplots(1)
Weekday_Daily_Mileage.plot(ax=ax, kind = 'bar', title = 'Daily Weekday Average Miles per Rider Q1')
ax.legend(['MON', 'TUE', 'WED', 'THU', 'FRI'], loc='best')
ax.set_xlabel('Days of the week (Weekday)')
ax.set_ylabel('Average number of miles')
ax.set_ylim (0,3.0)
Explanation: The analysis shows a steady downward trend throughout the week – riders are much more likely to rent a CitiBike on Monday than on Thursday. This is fairly logical. It’s easy to imagine that riders are energized on Monday after a relaxing weekend, and tired, busy, and distracted later in the week. This means that they have more energy and are more willing to ride a bike to work at the beginning of the week. Only those who are going short distances want to ride a bike.
However, there is a considerable spike on Fridays, up to more than 2.5 miles travelled by the average CitiBike rider. This could represent a number of things, but it is certainly related to the impending weekend. Riders are energized by the end of the week, and may bike more at the end of the day because they have the free time. Alternatively, riders may be working from home and going for longer rides during a break in the middle of the day.
End of explanation
fig, ax = plt.subplots(1)
Weekend_Daily_Mileage.plot(ax=ax,
kind = 'barh',
title = 'Daily Weekend Average Miles per Rider Q1')
ax.legend(['SAT', 'SUN'], loc='best')
ax.set_ylabel('Day')
ax.set_xlabel('Average number of miles')
ax.set_xlim (0,3.5)
Explanation: We already know that weekend riders travel much farther than weekday riders, but we anticipated that there would be some difference in Saturday vs. Sunday usage. Instead, data on average miles per trip was almost identical between the two days. Sunday is marginally higher, but it is fair to conclude that weekend travel distance is split evenly across Saturday and Sunday.
End of explanation
<END_TASK> |
113 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Network analysis of the data.
The goal of this notebook is to uncover structures inside the dataset that may help us detect fraud. With that, we can see whether we can create features related to connections inside the network.
In this network analysis, we see if we can use the connections an account has in order to predict behavior. We will build a graph with the following properties
Step1: Load data.
Step2: Create a unique bank account (bank + account)
Step3: Build the graph.
We initiate the graph, add the nodes from the internal account id and the non-zero external accounts.
Step4: Add non-empty edges.
Step5: Look at the largest connected components.
A connected component is a cluster of nodes that are connected to one another by edges. This can uncover certain structures of possible unwanted behavior.
Step6: Look at it in a directed graph. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
Explanation: Network analysis of the data.
The goal of this notebook is to uncover structures inside the dataset that may help us detect fraud. With that, we can see whether we can create features related to connections inside the network.
In this network analysis, we see if we can use the connections an account has in order to predict behavior. We will build a graph with the following properties:
- Every account is a node (internal and external)
- Every transaction is an edge.
Once we have this, we can see if different graph properties indicate whether or not an account is suspected of unwanted behaviour.
End of explanation
client_info = pd.read_csv('data/client_info.csv')
demographic_info = pd.read_csv('data/demographic_data.csv')
transaction_info = pd.read_csv('data/transction_info.csv')
order_info = pd.read_csv('data/order_info.csv')
Explanation: Load data.
End of explanation
transaction_info['bank_account'] = transaction_info['bank'] + transaction_info['account'].map(str)
Explanation: Create a unique bank account (bank + account)
End of explanation
G = nx.Graph()
G.add_nodes_from(transaction_info['account_id'].unique().tolist())
G.add_nodes_from(transaction_info['bank_account'][transaction_info['bank_account'].notnull()].unique())
Explanation: Build the graph.
We initiate the graph, add the nodes from the internal account id and the non-zero external accounts.
End of explanation
nonEmpty = transaction_info[transaction_info['bank_account'].notnull()]
edges = zip(nonEmpty['account_id'],nonEmpty['bank_account'], nonEmpty['amount'])
G.add_weighted_edges_from(edges)
Explanation: Add non-empty edges.
End of explanation
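# An added sketch of turning graph structure into per-account features: the
# degree of a node is the number of distinct counterparties it transacts with,
# which could later be joined back onto the internal account ids.
degree_features = pd.Series(dict(G.degree()), name='degree')
degree_features.sort_values(ascending=False).head()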
giant = max(nx.connected_component_subgraphs(G), key=len)
nx.draw_circular(giant)
Explanation: Look at the largest connected components.
A connected component is a cluster of nodes that are connected to one another by edges. This can uncover certain structures of possible unwanted behavior.
End of explanation
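# An added sketch: the size of the connected component an account belongs to can
# itself be used as a feature -- unusually large clusters may warrant a closer look.
component_sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
component_sizes[:10]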
Gdi = nx.DiGraph()
Gdi.add_nodes_from(giant.nodes())
Gdi.add_edges_from(giant.edges())
nx.draw_circular(Gdi)
Explanation: Look at it in a directed graph.
End of explanation
<END_TASK> |
114 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it
Step7: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
Setting up the graph with rectified linear units and one hidden layer | <ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
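# A tiny added illustration of the broadcasting trick used in reformat():
# comparing a column of class ids against np.arange(num_labels) yields 1-hot rows.
example_labels = np.array([0, 2, 9])
print((np.arange(10) == example_labels[:, None]).astype(np.float32))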
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
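# A minimal standalone sketch (in the same TF 1.x graph/session style as this
# notebook) of the describe-then-run pattern: building the graph only defines the
# computation, and session.run() is what actually executes it.
demo_graph = tf.Graph()
with demo_graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    total = a + b  # nothing is computed yet; this just adds a node to the graph
with tf.Session(graph=demo_graph) as session:
    print(session.run(total))  # now the addition actually runs and prints 5.0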
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
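# A minimal added sketch of the placeholder + feed_dict mechanism described above:
# the placeholder holds no data itself; a value is supplied at each session.run().
feed_graph = tf.Graph()
with feed_graph.as_default():
    x = tf.placeholder(tf.float32, shape=(None, 2))
    doubled = 2.0 * x
with tf.Session(graph=feed_graph) as session:
    print(session.run(doubled, feed_dict={x: np.array([[1.0, 2.0]])}))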
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
hidden_nodes = 1024  # hidden layer width specified in the problem statement
weights_layer_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases_layer_1 = tf.Variable(tf.zeros([hidden_nodes]))
# Layer 2 weights have an input dimension = output of first layer
weights_layer_2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
relu_output = tf.nn.relu(logits_layer_1)
logits_layer_2 = tf.matmul(relu_output, weights_layer_2) + biases_layer_2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_2))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits_layer_2)
logits_l_1_valid = tf.matmul(tf_valid_dataset, weights_layer_1) + biases_layer_1
relu_valid = tf.nn.relu(logits_l_1_valid)
logits_l_2_valid = tf.matmul(relu_valid, weights_layer_2) + biases_layer_2
valid_prediction = tf.nn.softmax(logits_l_2_valid)
logits_l_1_test = tf.matmul(tf_test_dataset, weights_layer_1) + biases_layer_1
relu_test = tf.nn.relu(logits_l_1_test)
logits_l_2_test = tf.matmul(relu_test, weights_layer_2) + biases_layer_2
test_prediction = tf.nn.softmax(logits_l_2_test)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy.eval(
predictions, batch_labels)
)
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
Setting up the graph with rectified linear units and one hidden layer:
End of explanation
<END_TASK> |
115 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Train RNN
Here we give a complete example of what using Equinox normally looks like day-to-day.
In this example we'll train an RNN to classify clockwise vs anticlockwise spirals.
Step1: We begin by importing the usual libraries, setting up a very simple dataloader, and generating a toy dataset of spirals.
Step2: Now for our model.
Purely by way of example, we handle the final adding on of bias ourselves, rather than letting the linear layer do it. This is just so we can demonstrate how to use custom parameters in models.
Step3: And finally the training loop.
Step4: eqx.filter_value_and_grad will calculate the gradient with respect to the first argument (model). By default it will calculate gradients for all the floating-point JAX arrays and ignore everything else. For example the model parameters will be differentiated, whilst model.hidden_size is an integer and will be left alone. If you need finer control then these defaults can be adjusted; see [equinox.filter_grad][] and [equinox.filter_value_and_grad][].
Likewise, by default, eqx.filter_jit will look at all the arguments passed to make_step, and automatically JIT-trace every array and JIT-static everything else. For example the model parameters and the data x and y will be traced, whilst model.hidden_size is an integer and will be static'd instead. Once again if you need finer control then these defaults can be adjusted; see [equinox.filter_jit][]. | <ASSISTANT_TASK:>
Python Code:
import math
import jax
import jax.lax as lax
import jax.numpy as jnp
import jax.random as jrandom
import optax # https://github.com/deepmind/optax
import equinox as eqx
Explanation: Train RNN
Here we give a complete example of what using Equinox normally looks like day-to-day.
In this example we'll train an RNN to classify clockwise vs anticlockwise spirals.
End of explanation
def dataloader(arrays, batch_size, *, key):
dataset_size = arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in arrays)
indices = jnp.arange(dataset_size)
while True:
perm = jrandom.permutation(key, indices)
(key,) = jrandom.split(key, 1)
start = 0
end = batch_size
while end < dataset_size:
batch_perm = perm[start:end]
yield tuple(array[batch_perm] for array in arrays)
start = end
end = start + batch_size
def get_data(dataset_size, *, key):
t = jnp.linspace(0, 2 * math.pi, 16)
offset = jrandom.uniform(key, (dataset_size, 1), minval=0, maxval=2 * math.pi)
x1 = jnp.sin(t + offset) / (1 + t)
x2 = jnp.cos(t + offset) / (1 + t)
y = jnp.ones((dataset_size, 1))
half_dataset_size = dataset_size // 2
x1 = x1.at[:half_dataset_size].multiply(-1)
y = y.at[:half_dataset_size].set(0)
x = jnp.stack([x1, x2], axis=-1)
return x, y
Explanation: We begin by importing the usual libraries, setting up a very simple dataloader, and generating a toy dataset of spirals.
End of explanation
class RNN(eqx.Module):
hidden_size: int
cell: eqx.Module
linear: eqx.nn.Linear
bias: jnp.ndarray
def __init__(self, in_size, out_size, hidden_size, *, key):
ckey, lkey = jrandom.split(key)
self.hidden_size = hidden_size
self.cell = eqx.nn.GRUCell(in_size, hidden_size, key=ckey)
self.linear = eqx.nn.Linear(hidden_size, out_size, use_bias=False, key=lkey)
self.bias = jnp.zeros(out_size)
def __call__(self, input):
hidden = jnp.zeros((self.hidden_size,))
def f(carry, inp):
return self.cell(inp, carry), None
out, _ = lax.scan(f, hidden, input)
# sigmoid because we're performing binary classification
return jax.nn.sigmoid(self.linear(out) + self.bias)
Explanation: Now for our model.
Purely by way of example, we handle the final adding on of bias ourselves, rather than letting the linear layer do it. This is just so we can demonstrate how to use custom parameters in models.
End of explanation
def main(
dataset_size=10000,
batch_size=32,
learning_rate=3e-3,
steps=200,
hidden_size=16,
depth=1,
seed=5678,
):
data_key, loader_key, model_key = jrandom.split(jrandom.PRNGKey(seed), 3)
xs, ys = get_data(dataset_size, key=data_key)
iter_data = dataloader((xs, ys), batch_size, key=loader_key)
model = RNN(in_size=2, out_size=1, hidden_size=hidden_size, key=model_key)
@eqx.filter_value_and_grad
def compute_loss(model, x, y):
pred_y = jax.vmap(model)(x)
# Trains with respect to binary cross-entropy
return -jnp.mean(y * jnp.log(pred_y) + (1 - y) * jnp.log(1 - pred_y))
# Important for efficiency whenever you use JAX: wrap everything into a single JIT
# region.
@eqx.filter_jit
def make_step(model, x, y, opt_state):
loss, grads = compute_loss(model, x, y)
updates, opt_state = optim.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
return loss, model, opt_state
optim = optax.adam(learning_rate)
opt_state = optim.init(model)
for step, (x, y) in zip(range(steps), iter_data):
loss, model, opt_state = make_step(model, x, y, opt_state)
loss = loss.item()
print(f"step={step}, loss={loss}")
pred_ys = jax.vmap(model)(xs)
num_correct = jnp.sum((pred_ys > 0.5) == ys)
final_accuracy = (num_correct / dataset_size).item()
print(f"final_accuracy={final_accuracy}")
Explanation: And finally the training loop.
End of explanation
main() # All right, let's run the code.
Explanation: eqx.filter_value_and_grad will calculate the gradient with respect to the first argument (model). By default it will calculate gradients for all the floating-point JAX arrays and ignore everything else. For example the model parameters will be differentiated, whilst model.hidden_size is an integer and will be left alone. If you need finer control then these defaults can be adjusted; see [equinox.filter_grad][] and [equinox.filter_value_and_grad][].
Likewise, by default, eqx.filter_jit will look at all the arguments passed to make_step, and automatically JIT-trace every array and JIT-static everything else. For example the model parameters and the data x and y will be traced, whilst model.hidden_size is an integer and will be static'd instead. Once again if you need finer control then these defaults can be adjusted; see [equinox.filter_jit][].
End of explanation
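# A small added sketch (assuming Equinox's eqx.partition and eqx.is_array helpers)
# of the filtering described above: the JAX arrays (parameters) are separated from
# static fields such as the integer hidden_size, which grads and JIT leave alone.
demo_model = RNN(in_size=2, out_size=1, hidden_size=16, key=jrandom.PRNGKey(0))
params, static = eqx.partition(demo_model, eqx.is_array)
print(type(demo_model.hidden_size))  # a plain int -> treated as static, not differentiated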
<END_TASK> |
116 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Search for Tweets
In the REST API we can also use the search method to look for tweets that match a given term.
The method has a few options, such as
Step1: Searching
Step2: Retrieve 1000 tweets | <ASSISTANT_TASK:>
Python Code:
import tweepy
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
autorizar = tweepy.OAuthHandler(consumer_key, consumer_secret)
autorizar.set_access_token(access_token, access_token_secret)
api = tweepy.API(autorizar)
print(api)
Explanation: Search for Tweets
In the REST API we can also use the search method to look for tweets that match a given term.
The method has a few options, such as:
api.search(q, count, max_id, lang)
q - the parameter holding the term to be searched for.
count - the number of tweets to return. The limit is 100 and the default is 15.
max_id - returns only tweets with an ID less than or equal to the one specified.
lang - restricts the search to tweets in a given language.
End of explanation
tweets = api.search(q='Python', lang='pt') # Try the count=150 parameter as well
for i, tweet in enumerate(tweets, start=1):
print("%d ---- %s" % (i, tweet.text))
Explanation: Searching
End of explanation
tweets_salvos = []
ultimo_id = -1
qtde_tweets = 1000
while len(tweets_salvos) < qtde_tweets:
contador = qtde_tweets - len(tweets_salvos)
try:
novos_tweets = api.search(q='Python', count=contador, max_id=str(ultimo_id - 1)) # Teste com o parâmetro lang='pt'
if not novos_tweets:
print("Nenhum tweet para recuperar")
break
tweets_salvos.extend(novos_tweets)
ultimo_id = novos_tweets[-1].id
except tweepy.TweepError as e:
print("Erro:", (e))
break
for i, tweet in enumerate(tweets_salvos, start=1):
print("{} ---- {}".format(i, tweet.text))
Explanation: Retrieve 1000 tweets
End of explanation
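# An alternative added sketch (assuming the tweepy 3.x-style API used above):
# tweepy's Cursor helper handles the max_id pagination bookkeeping for us.
for i, tweet in enumerate(tweepy.Cursor(api.search, q='Python').items(100), start=1):
    print("{} ---- {}".format(i, tweet.text))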
<END_TASK> |
117 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
Step2: Lab Task 1
Step3: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
Step4: Lab Task 2
Step5: Lab Task 3
Step6: Lab Task 4
Step7: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
Step8: Lab Task 5 | <ASSISTANT_TASK:>
Python Code:
# You can use any Python source file as a module by executing an import statement in some other Python source file
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import os, json, math
# Import data processing libraries like Numpy and TensorFlow
import numpy as np
import tensorflow as tf
# Python shutil module enables us to operate with file objects easily and without diving into file objects a lot.
import shutil
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
Explanation: Build a DNN using the Keras Functional API
Learning objectives
Review how to read in CSV file data using tf.data.
Specify input, hidden, and output layers in the DNN architecture.
Review and visualize the final DNN shape.
Train the model locally and visualize the loss curves.
Deploy and predict with the model using Cloud AI Platform.
Introduction
In this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
# `ls` is a Linux shell command that lists directory contents
# the `-l` flag lists all the files with permissions and details
!ls -l ../data/toy_data/*.csv
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the other notebook. Just so you don't have to run the notebook, we saved a copy in ../data/toy_data
End of explanation
# Define columns of data
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
Explanation: Lab Task 1: Use tf.data to read the CSV files
First let's define our columns of data, which column we're predicting for, and the default values.
End of explanation
# Define features you want to use
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
Explanation: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
End of explanation
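# A quick added sanity check (a sketch, reusing the toy CSV pattern above): pull a
# single small batch out of the tf.data pipeline and inspect the parsed features and label.
tempds = load_dataset('../data/toy_data/taxi-traffic-train*', batch_size=2)
for features, label in tempds.take(1):
    print({k: v.numpy() for k, v in features.items()})
    print(label.numpy())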
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
INPUT_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
# TODO 2
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
# tf.feature_column.numeric_column() represents real valued or numerical features.
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
Explanation: Lab Task 2: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
End of explanation
# tf.keras.utils.plot_model() Converts a Keras model to dot format and save to a file.
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
Explanation: Lab Task 3: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
End of explanation
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 32 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../data/toy_data/taxi-traffic-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/toy_data/taxi-traffic-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# Model Fit
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
Explanation: Lab Task 4: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
End of explanation
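For reference, the arithmetic behind the training loop above: each Keras "epoch" here is really one evaluation slice of the repeated dataset.
# steps_per_epoch = 50_000 // (32 * 32) = 48 batches per evaluation,
# and NUM_EVALS = 32 such evaluations are run in total.
print(NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS))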
# plot
# Use matplotlib for visualizing the model
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
# The .figure() method will create a new figure, or activate an existing figure.
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
 # The .plot() method is versatile and takes an arbitrary number of arguments, for example to plot x versus y.
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
# The .title() method sets a title for the axes.
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
# The .legend() method will place a legend on the axes.
plt.legend(['train', 'validation'], loc='upper left');
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
End of explanation
# TODO 5
# Use the model to do prediction with `model.predict()`
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
}, steps=1)
Explanation: Lab Task 5: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
End of explanation
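As an optional follow-up (the path below is illustrative), the trained model can be persisted and reloaded for serving; the custom rmse metric must be passed back in on load:
# Save in TensorFlow SavedModel format and reload it
model.save('export/taxifare_model')
reloaded = tf.keras.models.load_model(
    'export/taxifare_model', custom_objects={'rmse': rmse})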
<END_TASK> |
118 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
How to Build a RuleBasedProfiler
This Notebook will demonstrate the steps we need to take to generate a simple RuleBasedProfiler by initializing the components in memory.
We will start from a new Great Expectations Data Context (ie great_expectations folder after running great_expectations init), and begin by adding the Datasource, and progressively adding more components
Step1: Set-up
Step2: BatchRequests
In this example, we will be using two BatchRequests using our Datasource.
single_batch_batch_request
Step3: Example 1
Step4: To continue our example, we will continue building a RuleBasedProfiler using our ColumnDomainBuilder
Build Rule
The first Rule that we build will output expect_column_values_to_not_be_null because it does not take in additional information other than Domain. We will add ParameterBuilders in a subsequent example.
Step5: Create RuleBasedProfiler and add Rule
We create a simple RuleBasedProfiler and add the Rule that we added in the previous step is added to the Profiler. When we run the Profiler, the output is an ExpectationSuite with 4 Expectations, which we expect.
Step6: As expected our simple RuleBasedProfiler will output 4 Expectations, one for each of our 4 columns.
Example 2
Step7: Build a ParameterBuilder
ParameterBuilders help calcluate "reasonable" parameters for Expectations based on data that is specified by a BatchRequest.
The largest categories include
Step8: Build an ExpectationConfigurationBuilder
ExpectationConfigurationBuilder is being built for expect_column_values_to_be_greater_than which will use the column.min values that are calculated using the ParameterBuilder. These are now accessible using the fully qualified parameter $parameter.my_column_min.value[-1]. The [-1] indicates that we will use the min value from the latest Batch (the only Batch in this case since our BatchRequest only returns a single Batch).
Step9: Build a Rule, RuleBasedProfiler, and run
Now we build a rule with our ParameterBuilder, DomainBuilder and ExpectationConfigurationBuilder.
Step10: Add the Rule to our RuleBasedProfiler and run.
Step11: The resulting ExpectationSuite now contain values (-80.0, 0.0 etc) that were calculated from the Batch of data defined by the BatchRequest.
Example 3
Step12: Instantiating RuleBasedProfiler with variables
Pass the variables dictionary into the RuleBasedProfiler constructor.
Step13: Instantiating ColumnDomainBuilder
The ColumnDomainBuilder is instantiated using column names tip_amount and fare_amount. The BatchRequest is passed in as a $variable.
Step14: Instantiating ParameterBuilders
Our Rule will contain 2 NumericMetricRangeMultiBatchParameterBuilders, one for each of our 2 Expectation types. One will be estimating the Parameter values for the column.min Metric, and the other will be estimating Parameter values for the column.max Metric. metric_domain_kwargs are passed in from our DomainBuilder using $domain.domain_kwargs.
Also note the use of 3 Variables we defined above
Step15: Instantiating ExpectationConfigurationBuilders
Our Rule will contain 2 ExpectationConfigurationBuilders, one for each of our 2 Expectation types
Step16: Instantiating RuleBasedProfiler and Running
We instantiate a Rule with our DomainBuilder, ParameterBuilders and ExpectationConfigurationBuilders and load into our RuleBasedProfiler.
Step17: As expected, the resulting ExpectationSuite contains our minimum and maximum values, with tip_amount ranging from $-2.16 to $195.05 (a generous tip), and fare_amount ranging from $-98.90 (a refund) to $405,904.54 (a very very long trip).
Appendix
Here we have additional example configuration of DomainBuilder and ParameterBuilders that were not included in the previous 3 Examples.
DomainBuilders
ColumnDomainBuilder
This DomainBuilder outputs column Domains, which are required by ColumnExpectations like (expect_column_median_to_be_between). There are a few ways that the ColumnDomainBuilder can be used.
In the simplest usecase, the ColumnDomainBuilder can output all columns in the dataset as a Domain, or include/exclude columns if you already know which ones you would like. Column suffixes (like _amount) can be used to select columns of interest, as we saw in our examples above.
The ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text).
Semantic types are defined as an Enum object called SemanticDomainTypes, which can be found here
Step18: In the simplest usecase, the ColumnDomainBuilder can output all of the columns in yellow_tripdata_sample_2018
Step19: Columns can also be included or excluded by name
Step20: As described above, the ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text). This is passed in as part of the include_semantic_types parameter.
Step21: MultiColumnDomainBuilder
This DomainBuilder outputs multicolumn Domains by taking in a column list in the include_column_names parameter.
Step22: ColumnPairDomainBuilder
This DomainBuilder outputs columnpair domains by taking in a column pair list in the include_column_names parameter.
Step23: TableDomainBuilder
This DomainBuilder outputs table Domains, which is required by Expectations that act on tables, like (expect_table_row_count_to_equal, or expect_table_columns_to_match_set).
Step24: MapMetricColumnDomainBuilder
This DomainBuilder allows you to choose columns based on Map Metrics, which give a yes/no answer for individual values or rows. In this example, we use the Map Metrics column_values.nonnull to filter out a column that was all None from taxi_data.
Step25: CategoricalColumnDomainBuilder
This DomainBuilder allows you to choose columns based on their cardinality (number of unique values).The CategoricalColumnDomainBuilder will take in various cardinality_limit_mode values for cardinality, and in this example we are only interested in columns that have "very_few" (less than 10) unique values. For a full of valid modes, along with the associated values, please refer to the CardinalityLimitMode enum in
Step26: ParameterBuilders
ParameterBuilders work under the hood by populating a ParameterContainer, which can also be shared by multiple ParameterBuilders. It requires a Domain, and metric_name, with domain_kwargs accessible from the DomainBuilder using the fully qualified parameter $domain.domain_kwargs.
For the sake of simplicity, we will define a Domain object directly using the Domain() constructor, and pass in a column name within domain_kwargs.
Step27: MetricMultiBatchParameterBuilder
The MetricMultiBatchParameterBuilder computes a Metric on data from one or more batches. It takes domain_kwargs, value_kwargs, and metric_name as arguments.
Step28: my_column_min[value] now contains a list of 12 values, which are the minimum values the total_amount column for each of the 12 Batches associated with 2018 taxi_data data. If we were to use the values in a ExpectationConfigurationBuilder, it would be accessible through the fully-qualified parameter
Step29: my_value_set[value] now contains a list of 3 values, which is a list of all unique vendor_ids across 12 Batches in the 2018 taxi_data dataset.
RegexPatternStringParameterBuilder
The RegexPatternStringParameterBuilder contains a set of default regex patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
Step30: vendor_id is a single integer. Let's see if our default patterns can match it.
Step31: Looks like my_regex_set[value] is an empty list. This means that none of the evaluated_regexes matched our domain. Let's try the same thing again, but this time with a regex that will match our vendor_id column. ^\\d{1}$ and ^\\d{2}$ which will match 1 or 2 digit integers anchored at the beginning and end of the string.
Step32: Now my_regex_set[value] contains ^\\d{1}$.
SimpleDateFormatStringParameterBuilder
The SimpleDateFormatStringParameterBuilder contains a set of default Datetime format patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
Step33: The result contains our matching datetime pattern, which is '%Y-%m-%d %H
Step34: As we see, the mean value range for the total_amount column is 16.0 to 44.0
Optional | <ASSISTANT_TASK:>
Python Code:
import great_expectations as ge
from ruamel import yaml
from great_expectations.core.batch import BatchRequest
from great_expectations.rule_based_profiler.rule.rule import Rule
from great_expectations.rule_based_profiler.rule_based_profiler import RuleBasedProfiler, RuleBasedProfilerResult
from great_expectations.rule_based_profiler.domain_builder import (
DomainBuilder,
ColumnDomainBuilder,
)
from great_expectations.rule_based_profiler.parameter_builder import (
MetricMultiBatchParameterBuilder,
)
from great_expectations.rule_based_profiler.expectation_configuration_builder import (
DefaultExpectationConfigurationBuilder,
)
data_context: ge.DataContext = ge.get_context()
Explanation: How to Build a RuleBasedProfiler
This Notebook will demonstrate the steps we need to take to generate a simple RuleBasedProfiler by initializing the components in memory.
We will start from a new Great Expectations Data Context (i.e. the great_expectations folder after running great_expectations init), and begin by adding the Datasource and progressively adding more components
End of explanation
data_path: str = "../../../../test_sets/taxi_yellow_tripdata_samples"
datasource_config = {
"name": "taxi_multi_batch_datasource",
"class_name": "Datasource",
"module_name": "great_expectations.datasource",
"execution_engine": {
"module_name": "great_expectations.execution_engine",
"class_name": "PandasExecutionEngine",
},
"data_connectors": {
"default_inferred_data_connector_name": {
"class_name": "InferredAssetFilesystemDataConnector",
"base_directory": data_path,
"default_regex": {
"group_names": ["data_asset_name", "month"],
"pattern": "(yellow_tripdata_sample_2018)-(\\d.*)\\.csv",
},
},
"default_inferred_data_connector_name_all_years": {
"class_name": "InferredAssetFilesystemDataConnector",
"base_directory": data_path,
"default_regex": {
"group_names": ["data_asset_name", "year", "month"],
"pattern": "(yellow_tripdata_sample)_(\\d.*)-(\\d.*)\\.csv",
},
},
},
}
data_context.test_yaml_config(yaml.dump(datasource_config))
# add_datasource only if it doesn't already exist in our configuration
try:
data_context.get_datasource(datasource_config["name"])
except ValueError:
data_context.add_datasource(**datasource_config)
Explanation: Set-up: Adding taxi_data Datasource
Add taxi_data as a new Datasource
We are using an InferredAssetFilesystemDataConnector to connect to data in the test_sets/taxi_yellow_tripdata_samples folder and get one DataAsset (yellow_tripdata_sample_2018) that has 12 Batches (1 Batch/month).
End of explanation
single_batch_batch_request: BatchRequest = BatchRequest(
datasource_name="taxi_multi_batch_datasource",
data_connector_name="default_inferred_data_connector_name",
data_asset_name="yellow_tripdata_sample_2018",
data_connector_query={"index": -1},
)
multi_batch_batch_request: BatchRequest = BatchRequest(
datasource_name="taxi_multi_batch_datasource",
data_connector_name="default_inferred_data_connector_name",
data_asset_name="yellow_tripdata_sample_2018",
)
Explanation: BatchRequests
In this example, we will be using two BatchRequests using our Datasource.
single_batch_batch_request: which gives the most recent (December) data from the 2018 taxi_data dataset.
multi_batch_batch_request: which gives all 12 Batches of data from the 2018 taxi_data dataset.
End of explanation
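For reference, the same constructor can target any single month: changing the data_connector_query index selects a different Batch of this data asset (the sketch below assumes index 0 corresponds to the earliest month).
# Illustrative variant: select the earliest Batch instead of the latest
january_batch_request: BatchRequest = BatchRequest(
    datasource_name="taxi_multi_batch_datasource",
    data_connector_name="default_inferred_data_connector_name",
    data_asset_name="yellow_tripdata_sample_2018",
    data_connector_query={"index": 0},
)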
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_name_suffixes=["_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
# assert that the domains we get are the ones we expect
assert len(domains) == 4
assert domains == [
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "fare_amount"}, "details": {"inferred_semantic_domain_type": {"fare_amount": "numeric",}},},
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "tip_amount"}, "details": {"inferred_semantic_domain_type": {"tip_amount": "numeric",}},},
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "tolls_amount"}, "details": {"inferred_semantic_domain_type": {"tolls_amount": "numeric",}},},
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "total_amount"}, "details": {"inferred_semantic_domain_type": {"total_amount": "numeric",}},},
]
Explanation: Example 1: RuleBasedProfiler with just a DomainBuilder and ExpectationConfigurationBuilder
Build a DomainBuilder
In the process of building a RuleBasedProfiler, one of the first components we want to build/test
is a DomainBuilder, which returns the Domains (tables, columns, sets of columns, etc.) that our resulting Expectations will be run on. In our example, the DomainBuilder will output a list of columns that follow a certain pattern, namely those with the '_amount' suffix. To this end we will be using a ColumnDomainBuilder, which allows you to choose columns based on their suffix, name, or semantic type (like numeric or string), and our DomainBuilder will output a list of 4 columns: fare_amount, tip_amount, tolls_amount and total_amount.
The RuleBasedProfiler also contains additional DomainBuilders that allow you to do more sophisticated filtering on your data.
These include:
* CategoricalColumnDomainBuilder: which allows you to choose columns based on their cardinality (number of unique values).
* MapMetricColumnDomainBuilder: which allows you to choose columns based on Map Metrics, which give a yes/no answer for individual values or rows.
In addition, there are DomainBuilders that do not perform any additional filtering, but are required by the Expectations that are being built by the RuleBasedProfiler.
* TableDomainBuilder: Outputs Table Domain, which is required by Expectations that act on tables, like (expect_table_row_count_to_equal, or expect_table_columns_to_match_set).
ColumnDomainBuilder
End of explanation
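As a point of comparison (mirroring the appendix later in this notebook), the same builder could also be given explicit column names instead of a suffix filter:
# Illustrative alternative to the suffix-based builder above
named_domain_builder: DomainBuilder = ColumnDomainBuilder(
    include_column_names=["fare_amount", "tip_amount"],
    data_context=data_context,
)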
default_expectation_configuration_builder = DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_values_to_not_be_null",
column="$domain.domain_kwargs.column", # Get the column from domain_kwargs that are retrieved from the DomainBuilder
)
simple_rule: Rule = Rule(
name="rule_with_no_parameters",
variables=None,
domain_builder=domain_builder,
expectation_configuration_builders=[default_expectation_configuration_builder],
)
Explanation: To continue our example, we will continue building a RuleBasedProfiler using our ColumnDomainBuilder
Build Rule
The first Rule that we build will output expect_column_values_to_not_be_null because it does not take in additional information other than Domain. We will add ParameterBuilders in a subsequent example.
End of explanation
from great_expectations.core import ExpectationSuite
from great_expectations.rule_based_profiler.rule_based_profiler import RuleBasedProfiler
my_rbp: RuleBasedProfiler = RuleBasedProfiler(
name="my_simple_rbp", data_context=data_context, config_version=1.0
)
my_rbp.add_rule(rule=simple_rule)
profiler_result: RuleBasedProfilerResult
profiler_result = my_rbp.run(batch_request=single_batch_batch_request)
assert len(profiler_result.expectation_configurations) == 4
profiler_result.expectation_configurations
Explanation: Create RuleBasedProfiler and add Rule
We create a simple RuleBasedProfiler and add the Rule that we built in the previous step to the Profiler. When we run the Profiler, the output is an ExpectationSuite with 4 Expectations, as we expect.
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_name_suffixes=["_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
domains
Explanation: As expected our simple RuleBasedProfiler will output 4 Expectations, one for each of our 4 columns.
Example 2: RuleBasedProfiler with DomainBuilder, ParameterBuilder ExpectationConfigurationBuilder
Build a DomainBuilder
Using the same ColumnDomainBuilder from our previous example.
End of explanation
numeric_range_parameter_builder: MetricMultiBatchParameterBuilder = (
MetricMultiBatchParameterBuilder(
data_context=data_context,
metric_name="column.min",
metric_domain_kwargs="$domain.domain_kwargs", # domain kwarg values are accessible using fully qualified parameters
name="my_column_min",
)
)
Explanation: Build a ParameterBuilder
ParameterBuilders help calculate "reasonable" parameters for Expectations based on data that is specified by a BatchRequest.
The largest categories include:
- metric_multi_batch_parameter_builder: Which is able to calculate a numeric Metric (like column.min) across multiple Batches (or just one Batch).
- value_set_multi_batch_parameter_builder: Which is able to build a value set across multiple Batches (or just one Batch).
In some cases, there is a better way to build a value set using regex or dates.
- regex_pattern_string_parameter_builder: Which contains a set of default regex patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
- simple_date_format_string_parameter_builder: Which contains a set of default datetime-format patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
Across multiple-Batches, we can build more-sophisticated parameters by using sampling methods.
- numeric_range_multi_batch_parameter_builder: Which is able to provide range estimations across Batches using sampling methods. For instance, if we expect a table's row_count to change between Batches, we could calculate the min / max values of row_count by using the NumericMetricRangeMultiBatchParameterBuilder. These parameters could then be used by ExpectTableRowCountToBeBetween
In our example we will be using a MetricMultiBatchParameterBuilder to estimate the column.min Metric for the 4 columns defined by our Domain Builder. These are passed in as metric_domain_kwargs and are accessible using the fully qualified parameter $domain.domain_kwargs.
End of explanation
config_builder: DefaultExpectationConfigurationBuilder = (
DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_values_to_be_greater_than",
value="$parameter.my_column_min.value[-1]", # the parameter is accessible using a fully qualified parameter
column="$domain.domain_kwargs.column", # domain kwarg values are accessible using fully qualified parameters
name="my_column_min",
)
)
Explanation: Build an ExpectationConfigurationBuilder
ExpectationConfigurationBuilder is being built for expect_column_values_to_be_greater_than which will use the column.min values that are calculated using the ParameterBuilder. These are now accessible using the fully qualified parameter $parameter.my_column_min.value[-1]. The [-1] indicates that we will use the min value from the latest Batch (the only Batch in this case since our BatchRequest only returns a single Batch).
End of explanation
simple_rule: Rule = Rule(
name="rule_with_parameters",
variables=None,
domain_builder=domain_builder,
parameter_builders=[numeric_range_parameter_builder],
expectation_configuration_builders=[config_builder],
)
my_rbp = RuleBasedProfiler(name="my_rbp", data_context=data_context, config_version=1.0)
Explanation: Build a Rule, RuleBasedProfiler, and run
Now we build a rule with our ParameterBuilder, DomainBuilder and ExpectationConfigurationBuilder.
End of explanation
my_rbp.add_rule(rule=simple_rule)
profiler_result = my_rbp.run(batch_request=single_batch_batch_request)
assert len(profiler_result.expectation_configurations) == 4
profiler_result.expectation_configurations
Explanation: Add the Rule to our RuleBasedProfiler and run.
End of explanation
variables: dict = {
"multi_batch_batch_request": multi_batch_batch_request,
"estimator_name": "bootstrap",
"false_positive_rate": 5.0e-2,
}
Explanation: The resulting ExpectationSuite now contains values (-80.0, 0.0, etc.) that were calculated from the Batch of data defined by the BatchRequest.
Example 3: RuleBasedProfiler with multiple ParameterBuilders, ExpectationConfigurationBuilders and Variables
The third example is more complex, using multiple batches, multiple ParameterBuilders, ExpectationConfigurationBuilders and also introducing the use of variables.
The goal of this example is to build a RuleBasedProfiler that outputs an ExpectationSuite containing 2 Expectation types
- expect_column_min_to_be_between : Defined as "Expect the column minimum to be between a min and max value".
- expect_column_max_to_be_between : Defined as "Expect the column maximum to be between a min and max value".
for 2 columns in our taxi_data dataset
- fare_amount
- tip_amount
with the min_value and max_value parameters for each of the Expectations estimated over 12 Batches of taxi_data, for a total of 4 Expectations.
To estimate the parameters, we will be using a NumericMetricRangeMultiBatchParameterBuilder, which is able to provide range estimations across Batches using sampling methods. We will also be using a variables dictionary to share defined variables across Rule components like DomainBuilders, ParameterBuilders and ExpectationConfigurationBuilders.
Instantiating variables dictionary
RuleBasedProfilers allow for the definition of variables, which can be shared across Rules and Rule components. When building a complex RuleBasedProfiler with multiple Rules or components, using variables will help you keep track of values without having to input them multiple times.
Once loaded into the RuleBasedProfiler configuration, the variables are accessible using the fully qualified variable name $variables.[key_in_variables_dictionary], similar to how domain kwarg values and parameter values are accessible using a fully qualified name that begins with $.
In the example below, the estimator_name is accessible using $variables.estimator_name.
End of explanation
my_rbp = RuleBasedProfiler(name="my_complex_rbp", data_context=data_context, variables=variables, config_version=1.0)
Explanation: Instantiating RuleBasedProfiler with variables
Pass the variables dictionary into the RuleBasedProfiler constructor.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import ColumnDomainBuilder
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_names=["tip_amount", "fare_amount"],
data_context=data_context,
)
Explanation: Instantiating ColumnDomainBuilder
The ColumnDomainBuilder is instantiated using the column names tip_amount and fare_amount. The BatchRequest it evaluates is supplied when the profiler is run below.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import NumericMetricRangeMultiBatchParameterBuilder
min_range_parameter_builder: NumericMetricRangeMultiBatchParameterBuilder = NumericMetricRangeMultiBatchParameterBuilder(
name="min_range_parameter_builder",
metric_name="column.min",
metric_domain_kwargs="$domain.domain_kwargs",
false_positive_rate='$variables.false_positive_rate',
estimator="$variables.estimator_name",
data_context=data_context,
)
max_range_parameter_builder: NumericMetricRangeMultiBatchParameterBuilder = NumericMetricRangeMultiBatchParameterBuilder(
name="max_range_parameter_builder",
metric_name="column.max",
metric_domain_kwargs="$domain.domain_kwargs",
false_positive_rate="$variables.false_positive_rate",
estimator="$variables.estimator_name",
data_context=data_context,
)
Explanation: Instantiating ParameterBuilders
Our Rule will contain 2 NumericMetricRangeMultiBatchParameterBuilders, one for each of our 2 Expectation types. One will be estimating the Parameter values for the column.min Metric, and the other will be estimating Parameter values for the column.max Metric. metric_domain_kwargs are passed in from our DomainBuilder using $domain.domain_kwargs.
Also note the use of 3 Variables we defined above:
$variables.estimator_name: This is "bootstrap" in our case.
$variables.false_positive_rate: This is 5.0e-2 or 5% in our case.
$variables.multi_batch_batch_request: This is the multi_batch_batch_request, which gives all 12 Batches of data from the 2018 taxi_data dataset.
End of explanation
expect_column_min: DefaultExpectationConfigurationBuilder = DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_min_to_be_between",
column="$domain.domain_kwargs.column",
min_value="$parameter.min_range_parameter_builder.value[0]",
max_value="$parameter.min_range_parameter_builder.value[1]",
)
expect_column_max: DefaultExpectationConfigurationBuilder = DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_max_to_be_between",
column="$domain.domain_kwargs.column",
min_value="$parameter.max_range_parameter_builder.value[0]",
max_value="$parameter.max_range_parameter_builder.value[1]",
)
Explanation: Instantiating ExpectationConfigurationBuilders
Our Rule will contain 2 ExpectationConfigurationBuilders, one for each of our 2 Expectation types:
expect_column_min_to_be_between
expect_column_max_to_be_between
The Expectations are both ColumnExpectations, so the column parameter will be accessed from the Domain kwargs using $domain.domain_kwargs.column.
The Expectations also take in a min_value and max_value parameter, which our NumericMetricRangeMultiBatchParameterBuilders are estimating. For expect_column_min_to_be_between, these estimated values are accessible using
$parameter.min_range_parameter_builder.value[0] for the min_value, with min_range_parameter_builder being the name of our ParameterBuilder that estimates the column.min metric.
$parameter.min_range_parameter_builder.value[1] for the max_value.
The equivalent $parameter references for expect_column_max_to_be_between would be $parameter.max_range_parameter_builder.value[0] and $parameter.max_range_parameter_builder.value[1] respectively.
End of explanation
more_complex_rule: Rule = Rule(
name="rule_with_parameters",
variables=None,
domain_builder=domain_builder,
parameter_builders=[min_range_parameter_builder, max_range_parameter_builder],
expectation_configuration_builders=[expect_column_min, expect_column_max],
)
my_rbp.add_rule(rule=more_complex_rule)
profiler_result = my_rbp.run(batch_request=multi_batch_batch_request)
profiler_result.expectation_configurations
Explanation: Instantiating RuleBasedProfiler and Running
We instantiate a Rule with our DomainBuilder, ParameterBuilders and ExpectationConfigurationBuilders and load into our RuleBasedProfiler.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import ColumnDomainBuilder
Explanation: As expected, the resulting ExpectationSuite contains our minimum and maximum values, with tip_amount ranging from $-2.16 to $195.05 (a generous tip), and fare_amount ranging from $-98.90 (a refund) to $405,904.54 (a very very long trip).
Appendix
Here we have additional example configuration of DomainBuilder and ParameterBuilders that were not included in the previous 3 Examples.
DomainBuilders
ColumnDomainBuilder
This DomainBuilder outputs column Domains, which are required by ColumnExpectations like (expect_column_median_to_be_between). There are a few ways that the ColumnDomainBuilder can be used.
In the simplest usecase, the ColumnDomainBuilder can output all columns in the dataset as a Domain, or include/exclude columns if you already know which ones you would like. Column suffixes (like _amount) can be used to select columns of interest, as we saw in our examples above.
The ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text).
Semantic types are defined as an Enum object called SemanticDomainTypes, which can be found here : https://github.com/great-expectations/great_expectations/blob/develop/great_expectations/rule_based_profiler/types/domain.py
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 18 # all columns in yellow_tripdata_sample_2018
Explanation: In the simplest usecase, the ColumnDomainBuilder can output all of the columns in yellow_tripdata_sample_2018
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_names=["vendor_id"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
domains
domain_builder: DomainBuilder = ColumnDomainBuilder(
exclude_column_names=["vendor_id"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 17 # all columns in yellow_tripdata_sample_2018 with vendor_id excluded
domains
Explanation: Columns can also be included or excluded by name
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_semantic_types=['numeric'],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 15 # columns in yellow_tripdata_sample_2018 that are numeric
Explanation: As described above, the ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text). This is passed in as part of the include_semantic_types parameter.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import MultiColumnDomainBuilder
domain_builder: DomainBuilder = MultiColumnDomainBuilder(
include_column_names=["vendor_id", "fare_amount", "tip_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 1 # 3 columns are part of a single multi-column domain.
expected_columns: list = ["vendor_id", "fare_amount", "tip_amount"]
assert domains[0]["domain_kwargs"]["column_list"] == expected_columns
Explanation: MultiColumnDomainBuilder
This DomainBuilder outputs multicolumn Domains by taking in a column list in the include_column_names parameter.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import ColumnPairDomainBuilder
domain_builder: DomainBuilder = ColumnPairDomainBuilder(
include_column_names=["vendor_id", "fare_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 1 # 2 columns are part of a single multi-column domain.
expect_columns_dict: dict = {'column_A': 'fare_amount', 'column_B': 'vendor_id'}
assert domains[0]["domain_kwargs"] == expect_columns_dict
Explanation: ColumnPairDomainBuilder
This DomainBuilder outputs columnpair domains by taking in a column pair list in the include_column_names parameter.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import TableDomainBuilder
domain_builder: DomainBuilder = TableDomainBuilder(
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
domains
Explanation: TableDomainBuilder
This DomainBuilder outputs table Domains, which is required by Expectations that act on tables, like (expect_table_row_count_to_equal, or expect_table_columns_to_match_set).
End of explanation
from great_expectations.rule_based_profiler.domain_builder import MapMetricColumnDomainBuilder
domain_builder: DomainBuilder = MapMetricColumnDomainBuilder(
map_metric_name="column_values.nonnull",
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 17  # filtered 1 column that was all None
Explanation: MapMetricColumnDomainBuilder
This DomainBuilder allows you to choose columns based on Map Metrics, which give a yes/no answer for individual values or rows. In this example, we use the Map Metrics column_values.nonnull to filter out a column that was all None from taxi_data.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import CategoricalColumnDomainBuilder
domain_builder: DomainBuilder = CategoricalColumnDomainBuilder(
cardinality_limit_mode="very_few", # VERY_FEW = 10 or less
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 7
Explanation: CategoricalColumnDomainBuilder
This DomainBuilder allows you to choose columns based on their cardinality (number of unique values). The CategoricalColumnDomainBuilder will take in various cardinality_limit_mode values, and in this example we are only interested in columns that have "very_few" (less than 10) unique values. For a full list of valid modes, along with the associated values, please refer to the CardinalityLimitMode enum in:
https://github.com/great-expectations/great_expectations/blob/develop/great_expectations/rule_based_profiler/helpers/cardinality_checker.py
End of explanation
from great_expectations.rule_based_profiler.types.domain import Domain
from great_expectations.execution_engine.execution_engine import MetricDomainTypes
from great_expectations.rule_based_profiler.types import ParameterContainer
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'total_amount'})
Explanation: ParameterBuilders
ParameterBuilders work under the hood by populating a ParameterContainer, which can also be shared by multiple ParameterBuilders. It requires a Domain, and metric_name, with domain_kwargs accessible from the DomainBuilder using the fully qualified parameter $domain.domain_kwargs.
For the sake of simplicity, we will define a Domain object directly using the Domain() constructor, and pass in a column name within domain_kwargs.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import MetricMultiBatchParameterBuilder
numeric_range_parameter_builder: MetricMultiBatchParameterBuilder = (
MetricMultiBatchParameterBuilder(
data_context=data_context,
metric_name="column.min",
metric_domain_kwargs=domain.domain_kwargs,
name="my_column_min",
)
)
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters = {
domain.id: parameter_container,
}
numeric_range_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=multi_batch_batch_request,
)
# we check the parameter container
print(parameter_container.parameter_nodes)
min(parameter_container.parameter_nodes["parameter"]["parameter"]["my_column_min"]["value"])
Explanation: MetricMultiBatchParameterBuilder
The MetricMultiBatchParameterBuilder computes a Metric on data from one or more batches. It takes domain_kwargs, value_kwargs, and metric_name as arguments.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import ValueSetMultiBatchParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'vendor_id'})
# instantiating a new parameter container, since it can contain the results of more than one ParmeterBuilder.
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
value_set_parameter_builder: ValueSetMultiBatchParameterBuilder = (
ValueSetMultiBatchParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
name="my_value_set",
)
)
value_set_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=multi_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: my_column_min[value] now contains a list of 12 values, which are the minimum values of the total_amount column for each of the 12 Batches associated with the 2018 taxi_data data. If we were to use the values in an ExpectationConfigurationBuilder, they would be accessible through the fully-qualified parameter: $parameter.my_column_min.value.
ValueSetMultiBatchParameterBuilder
The ValueSetMultiBatchParameterBuilder is able to build a value set across multiple Batches (or just one Batch).
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import RegexPatternStringParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'vendor_id'})
Explanation: my_value_set[value] now contains a list of 3 values, which is a list of all unique vendor_ids across 12 Batches in the 2018 taxi_data dataset.
RegexPatternStringParameterBuilder
The RegexPatternStringParameterBuilder contains a set of default regex patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
End of explanation
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
regex_parameter_builder: RegexPatternStringParameterBuilder = (
RegexPatternStringParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
name="my_regex_set",
)
)
regex_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=single_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: vendor_id is a single integer. Let's see if our default patterns can match it.
End of explanation
regex_parameter_builder: RegexPatternStringParameterBuilder = (
RegexPatternStringParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
candidate_regexes=["^\\d{1}$"],
name="my_regex_set",
)
)
regex_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=single_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: Looks like my_regex_set[value] is an empty list. This means that none of the evaluated_regexes matched our domain. Let's try the same thing again, but this time passing in a candidate regex that will match our vendor_id column: ^\\d{1}$, which matches single-digit integers anchored at the beginning and end of the string.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import SimpleDateFormatStringParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'pickup_datetime'})
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
simple_date_format_string_parameter_builder: SimpleDateFormatStringParameterBuilder = (
SimpleDateFormatStringParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
name="my_value_set",
)
)
simple_date_format_string_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=single_batch_batch_request,
)
print(parameter_container.parameter_nodes)
parameter_container.parameter_nodes["parameter"]["parameter"]["my_value_set"]["value"]
Explanation: Now my_regex_set[value] contains ^\\d{1}$.
SimpleDateFormatStringParameterBuilder
The SimpleDateFormatStringParameterBuilder contains a set of default Datetime format patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import NumericMetricRangeMultiBatchParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'total_amount'})
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
numeric_metric_range_parameter_builder: NumericMetricRangeMultiBatchParameterBuilder = NumericMetricRangeMultiBatchParameterBuilder(
name="column_mean_range",
metric_name="column.mean",
estimator="bootstrap",
metric_domain_kwargs=domain.domain_kwargs,
false_positive_rate=1.0e-2,
round_decimals=0,
data_context=data_context,
)
numeric_metric_range_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=multi_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: The result contains our matching datetime pattern, which is '%Y-%m-%d %H:%M:%S'
NumericMetricRangeMultiBatchParameterBuilder
The NumericMetricRangeMultiBatchParameterBuilder is able to provide range estimations across Batches using sampling methods. For instance, if we expect a table's row_count to change between Batches, we could calculate the min / max values of row_count by using the NumericMetricRangeMultiBatchParameterBuilder. These parameters could then be used by Expectations that take in ranges, like ExpectTableRowCountToBeBetween, or ExpectColumnValuesToBeBetween.
In this example, we will be taking a single Metric, column.mean and calculating it for a single column, total_amount. The parameter we will be building is the column mean-range, which are the min-max values of the total_amount column across random samples of 12 Batches of the 2018 taxi_data dataaset.
We will also be passing in specifications for estimator, namely bootstrap sampling with a false-positive rate of less than 0.01.
End of explanation
#import shutil
# clean up Expectations directory after running tests
#shutil.rmtree("great_expectations/expectations/tmp")
#os.remove("great_expectations/expectations/.ge_store_backend_id")
Explanation: As we see, the mean value range for the total_amount column is 16.0 to 44.0
Optional: Clean-up Directory
As part of running this notebook, the RuleBasedProfiler will create a number of ExpectationSuite configurations in the great_expectations/expectations/tmp directory. Optionally run the following cell to clean up the directory.
End of explanation
<END_TASK> |
119 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Level 1
In diesem Level werden wir unsere ersten Zeilen Python kennenlernen. Wir werden lernen, was ein integer, ein float, ein string ist, wie wir mit Python Text ausgeben und einlesen können, erste Berechnungen anstellen können, die Ergebnisse von Berechnungen in Variablen speichern und wie wir unseren Code kommentieren können.
Ausgabe
Step1: Einfache Operationen
Im Python Interpreter werden Kommandos zeilenweise eingegeben und vom Interpreter zeilenweise interpretiert und ausgeführt. Der Interpreter gibt dabei immer das Ergebnis (genauer den Rückgabewert des Ausdrucks) zurück.
Das bedeutet, wir können den Interpreter benutzen um erste Berechnungen durchzuführen und die mathematischen Operatoren kennenlernen.
Step2: Genauer betrachtet besteht die Zeile 4 + 34 aus zwei Literalen (4 und 34) und einem Operator (+), die kombiniert den Ausdruck ergeben. Ein Literal ist die direkte Darstellung eines Wertes. Operatoren verknüpfen Werte und geben Werte zurück.
Bei den Werten im obigen Beispiel handelt es sich um Werte vom Typ integer. Diese stellen ganze Zahlen dar.
Step3: Oben sind die wichtigsten Operatoren für Werte des Integer Typs aufgelistet. Bemerkenswert ist, dass es drei Arten der Divison gibt
Step4: Ein float repräsentiert Fließkommazahlen. Dieselben Operatoren, die oben auf Integer angewandt wurden, können auch auf floats angewendet werden. Wichtig ist, dass dabei das Ergebnis stets vom Typ float ist.
Zum Umwandeln bieten die Typen Funktionen an, so kann ein Objekt mit der int() Funktion in einen integer und mit derfloat() Funktion in einen float umgewandelt werden. Beim Umwandeln eines integers in einen float gehen allerdings etwaige Nachkommastellen verloren.
Step5: Variablen
Readability counts. - Zen of Python
Variablen werden benutzt, um Werte für die spätere Wiederbenutzung zu speichern. Dabei zeigt die Variable lediglich auf einen Wert. Eine Variable hat dabei keinen festen Typ, nur die Werte
Step6: Bei der Benennung von Variablen sollte darauf geachtet werden, kurze aber verständliche Variablennamen zu benutzen, da so klar ist wozu die Variable benutzt wird. Auf keinen Fall sollten Variablennamen wie l, O oder I benutzt werden, da diese, je nach Schriftart, wie 0 oder 1 aussehen können.
Step7: Python besitzt eine streng dynamische Typisierung, das heißt
Step8: Strings
Ein String ist eine Zeichenkette und wird repräsentiert Text. Ein String kann mit "" oder '' definiert werden.
Step10: Strings sind allerdings nicht auf eine Zeile begrenzt. Multiline strings werden durch dreifache Anführungszeichen definiert.
Step11: Eingabe
Die input()Funktion wird benutzt um Eingaben über die Standardeingabe zu erhalten. Dabei liefert die input()Funktion immer einen string.
Step12: Schlüsselwörter
Einige Begriffe sind integrale Bestandteile von Python und können daher nicht als Variablenname benutzt werden. Diese Schlüsselwörter variieren von Version zu Version. Eine Liste aller Schlüsselwörter können wir uns anzeigen lassen
Step13: Kommentare
If the implementation is hard to explain, it's a bad idea - Zen of Python
Kommentare dienen dazu den Quellcode für sich und andere lesbarer und verständlich zu machen. Kommentare können in Python mittels einer Raute (#) eingefügt werden. Dabei kann eine ganze Zeile zum Kommentar werden, oder ein Kommentar hinter einem Befehl eingefügt werden, dabei sollten wir vor der Raute zwei Leerzeichen Platz lassen. <br>
Dabei sollten wir beachten, dass ein Kommentar nie beschreiben sollte, was der Code macht, sondern warum der Code etwas macht.
Step14: Zen of Python
Der "Zen of Python" bietet einige Leitlinien, an denen sich Python als Sprache orientiert und an denen wir uns auch beim Programmieren mit Python orientieren können. In bisherigen Abschnitten wurden Teile bereits zitiert, unten sind einmal alle Punkte aufgelistet. | <ASSISTANT_TASK:>
Python Code:
# For standard output we use the print() function
print("Hallo Welt!")
# We can also print several values, separated by commas:
print("foo", "bar")
# With the help() function we display
# the help for the print() function:
help(print)
# Output with a custom separator:
print("foo", "bar", sep="#")
# Output with a custom end string:
print("foo", "bar", end="##\n")
print("test")
Explanation: Level 1
In this level we will write our first lines of Python. We will learn what an integer, a float and a string are, how to print and read text with Python, how to perform first calculations, how to store the results of calculations in variables, and how to comment our code.
Output
End of explanation
4 + 34
Explanation: Simple operations
In the Python interpreter, commands are entered line by line and are interpreted and executed by the interpreter line by line. The interpreter always returns the result (more precisely, the return value of the expression).
This means we can use the interpreter to perform our first calculations and to get to know the mathematical operators.
End of explanation
print(3 + 4) # addition
print(4 - 6) # subtraction
print(3 * 7) # multiplication
print(3 // 2) # integer (floor) division
print(3 % 2) # division with remainder (modulo)
print(3 / 2) # division
print(2 ** 4) # power, alternatively pow(2, 4)
print(4 << 1) # bit shift to the left, alternatively 4 * (2 ** 1)
print(4 >> 1) # bit shift to the right, alternatively 4 // (2 ** 1)
print(5 ^ 1) # bitwise XOR
Explanation: Looked at more closely, the line 4 + 34 consists of two literals (4 and 34) and one operator (+), which combined form the expression. A literal is the direct representation of a value. Operators combine values and return values.
The values in the example above are of type integer, which represents whole numbers.
End of explanation
print(4.5 + 3.8)
Explanation: Above, the most important operators for values of the integer type are listed. Notably, there are three kinds of division: floor division, division with remainder, and "true" division. Floor division returns a rounded-down result as an integer, division with remainder returns the remainder of the floor division, and "true" division returns a value of type float.
End of explanation
print(int(3.5))
print(float(4))
Explanation: A float represents floating-point numbers. The same operators that were applied to integers above can also be applied to floats. Importantly, the result is then always of type float.
For conversions, the types provide functions: an object can be converted into an integer with the int() function and into a float with the float() function. When converting a float into an integer, however, any decimal places are lost.
End of explanation
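One more illustrative comparison: int() truncates, while the built-in round() rounds to the nearest integer.
print(int(3.9))    # 3  -> int() simply drops the fractional part
print(round(3.9))  # 4  -> round() rounds to the nearest whole number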
ham = 4
egg = 12
ham_price = 2.99
egg_price = 0.49
print(ham, egg)
print(ham_price, egg_price)
print()
print("ham: ", ham * ham_price)
print("egg: ", egg * egg_price)
summ = ham * ham_price + egg * egg_price
print("sum: ", summ)
Explanation: Variables
Readability counts. - Zen of Python
Variables are used to store values for later reuse. A variable merely points to a value. A variable itself has no fixed type, only the values do.
End of explanation
# we can determine the type of a value with type():
print(type("a"))
print(type(2))
print(type(4.8))
Explanation: When naming variables, take care to use short but understandable variable names, so that it is clear what the variable is used for. Variable names such as l, O or I should never be used, because depending on the font they can look like 0 or 1.
End of explanation
s = "String"
print(type(s))
s = 4
print(type(s))
Explanation: Python is strictly and dynamically typed, which means:
A variable can point to values of different types.
Every object has a type.
With every new assignment the value of a variable is overwritten, and the type of the value can change in the process.
End of explanation
hallo = 'Hallo Welt!'
text = "Programmieren mit Python."
print(hallo, text, sep="\n")
Explanation: Strings
A string is a sequence of characters and represents text. A string can be defined with "" or ''.
End of explanation
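Having two quote styles is convenient, for example, when the text itself contains a quote character:
# Double quotes avoid escaping the apostrophe inside the string
quote = "It's Python time"
print(quote)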
multiline = """
Dies ist ein
mehrzeiliger
String
    mit Einrückung.
"""
print(multiline)
# Strings can be "added" together; this is called concatenation
foo = "foo"
bar = "bar"
foobar = foo + bar
print(foobar)
# Strings can also be "multiplied":
print(10*"#" + " foo " + 10*"#")
# len() gives us the length of an object:
text = "Programmieren mit Python."
length = len(text)
print(text)
print(length*"*")
print(length)
# with the str() function, objects can be converted into a string:
s = str(12)
print(s)
Explanation: Strings are not limited to a single line, however. Multiline strings are defined with triple quotation marks.
End of explanation
eingabe = input("Bitte etwas eingeben: ")
print(eingabe)
print(type(eingabe))
Explanation: Input
The input() function is used to read input from standard input. Note that input() always returns a string.
End of explanation
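Since input() always returns a string, numeric input has to be converted explicitly, for example with int():
# Illustrative example: read a number and double it
number = int(input("Please enter a number: "))
print(number * 2)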
import keyword
print(keyword.kwlist)
True = 0  # number of attempts
Explanation: Keywords
Some terms are integral parts of Python and therefore cannot be used as variable names. These keywords vary from version to version. We can display a list of all keywords:
End of explanation
# Compute the sum of two numbers
sum1 = 5  # first summand
sum2 = 7  # second summand
print(sum1 + sum2)
Explanation: Comments
If the implementation is hard to explain, it's a bad idea - Zen of Python
Comments serve to make the source code more readable and understandable for yourself and for others. In Python, comments are added with a hash sign (#). A whole line can be a comment, or a comment can be placed after a statement; in that case we should leave two spaces before the hash sign.
Note that a comment should never describe what the code does, but rather why the code does something.
End of explanation
import this
Explanation: Zen of Python
The "Zen of Python" provides some guidelines that Python follows as a language and that we can also follow when programming with Python. Parts of it have already been quoted in the previous sections; below, all of its points are listed.
End of explanation
<END_TASK> |
120 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This notebook demonstrates the LeNet model.
First we load some dependencies for our code.
Step1: Now we can start to define the actual convolution code. We start by defining an object that represents a single layer of convolution that does the actual convolution operation followed by pooling over the output of that convolution. These layers will be stacked in the final model.
Step2: This next method uses the convolution layer above to make a stack of them and adds a hidden layer followed by a logistic regression classification layer on top.
Step3: This cell runs the model and allows you to play with a few hyperparameters. The ones below take about 1 to 2 minutes to run.
Step4: For most convolution model it can be interesting to show what the trained filters look like. The code below does that from the parameters returned by the training function above. In this model there isn't much of an effect since the filters are 5x5 and we can't see much unfortunately.
Step5: What can also be interesting is to draw the outputs of the filters for an example. This works somewhat better for this model.
Step6: Some things you can try with this model | <ASSISTANT_TASK:>
Python Code:
import numpy
import theano
import theano.tensor as T
from logistic_sgd import LogisticRegression
from mlp import HiddenLayer
Explanation: This notebook demonstrates the LeNet model.
First we load some dependencies for our code.
End of explanation
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv
class LeNetConvPoolLayer(object):
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
assert image_shape[1] == filter_shape[1]
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" / pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True
)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
image_shape=image_shape
)
# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
Explanation: Now we can start to define the actual convolution code. We start by defining an object that represents a single layer of convolution that does the actual convolution operation followed by pooling over the output of that convolution. These layers will be stacked in the final model.
End of explanation
import time
import fuel
from fuel.streams import DataStream
from fuel.schemes import SequentialScheme
from fuel.transformers import Cast
fuel.config.floatX = theano.config.floatX = 'float32'
def evaluate_lenet5(train, test, valid,
learning_rate=0.1, n_epochs=200,
nkerns=[20, 50], batch_size=500):
rng = numpy.random.RandomState(23455)
train_stream = DataStream.default_stream(
train, iteration_scheme=SequentialScheme(train.num_examples,
batch_size))
valid_stream = DataStream.default_stream(
valid, iteration_scheme=SequentialScheme(valid.num_examples,
batch_size))
test_stream = DataStream.default_stream(
test, iteration_scheme=SequentialScheme(test.num_examples,
batch_size))
x = T.tensor4('x')
yi = T.imatrix('y')
y = yi.reshape((yi.shape[0],))
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
rng,
input=x,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh
)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
model_errors = theano.function(
[x, yi],
layer3.errors(y)
)
# create a list of all model parameters to be fit by gradient descent
params = layer3.params + layer2.params + layer1.params + layer0.params
# create a list of gradients for all model parameters
grads = T.grad(cost, params)
# train_model is a function that updates the model parameters by
# SGD Since this model has many parameters, it would be tedious to
# manually create an update rule for each model parameter. We thus
# create the updates list by automatically looping over all
# (params[i], grads[i]) pairs.
updates = [
(param_i, param_i - learning_rate * grad_i)
for param_i, grad_i in zip(params, grads)
]
train_model = theano.function(
[x, yi],
cost,
updates=updates
)
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is found
# a relative improvement of this much is considered significant
improvement_threshold = 0.995
n_train_batches = (train.num_examples + batch_size - 1) // batch_size
# go through this many minibatches before checking the network on
# the validation set; in this case we check every epoch
validation_frequency = min(n_train_batches, patience / 2)
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = time.clock()
epoch = 0
iter = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
minibatch_index = 0
for minibatch in train_stream.get_epoch_iterator():
iter += 1
minibatch_index += 1
if iter % 100 == 0:
print('training @ iter = ', iter)
error = train_model(minibatch[0], minibatch[1])
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [model_errors(vb[0], vb[1]) for vb
in valid_stream.get_epoch_iterator()]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
model_errors(tb[0], tb[1])
for tb in test_stream.get_epoch_iterator()
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = time.clock()
print('Optimization complete.')
print('Best validation score of %f %% obtained at iteration %i, '
'with test performance %f %%' %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print('The code ran for %.2fm' % ((end_time - start_time) / 60.))
# This is to make the pretty pictures in the cells below
layer0_out = theano.function([x], layer0.output)
layer1_out = theano.function([x], layer1.output)
return params, layer0_out, layer1_out
Explanation: This next method uses the convolution layer above to make a stack of them and adds a hidden layer followed by a logistic regression classification layer on top.
End of explanation
from fuel.datasets import MNIST
train = MNIST(which_sets=('train',), subset=slice(0, 50000))
valid = MNIST(which_sets=('train',), subset=slice(50000, 60000))
test = MNIST(which_sets=('test',))
params, layer0_out, layer1_out = evaluate_lenet5(train, test, valid,
learning_rate=0.1, n_epochs=10,
nkerns=[10, 25], batch_size=50)
Explanation: This cell runs the model and allows you to play with a few hyperparameters. The ones below take about 1 to 2 minutes to run.
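For instance (an illustrative alternative setting, not from the original notebook; expect a noticeably longer runtime, and note that the filter-plotting cells further down assume nkerns=[10, 25] when reshaping), the same call could be made with larger kernel counts:
params, layer0_out, layer1_out = evaluate_lenet5(train, test, valid,
                                                 learning_rate=0.1, n_epochs=10,
                                                 nkerns=[20, 50], batch_size=100)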
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from utils import tile_raster_images
filts1 = params[6].get_value()
filts2 = params[4].get_value()
plt.clf()
# Increase the size of the figure
plt.gcf().set_size_inches(15, 10)
# Make a grid for the two layers
gs = plt.GridSpec(1, 2, width_ratios=[1, 25], height_ratios=[1, 1])
a = plt.subplot(gs[0])
b = plt.subplot(gs[1])
# Show the first layer filters (the small column)
a.imshow(tile_raster_images(filts1.reshape(10, 25), img_shape=(5, 5), tile_shape=(10, 1), tile_spacing=(1,1)),
cmap="Greys", interpolation="none")
a.axis('off')
# Show the second layer filters (the large block)
b.imshow(tile_raster_images(filts2.reshape(250, 25), img_shape=(5, 5), tile_shape=(10, 25), tile_spacing=(1,1)),
cmap="Greys", interpolation="none")
b.axis('off')
Explanation: For most convolution models it can be interesting to show what the trained filters look like. The code below does that from the parameters returned by the training function above. In this model there isn't much to see, unfortunately, since the filters are only 5x5.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from utils import tile_raster_images
# Grab some input examples from the test set (we cheat a bit here)
sample = test.get_data(None, slice(0, 50))[0]
# We will print this example amongst the batch
example = 7
plt.gcf()
# Increase the size of the figure
plt.gcf().set_size_inches(15, 10)
gs = plt.GridSpec(1, 3, width_ratios=[1, 1, 1], height_ratios=[1, 1, 1])
# Draw the input data
a = plt.subplot(gs[0])
a.imshow(sample[example, 0], cmap="Greys", interpolation='none')
a.axis('off')
# Compute first layer output
out0 = layer0_out(sample)[example]
# Draw its output
b = plt.subplot(gs[1])
b.imshow(tile_raster_images(out0.reshape(10, 144), img_shape=(12, 12), tile_shape=(5, 2), tile_spacing=(1, 1)),
cmap="Greys", interpolation='none')
b.axis('off')
# Compute the second layer output
out1 = layer1_out(sample)[example]
# Draw it
c = plt.subplot(gs[2])
c.imshow(tile_raster_images(out1.reshape(25, 16), img_shape=(4, 4), tile_shape=(5, 5), tile_spacing=(1, 1)),
cmap="Greys", interpolation='none')
c.axis('off')
Explanation: What can also be interesting is to draw the outputs of the filters for an example. This works somewhat better for this model.
End of explanation
%load lenet.py
Explanation: Some things you can try with this model:
- change the non-linearity of the convolution to a rectifier unit (see the sketch below).
- add an extra MLP layer.
If you break the code too much, you can get back to the working initial code by loading the lenet.py file with the cell below. (Or just reset the git repo ...)
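For the first suggestion, a minimal sketch of the change (an assumption on my part, not code from the original lenet.py): define a rectifier and use it in place of T.tanh inside LeNetConvPoolLayer.
def relu(z):
    # element-wise rectifier max(0, z), built from basic Theano ops
    # (depending on the Theano version, T.nnet.relu may be available instead)
    return T.maximum(0., z)
# In LeNetConvPoolLayer.__init__ the activation line would then read:
# self.output = relu(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))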
End of explanation
<END_TASK> |
121 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Bayesian MLP for MNIST using preconditioned SGLD
We use the Jax Bayes library
by James Vuckovic
to fit an MLP to MNIST using SGD, and SGLD (with RMS preconditioning).
Code is based on
Step1: Data
Step3: Model
Step4: SGD
Step5: SGLD
Step6: Uncertainty analysis
We select the predictions above a confidence threshold, and compute the predictive accuracy on that subset. As we increase the threshold, the accuracy should increase, but fewer examples will be selected.
Step7: SGD
For the plugin estimate, the model is very confident on nearly all of the points.
Step9: SGLD
Step10: Distribution shift
We now examine the behavior of the models on the Fashion MNIST dataset.
We expect the predictions to be much less confident, since the inputs are now 'out of distribution'. We will see that this is true for the Bayesian approach, but not for the plugin approximation.
Step11: SGD
We see that the plugin estimate is confident (but wrong!) on many of the predictions, which is undesirable.
If consider a confidence threshold of 0.6,
the plugin approach predicts on about 80% of the examples,
even though the accuracy is only about 6% on these.
Step12: SGLD
If consider a confidence threshold of 0.6,
the Bayesian approach predicts on less than 20% of the examples,
on which the accuracy is ~4%. | <ASSISTANT_TASK:>
Python Code:
%%capture
!pip install git+https://github.com/deepmind/dm-haiku
!pip install git+https://github.com/jamesvuc/jax-bayes
import haiku as hk
import jax.numpy as jnp
from jax.experimental import optimizers
import jax
import jax_bayes
import sys, os, math, time
import numpy as onp
import numpy as np
from functools import partial
from matplotlib import pyplot as plt
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow_datasets as tfds
Explanation: Bayesian MLP for MNIST using preconditioned SGLD
We use the Jax Bayes library
by James Vuckovic
to fit an MLP to MNIST using SGD, and SGLD (with RMS preconditioning).
Code is based on:
https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist.ipynb
https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist_mcmc.ipynb
Setup
End of explanation
def load_dataset(split, is_training, batch_size):
ds = tfds.load("mnist:3.*.*", split=split).cache().repeat()
if is_training:
ds = ds.shuffle(10 * batch_size, seed=0)
ds = ds.batch(batch_size)
# return tfds.as_numpy(ds)
return iter(tfds.as_numpy(ds))
# load the data into memory and create batch iterators
train_batches = load_dataset("train", is_training=True, batch_size=1_000)
val_batches = load_dataset("train", is_training=False, batch_size=10_000)
test_batches = load_dataset("test", is_training=False, batch_size=10_000)
Explanation: Data
End of explanation
nclasses = 10
def net_fn(batch, sig):
Standard LeNet-300-100 MLP
x = batch["image"].astype(jnp.float32) / 255.0
# x has size (1000, 28, 28, 1)
D = np.prod(x.shape[1:]) # 784
# To match initialization of linear layer
# sigma = 1/sqrt(fan-in)
# https://dm-haiku.readthedocs.io/en/latest/api.html#id1
# w_init = hk.initializers.TruncatedNormal(stddev=stddev)
sizes = [D, 300, 100, nclasses]
sigmas = [sig / jnp.sqrt(fanin) for fanin in sizes]
mlp = hk.Sequential(
[
hk.Flatten(),
hk.Linear(sizes[1], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[0]), b_init=jnp.zeros),
jax.nn.relu,
hk.Linear(sizes[2], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[1]), b_init=jnp.zeros),
jax.nn.relu,
hk.Linear(sizes[3], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[2]), b_init=jnp.zeros),
]
)
return mlp(x)
# L2 regularizer will be added to loss
reg = 1e-4
Explanation: Model
End of explanation
net = hk.transform(partial(net_fn, sig=1))
lr = 1e-3
opt_init, opt_update, opt_get_params = optimizers.rmsprop(lr)
# instantiate the model parameters --- requires a sample batch to get size
params_init = net.init(jax.random.PRNGKey(42), next(train_batches))
# intialize the optimzier state
opt_state = opt_init(params_init)
def loss(params, batch):
logits = net.apply(params, None, batch)
labels = jax.nn.one_hot(batch["label"], 10)
l2_loss = 0.5 * sum(jnp.sum(jnp.square(p)) for p in jax.tree_leaves(params))
softmax_crossent = -jnp.mean(labels * jax.nn.log_softmax(logits))
return softmax_crossent + reg * l2_loss
@jax.jit
def accuracy(params, batch):
preds = net.apply(params, None, batch)
return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"])
@jax.jit
def train_step(i, opt_state, batch):
params = opt_get_params(opt_state)
dx = jax.grad(loss)(params, batch)
opt_state = opt_update(i, dx, opt_state)
return opt_state
print(params_init["linear"]["w"].shape)
def callback(step, params, train_eval, test_eval, print_every=500):
if step % print_every == 0:
# Periodically evaluate classification accuracy on train & test sets.
train_accuracy = accuracy(params, next(train_eval))
test_accuracy = accuracy(params, next(test_eval))
train_accuracy, test_accuracy = jax.device_get((train_accuracy, test_accuracy))
print(f"[Step {step}] Train / Test accuracy: " f"{train_accuracy:.3f} / {test_accuracy:.3f}.")
%%time
nsteps = 5000
for step in range(nsteps + 1):
opt_state = train_step(step, opt_state, next(train_batches))
params_sgd = opt_get_params(opt_state)
callback(step, params_sgd, val_batches, test_batches)
Explanation: SGD
End of explanation
lr = 5e-3
num_samples = 10 # number of samples to approximate the posterior
init_stddev = 0.01 # 0.1 # params sampled around params_init
# we initialize all weights to 0 since we will be sampling them anyway
# net_bayes = hk.transform(partial(net_fn, sig=0))
sampler_fns = jax_bayes.mcmc.rms_langevin_fns
seed = 0
key = jax.random.PRNGKey(seed)
sampler_init, sampler_propose, sampler_update, sampler_get_params = sampler_fns(
key, num_samples=num_samples, step_size=lr, init_stddev=init_stddev
)
@jax.jit
def accuracy_bayes(params_samples, batch):
# average the logits over the parameter samples
pred_fn = jax.vmap(net.apply, in_axes=(0, None, None))
preds = jnp.mean(pred_fn(params_samples, None, batch), axis=0)
return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"])
# the log-probability is the negative of the loss
logprob = lambda p, b: -loss(p, b)
# build the mcmc step. This is like the opimization step, but for sampling
@jax.jit
def mcmc_step(i, sampler_state, sampler_keys, batch):
# extract parameters
params = sampler_get_params(sampler_state)
# form a partial eval of logprob on the data
logp = lambda p: logprob(p, batch)
# evaluate *per-sample* gradients
fx, dx = jax.vmap(jax.value_and_grad(logp))(params)
# generat proposal states for the Markov chains
sampler_prop_state, new_keys = sampler_propose(i, dx, sampler_state, sampler_keys)
# we don't need to re-compute gradients for the accept stage (unadjusted Langevin)
fx_prop, dx_prop = fx, dx
# accept the proposal states for the markov chain
sampler_state, new_keys = sampler_update(i, fx, fx_prop, dx, sampler_state, dx_prop, sampler_prop_state, new_keys)
return jnp.mean(fx), sampler_state, new_keys
def callback_bayes(step, params, val_batches, test_batches, print_every=500):
if step % print_every == 0:
val_acc = accuracy_bayes(params, next(val_batches))
test_acc = accuracy_bayes(params, next(test_batches))
print(f"step = {step}" f" | val acc = {val_acc:.3f}" f" | test acc = {test_acc:.3f}")
%%time
#get a single sample of the params using the normal hk.init(...)
params_init = net.init(jax.random.PRNGKey(42), next(train_batches))
# get a SamplerState object with `num_samples` params along dimension 0
# generated by adding Gaussian noise (see sampler_fns(..., init_dist='normal'))
sampler_state, sampler_keys = sampler_init(params_init)
# iterate the the Markov chain
nsteps = 5000
for step in range(nsteps+1):
train_logprob, sampler_state, sampler_keys = \
mcmc_step(step, sampler_state, sampler_keys, next(train_batches))
params_samples = sampler_get_params(sampler_state)
callback_bayes(step, params_samples, val_batches, test_batches)
print(params_samples["linear"]["w"].shape) # 10 samples of the weights for first layer
Explanation: SGLD
End of explanation
test_batch = next(test_batches)
from jax_bayes.utils import entropy, certainty_acc
def plot_acc_vs_confidence(predict_fn, test_batch):
# plot how accuracy changes as we increase the required level of certainty
preds = predict_fn(test_batch) # (batch_size, n_classes) array of probabilities
acc, mask = certainty_acc(preds, test_batch["label"], cert_threshold=0)
thresholds = [0.1 * i for i in range(11)]
cert_accs, pct_certs = [], []
for t in thresholds:
cert_acc, cert_mask = certainty_acc(preds, test_batch["label"], cert_threshold=t)
cert_accs.append(cert_acc)
pct_certs.append(cert_mask.mean())
fig, ax = plt.subplots(1)
line1 = ax.plot(thresholds, cert_accs, label="accuracy at certainty", marker="x")
line2 = ax.axhline(y=acc, label="regular accuracy", color="black")
ax.set_ylabel("accuracy")
ax.set_xlabel("certainty threshold")
axb = ax.twinx()
line3 = axb.plot(thresholds, pct_certs, label="pct of certain preds", color="green", marker="x")
axb.set_ylabel("pct certain")
lines = line1 + [line2] + line3
labels = [l.get_label() for l in lines]
ax.legend(lines, labels, loc=6)
return fig, ax
Explanation: Uncertainty analysis
We select the predictions above a confidence threshold, and compute the predictive accuracy on that subset. As we increase the threshold, the accuracy should increase, but fewer examples will be selected.
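A minimal NumPy sketch of this selective-accuracy idea (my own illustration; the plots below use the jax_bayes helper certainty_acc imported in the code):
def selective_accuracy(probs, labels, threshold):
    # probs: (N, C) predictive probabilities, labels: (N,) integer class labels
    keep = probs.max(axis=1) >= threshold
    preds = probs.argmax(axis=1)
    coverage = keep.mean()
    acc = (preds[keep] == labels[keep]).mean() if keep.any() else float("nan")
    return acc, coverage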
End of explanation
# plugin approximation to posterior predictive
@jax.jit
def posterior_predictive_plugin(params, batch):
logit_pp = net.apply(params, None, batch)
return jax.nn.softmax(logit_pp, axis=-1)
def pred_fn(batch):
return posterior_predictive_plugin(params_sgd, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, test_batch)
plt.savefig("acc-vs-conf-sgd.pdf")
plt.show()
Explanation: SGD
For the plugin estimate, the model is very confident on nearly all of the points.
End of explanation
def posterior_predictive_bayes(params_sampled, batch):
computes the posterior_predictive P(class = c | inputs, params) using a histogram
pred_fn = lambda p: net.apply(p, jax.random.PRNGKey(0), batch)
pred_fn = jax.vmap(pred_fn)
logit_samples = pred_fn(params_sampled) # n_samples x batch_size x n_classes
pred_samples = jnp.argmax(logit_samples, axis=-1) # n_samples x batch_size
n_classes = logit_samples.shape[-1]
batch_size = logit_samples.shape[1]
probs = np.zeros((batch_size, n_classes))
for c in range(n_classes):
idxs = pred_samples == c
probs[:, c] = idxs.sum(axis=0)
return probs / probs.sum(axis=1, keepdims=True)
def pred_fn(batch):
return posterior_predictive_bayes(params_samples, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, test_batch)
plt.savefig("acc-vs-conf-sgld.pdf")
plt.show()
Explanation: SGLD
End of explanation
fashion_ds = tfds.load("fashion_mnist:3.*.*", split="test").cache().repeat()
fashion_test_batches = tfds.as_numpy(fashion_ds.batch(10_000))
fashion_test_batches = iter(fashion_test_batches)
fashion_batch = next(fashion_test_batches)
Explanation: Distribution shift
We now examine the behavior of the models on the Fashion MNIST dataset.
We expect the predictions to be much less confident, since the inputs are now 'out of distribution'. We will see that this is true for the Bayesian approach, but not for the plugin approximation.
End of explanation
def pred_fn(batch):
return posterior_predictive_plugin(params_sgd, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch)
plt.savefig("acc-vs-conf-sgd-fashion.pdf")
plt.show()
Explanation: SGD
We see that the plugin estimate is confident (but wrong!) on many of the predictions, which is undesirable.
If we consider a confidence threshold of 0.6,
the plugin approach predicts on about 80% of the examples,
even though the accuracy is only about 6% on these.
End of explanation
def pred_fn(batch):
return posterior_predictive_bayes(params_samples, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch)
plt.savefig("acc-vs-conf-sgld-fashion.pdf")
plt.show()
Explanation: SGLD
If we consider a confidence threshold of 0.6,
the Bayesian approach predicts on less than 20% of the examples,
on which the accuracy is ~4%.
End of explanation
<END_TASK> |
122 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I have a csv file which looks like | <ASSISTANT_TASK:>
Python Code:
from sklearn.cluster import KMeans
df = load_data()
kmeans = KMeans(n_clusters=2)
labels = kmeans.fit_predict(df[['mse']])
<END_TASK> |
123 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Control Flow
Step1: NOTE on notation
* _x, _y, _z, ...
Step2: Q5. Given x, return the truth value of NOT x element-wise. | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
Explanation: Control Flow
End of explanation
x = tf.constant([True, False, False], tf.bool)
y = tf.constant([True, True, False], tf.bool)
Explanation: NOTE on notation
* _x, _y, _z, ...: NumPy 0-d or 1-d arrays
* _X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays
* x, y, z, ...: 0-d or 1-d tensors
* X, Y, Z, ...: 2-d or higher dimensional tensors
Control Flow Operations
Q1. Let x and y be random 0-D tensors. Return x + y
if x < y and x - y otherwise.
Q2. Let x and y be 0-D int32 tensors randomly selected from 0 to 5. Return x + y 2 if x < y, x - y elif x > y, 0 otherwise.
Q3. Let X be a tensor [[-1, -2, -3], [0, 1, 2]] and Y be a tensor of zeros with the same shape as X. Return a boolean tensor that yields True if X equals Y elementwise.
Logical Operators
Q4. Given x and y below, return the truth value x AND/OR/XOR y element-wise.
End of explanation
x = tf.constant([True, False, False], tf.bool)
Explanation: Q5. Given x, return the truth value of NOT x element-wise.
End of explanation
<END_TASK> |
124 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Parameters and Example Data
Step1: True Changepoints
Step2: Estimated Changepoints with GFGL smoother
Step3: Visualising Graphical Models
To visualise the estimated graphical model in comparison to the ground truth, we simply feed the $\Theta$ elements into the DynamicGraphicalModel holding a number of graphs.
Step4: Note that we only select three graphs to compare with the ground truth, i.e. those that lay inbetween the major changepoints at 30 and 60.
Step5: Generating Dynamic Graphical Models | <ASSISTANT_TASK:>
Python Code:
y = np.load('../data/y.npy')
sigma = np.load('../data/sigma.npy')
sigma_inv = np.load('../data/sigma_inv.npy')
T = 90 # Steps
K = 2 # Changepoints
P = 10 # Variables
M = 5 # Active Edges
eps = 0.000001 # Edge threshold epsilon
Explanation: Parameters and Example Data
End of explanation
edges = get_edges(sigma_inv[0], eps)
change_points = get_change_points(sigma_inv, eps)
fig = plot_data_with_cps(y, change_points, ymin=-5, ymax=5)
verbose = False
tol = 1e-4
max_iter = 500
gammas = [1, 1, 1] # gamma_V1, gamma_V2, gamma_W
lambda1G = 0.15
lambda2G = 25
lambda1I = 0.25
lambda2I = 2
Explanation: True Changepoints
End of explanation
gfgl = GroupFusedGraphLasso(lambda1G, lambda2G, gammas[0], gammas[1], gammas[2], tol, max_iter, verbose)
gfgl.fit(y)
cps = get_change_points(gfgl.sparse_Theta, 0.01)
fig = plot_data_with_cps(y, cps, ymin=-5, ymax=5)
Explanation: Estimated Changepoints with GFGL smoother
End of explanation
from graphtime.simulate import DynamicGraphicalModel
Explanation: Visualising Graphical Models
To visualise the estimated graphical model in comparison to the ground truth, we simply feed the $\Theta$ elements into the DynamicGraphicalModel holding a number of graphs.
End of explanation
DGM = DynamicGraphicalModel.from_Thetas(sigma_inv)
DGM_est = DynamicGraphicalModel.from_Thetas(gfgl.sparse_Theta[[0, 45, 75]], eps=0.1)
DGM.draw();
DGM_est.draw();
Explanation: Note that we only select three graphs to compare with the ground truth, i.e. those that lay inbetween the major changepoints at 30 and 60.
End of explanation
DGM = DynamicGraphicalModel(10)
DGM.generate_graphs(n_edges_list=[3, 6])
DGM.draw();
X = DGM.sample(60, changepoints=[30])
plot_data_with_cps(X, [30]);
Explanation: Generating Dynamic Graphical Models
End of explanation
<END_TASK> |
125 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This notebook will serve to demonstrate Retina functionality within the Jupyter notebook as it relates to Matplotlib.
Retina, Matplotlib, and the Jupyter Notebook
Retina subclasses the Matplotlib Axes hierarchy. Retina axes can be generated from a call to Pyplot's subplot function. Let's walk through how this is done.
Retina 2D Functionality
Step1: Let's set up our plot so as to make it look more professional.
Step2: Not too shabby. I find it annoying that the label for the y-axis is tilted sideways. Maybe someone more familiar with Matplotlib than myself could propose a solution for this.
Now that we have our axes looking respectable, let's add some data to our plot. We will organize our data via the use of Retina's Layer2D class. It provides a way of grouping Matplotlib artists so that they can be styled, displayed, hidden, etc. in tandem. Let's start by plotting some trig functions.
Step3: As you can see, we have created two layers
Step4: Here is another issue that I can't figure out. If all the plotting code is executed simultaneously in the notebook, the plot generates fine. If, however, the code is run in individual cells, the plots are never generated. I'm not sure if this is a Matplotlib-Jupyter issue or a Retina issue. I think it's probably the former, but maybe some of the Jupyter devs can address this. This seems to happen regardless of whether %matplotlib inline or %matplotlib notebook magics are used, with the caveat that %matplotlib notebook allows generated plots to interact effectively with calls from subsequent cells provided that the plot is generated correctly the first time. It seems that splitting the plotting code across cells causes major hiccups.
Here is an example where all the code is run simultaneously...
Step5: Let's try adding some lines to our plot.
Step6: And now let's try toggling the display of the sin layer.
Step7: We can also make plots in layers "boldfaced".
Step8: Bolding effects can be applied in succession.
Step9: And they can be undone.
Step10: I'm not sure if the Jupyter team has considered this, but it would be nice if the %matplotlib notebook mode generated new plots each time a cell was run. The way things currently stand, I have to scroll up to the original plot in order to watch my changes propagate. This scrolling can quickly become infeasible in lengthier notebooks.
Retina 3D Functionality
Now, let's try creating a 3D plot using Retina's 3D axes.
Step11: We can add planes to 3D layers by specifying a point on the plane and a normal vector to the plane.
Step12: We can bound the data contained in layers, either in a box (for 3D Layers) or in a rectangle/circle (for 2D layers).
Step13: And we can remove the bounds.
Step14: We can also set arbitrary properties for the plots contained in layers. For example, let's edit the alpha value of our 3D plots.
Step15: Diagnostic Tracking
Retina also binds a diagnostic tracker class to layer instances which is linked to a "calc context". As part of the calc context, users can provide sandbox functions and see their plots updated in real-time as data within the layer changes. Let's work through an example use case. | <ASSISTANT_TASK:>
Python Code:
import retina.core.axes
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
fig = plt.figure()
ax1 = plt.subplot('111', projection='Fovea2D')
Explanation: This notebook will serve to demonstrate Retina functionality within the Jupyter notebook as it relates to Matplotlib.
Retina, Matplotlib, and the Jupyter Notebook
Retina subclasses the Matplotlib Axes hierarchy. Retina axes can be generated from a call to Pyplot's subplot function. Let's walk through how this is done.
Retina 2D Functionality
End of explanation
plt.xlabel('x')
plt.ylabel('y')
plt.title('An Introduction to Retina')
Explanation: Let's set up our plot so as to make it look more professional.
End of explanation
x = np.linspace(-2 * np.pi, 2 * np.pi)
sin_y = np.sin(x)
cos_y = np.cos(x)
sin = ax1.add_layer("sin")
cos = ax1.add_layer("cos")
sin.add_data(x, sin_y)
cos.add_data(x, cos_y)
ax1.build_layer("sin", color="green", linestyle="dashed", label="sin(x)")
ax1.build_layer("cos", color="blue", linestyle="dashed", label="cos(x)")
Explanation: Not too shabby. I find it annoying that the label for the y-axis is tilted sideways. Maybe someone more familiar with Matplotlib than myself could propose a solution for this.
Now that we have our axes looking respectable, let's add some data to our plot. We will organize our data via the use of Retina's Layer2D class. It provides a way of grouping Matplotlib artists so that they can be styled, displayed, hidden, etc. in tandem. Let's start by plotting some trig functions.
End of explanation
plt.show()
Explanation: As you can see, we have created two layers: one to hold the plot of $\sin(x)$ and the other to hold the plot of $\cos(x)$. Note that the Layer2D objects are not constructed explicitly by the user. This work is done by the add_layer function of the Fovea2D axes class. This way the layer is attached to an explicit axes and isn't left floating in a free-form void holding unvisualizable data. The add_layer function, however, returns the Layer2D instance which can then be saved to an explicit reference variable as was done here. The work of plotting is performed by the Axes' build_layer function and the keyword arguments are passed to Pyplot's default plot function. The layer-specific methods such as adding data are called from the Layer2D object directly. As a design consideration for future development, it might be useful to just attach a plot method to the layer class that works in the same way as this one. It would probably be easier to use since one could just use the layer instances to plot directly without having to pass layer names to the build_layer method.
Now, all that's left to do is show the plot.
End of explanation
import retina.core.axes
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
fig2 = plt.figure()
ax2 = plt.subplot('111', projection='Fovea2D')
plt.xlabel('x')
plt.ylabel('y')
plt.title('An Introduction to Retina')
ax2.set_xlim(-2 * np.pi, 2 * np.pi)
x = np.linspace(-2 * np.pi, 2 * np.pi)
sin_y = np.sin(x)
cos_y = np.cos(x)
sin = ax2.add_layer("sin")
cos = ax2.add_layer("cos")
sin.add_data(x, sin_y)
cos.add_data(x, cos_y)
ax2.build_layer("sin", color="green", linestyle="dashed", label="sin(x)")
ax2.build_layer("cos", color="blue", linestyle="dashed", label="cos(x)")
Explanation: Here is another issue that I can't figure out. If all the plotting code is executed simultaneously in the notebook, the plot generates fine. If, however, the code is run in individual cells, the plots are never generated. I'm not sure if this is a Matplotlib-Jupyter issue or a Retina issue. I think it's probably the former, but maybe some of the Jupyter devs can address this. This seems to happen regardless of whether %matplotlib inline or %matplotlib notebook magics are used, with the caveat that %matplotlib notebook allows generated plots to interact effectively with calls from subsequent cells provided that the plot is generated correctly the first time. It seems that splitting the plotting code across cells causes major hiccups.
Here is an example where all the code is run simultaneously...
End of explanation
sin.add_vline(0)
sin.add_hline(0)
Explanation: Let's try adding some lines to our plot.
End of explanation
sin.hide()
sin.show()
sin.toggle_display()
sin.toggle_display()
Explanation: And now let's try toggling the display of the sin layer.
End of explanation
cos.bold()
Explanation: We can also make plots in layers "boldfaced".
End of explanation
cos.bold()
Explanation: Bolding effects can be applied in succession.
End of explanation
cos.unbold()
cos.unbold()
Explanation: And they can be undone.
End of explanation
import retina.core.axes
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import jn
%matplotlib notebook
fig = plt.figure()
ax = plt.subplot('111', projection='Fovea3D')
points = np.linspace(-10, 10, 51)
X, Y = np.meshgrid(points, points)
R = np.sqrt(X**2 + Y**2)
Z = jn(0,R)
surfing_usa = ax.add_layer('surfing_usa')
surfing_usa.add_data(X, Y, Z)
ax.build_layer('surfing_usa', plot=ax.plot_surface, rstride=1, cstride=1, cmap='jet')
surfing_usa.hide()
surfing_usa.show()
Explanation: I'm not sure if the Jupyter team has considered this, but it would be nice if the %matplotlib notebook mode generated new plots each time a cell was run. The way things currently stand, I have to scroll up to the original plot in order to watch my changes propagate. This scrolling can quickly become infeasible in lengthier notebooks.
Retina 3D Functionality
Now, let's try creating a 3D plot using Retina's 3D axes.
End of explanation
surfing_usa.add_plane([0, 3, 5], [1, 2, 1])
Explanation: We can add planes to 3D layers by specifying a point on the plane and a normal vector to the plane.
End of explanation
surfing_usa.bound()
Explanation: We can bound the data contained in layers, either in a box (for 3D Layers) or in a rectangle/circle (for 2D layers).
End of explanation
surfing_usa.unbound()
Explanation: And we can remove the bounds.
End of explanation
surfing_usa.set_prop(alpha=0.5)
Explanation: We can also set arbitrary properties for the plots contained in layers. For example, let's edit the alpha value of our 3D plots.
End of explanation
% matplotlib notebook
from IPython.display import display
from ipywidgets import widgets
textbox = widgets.Text()
display(textbox)
def handle_submit(sender):
    fn = textbox.value  # read the submitted text (the original used the undefined name 'text')
from __future__ import division
import PyDSTool as dst
#import PyDSTool.Toolbox.phaseplane as pp
import matplotlib.pyplot as plt
import retina.core.axes
import retina.core.layer
import retina.core.calc_context as cc
import math
import numpy as np
fig = plt.figure()
ax = plt.subplot('111', projection='Fovea2D')
layer = ax.add_layer('test_layer')
tracker = layer.tracker
def make_vel_ics(speed, ang):
rad = math.pi*(ang)/180.
return {'vx': speed*math.cos(rad),
'vy': speed*math.sin(rad)}
def make_shooter():
# no friction
# cos(atan(x)) = 1/(sqrt(1+x^2))
Fx_str = '0' # '-speed_fn()*cos(atan2(vy,vx))'
Fy_str = '-10'
DSargs = dst.args()
DSargs.varspecs = {'vx': Fx_str, 'x': 'vx',
'vy': Fy_str, 'y': 'vy',
'Fx_out': 'Fx(x,y)', 'Fy_out': 'Fy(x,y)',
'speed': 'speed_fn(vx, vy)',
'bearing': '90-180*atan2(vy,vx)/pi'}
auxfndict = {'Fx': (['x', 'y'], Fx_str),
'Fy': (['x', 'y'], Fy_str),
'speed_fn': (['vx', 'vy'], 'sqrt(vx*vx+vy*vy)'),
}
DSargs.auxvars = ['Fx_out', 'Fy_out', 'speed', 'bearing']
DSargs.fnspecs = auxfndict
DSargs.algparams = {'init_step':0.001,
'max_step': 0.1,
'max_pts': 20000,
'maxevtpts': 2,
'refine': 5}
ground_event = dst.Events.makeZeroCrossEvent('y', -1,
{'name': 'ground',
'eventtol': 1e-3,
'precise': True,
'term': True},
varnames=['y'],
targetlang='python')
peak_event = dst.Events.makeZeroCrossEvent('vy', -1,
{'name': 'peak',
'eventtol': 1e-3,
'precise': True,
'term': False},
varnames=['vy'],
targetlang='python')
DSargs.events = [ground_event, peak_event]
DSargs.checklevel = 2
DSargs.ics = {'x': 0, 'y': 0,
'vx': 0, 'vy': 0}
DSargs.ics.update(make_vel_ics(5,20))
DSargs.name = 'cannon'
DSargs.tdomain = [0, 100000]
DSargs.tdata = [0, 10]
return dst.embed(dst.Generator.Vode_ODEsystem(DSargs))
shooter = make_shooter()
# sim.model is a PyDSTool Model
sim = dst.args(tracked_objects=[],
model=shooter,
name='sim_cannon_traj',
pts=None)
calc = cc.calc_context(sim, 'cannon_traj')
w = calc.workspace
shot_num = 0
def go(speed, angle, do_tracker=True):
global shot_num, w
shot_num += 1
w.angle = angle
w.speed = speed
sim.model.set(ics=make_vel_ics(speed, angle))
sim.model.compute('shot%i' % shot_num)
sim.pts = sim.model.sample('shot%i' % shot_num)
if do_tracker:
ax.cla()
ax.plot(sim.pts['x'], sim.pts['y'], 'b-', lw=3)
ax.hlines(0, 0, max(sim.pts['x']))
plt.show()
calc()
tracker.show()
plt.show()
# initialize
go(30, 10, do_tracker=False)
# call tracker every loop to show all sim_stub tracked objects
# (= tracker_plotter objects)
#fig = plt.figure(1)
#ax = plt.gca()
# fig, ax = plt.subplots()
max_dist = cc.make_measure('maxdist',
'max(sim.pts["x"])')
max_height = cc.make_measure('maxheight',
'max(sim.pts["y"])')
calc.attach((max_dist, max_height))
tracker(calc, 2, ('angle', 'maxdist', 'ko'),
clear_on_refresh=False)
tracker(calc, 2, ('angle', 'maxheight', 'ro'),
clear_on_refresh=False)
def make_iter_angle():
for angle in np.linspace(5, 85, 10):
yield angle
iter_angle = make_iter_angle()
# rerun this ten times!
go(10, iter_angle.__next__())
# or call directly
go(10, 25)
Explanation: Diagnostic Tracking
Retina also binds a diagnostic tracker class to layer instances which is linked to a "calc context". As part of the calc context, users can provide sandbox functions and see their plots updated in real-time as data within the layer changes. Let's work through an example use case.
End of explanation
<END_TASK> |
126 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Generate a Cubic Lattice with an Interpenetrating Dual Cubic Lattice
(Since version 1.6) OpenPNM offers two options for generating dual networks. This tutorial will outline the use of the basic CubicDual class, while the DelaunayVoronoiDual is covered elsewhere. The main motivation for creating these dual networks was to enable the modeling of transport in the void phase on one network and through the solid phase on the other. These networks are interpenetrating but not overlapping or coincident so it makes the topology realistic. Moreover, these networks are interconnected to each other so they can exchange species between them, such as gas-liquid heat transfer. The tutorial below outlines how to setup a CubicDual network object, describes the combined topology, and explains how to use labels to access different parts of the network.
As usual start by importing Scipy and OpenPNM
Step1: Let's create a CubicDual and visualize it in Paraview
Step2: The resulting network has two sets of pores, labelled as blue and red in the image below. By default, the main cubic lattice is referred to as the 'primary' network which is colored blue, and the interpenetrating dual is referred to as the 'secondary' network shown in red. These names are used to label the pores and throats associated with each network. These names can be changed by sending label_1 and label_2 arguments during initialization. The throats connecting the 'primary' and 'secondary' pores are labelled 'interconnect', and they can be seen as the diagonal connections below.
The topotools module of openpnm also has handy visualization functions which can be used to consecutively build a picture of the network connections and coordinates. Replace %matplotlib inline with %matplotlib notebook for 3D interactive plots.
Step3: Inspection of this image shows that the 'primary' pores are located at expected locations for a cubic network including on the faces of the cube, and 'secondary' pores are located at the interstitial locations. There is one important nuance to note
Step4: Now that this topology is created, the next step would be to create Geometry objects for each network, and an additional one for the 'interconnect' throats | <ASSISTANT_TASK:>
Python Code:
import scipy as sp
import openpnm as op
import matplotlib.pyplot as plt
%matplotlib inline
wrk = op.Workspace() # Initialize a workspace object
wrk.loglevel=50
Explanation: Generate a Cubic Lattice with an Interpenetrating Dual Cubic Lattice
(Since version 1.6) OpenPNM offers two options for generating dual networks. This tutorial will outline the use of the basic CubicDual class, while the DelaunayVoronoiDual is covered elsewhere. The main motivation for creating these dual networks was to enable the modeling of transport in the void phase on one network and through the solid phase on the other. These networks are interpenetrating but not overlapping or coincident so it makes the topology realistic. Moreover, these networks are interconnected to each other so they can exchange species between them, such as gas-liquid heat transfer. The tutorial below outlines how to setup a CubicDual network object, describes the combined topology, and explains how to use labels to access different parts of the network.
As usual start by importing Scipy and OpenPNM:
End of explanation
net = op.network.CubicDual(shape=[6, 6, 6])
Explanation: Let's create a CubicDual and visualize it in Paraview:
End of explanation
from openpnm.topotools import plot_connections, plot_coordinates
fig1 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig2 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig2 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig2, c='r')
fig3 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig3 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig3, c='r')
fig3 = plot_connections(network=net, throats=net.throats('primary'), fig=fig3, c='b')
fig4 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig4 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig4, c='r')
fig4 = plot_connections(network=net, throats=net.throats('primary'), fig=fig4, c='b')
fig4 = plot_connections(network=net, throats=net.throats('secondary'), fig=fig4, c='r')
fig5 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig5 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig5, c='r')
fig5 = plot_connections(network=net, throats=net.throats('primary'), fig=fig5, c='b')
fig5 = plot_connections(network=net, throats=net.throats('secondary'), fig=fig5, c='r')
fig5 = plot_connections(network=net, throats=net.throats('interconnect'), fig=fig5, c='g')
Explanation: The resulting network has two sets of pores, labelled as blue and red in the image below. By default, the main cubic lattice is referred to as the 'primary' network which is colored blue, and the interpenetrating dual is referred to as the 'secondary' network shown in red. These names are used to label the pores and throats associated with each network. These names can be changed by sending label_1 and label_2 arguments during initialization. The throats connecting the 'primary' and 'secondary' pores are labelled 'interconnect', and they can be seen as the diagonal connections below.
The topotools module of openpnm also has handy visualization functions which can be used to consecutively build a picture of the network connections and coordinates. Replace %matplotlib inline with %matplotlib notebook for 3D interactive plots.
End of explanation
print(net.num_pores('primary'))
print(net.num_pores('secondary'))
print(net.num_throats('primary'))
print(net.num_throats('secondary'))
print(net.num_throats('interconnect'))
Explanation: Inspection of this image shows that the 'primary' pores are located at expected locations for a cubic network including on the faces of the cube, and 'secondary' pores are located at the interstitial locations. There is one important nuance to note: some of the 'secondary' pores are also on the face, and are offset 1/2 a lattice spacing from the internal 'secondary' pores. This means that each face of the network is a staggered tiling of 'primary' and 'secondary' pores.
The 'primary' and 'secondary' pores are connected to themselves in a standard 6-connected lattice, and connected to each other in the diagonal directions. Unlike a regular Cubic network, it is not possible to specify more elaborate connectivity in the CubicDual networks since the throats of each network would be conceptually entangled. The figure below shows the connections in the secondary (left), and primary (middle) networks, as well as the interconnections between them (right).
Using the labels it is possible to query the number of each type of pore and throat on the network:
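The same labels can also be used to fetch the actual pore and throat indices, which is handy when assigning Geometry objects as in the next cell (a small illustrative sketch using the same calls):
Ps = net.pores('secondary')
Ts = net.throats('interconnect')
print(Ps[:10], len(Ts))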
End of explanation
geo_pri = op.geometry.GenericGeometry(network=net,
pores=net.pores('primary'),
throats=net.throats('primary'))
geo_sec = op.geometry.GenericGeometry(network=net,
pores=net.pores('secondary'),
throats=net.throats('secondary'))
geo_inter = op.geometry.GenericGeometry(network=net,
throats=net.throats('interconnect'))
Explanation: Now that this topology is created, the next step would be to create Geometry objects for each network, and an additional one for the 'interconnect' throats:
End of explanation
<END_TASK> |
127 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Para este exemplo, vamos usar o dataset Iris, que pode ser obtido aqui
Step1: Agora, vamos separar os dados entre as medidas e as espécies.
Step2: Agora, vamos calcular a decomposição em componentes principais
Step3: pca agora é uma referência para a função que calcula o PCA de X. Para efetivamente calcularmos os componentes principais, fazemos
Step4: Daqui pra frente, o objeto pca será onde nossas informações estão armazenadas. Para, por exemplo, verificarmos quais são os autovalores (variâncias) do nosso conjunto de dados, podemos fazer
Step5: Podemos ver então que o primeiro componente principal explica 92% dos dados.
Para transformarmos os dados para que fiquem alinhados/projetados nestes componentes principais, usamos
Step6: Agora, queremos visualizar estes dados. Precisamos então selecionar quantos componentes queremos representar. Se quisermos mostrar dois componentes, fazemos | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import decomposition
from sklearn import datasets
tabela = pd.read_csv("exemplo_7/iris.data",header=None,sep=',')
tabela
tabela.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
tabela
tabela.tail()
Explanation: Para este exemplo, vamos usar o dataset Iris, que pode ser obtido aqui:
https://archive.ics.uci.edu/ml/datasets/Iris
End of explanation
X = tabela.ix[:,0:4].values
y = tabela.ix[:,4].values
X
y
nomes = list(set(y))
tabela.columns
colors = ['navy', 'turquoise', 'darkorange']
fig,ax = plt.subplots(2,2)
#n, bins, patches = P.hist(x, 10, normed=1, histtype='bar',
# color=['crimson', 'burlywood', 'chartreuse'],
# label=['Crimson', 'Burlywood', 'Chartreuse'])
# Coluna 0
dados_sepal_len = [X[y==nomes[0],0], X[y==nomes[1],0], X[y==nomes[2],0]]
n, bins, patches = ax[0,0].hist(dados_sepal_len,color=colors, label=list(set(y)))
ax[0,0].set_title('Sepal Length (cm)')
# Coluna 1
dados_sepal_wid = [X[y==nomes[0],1], X[y==nomes[1],1], X[y==nomes[2],1]]
ax[0,1].hist(dados_sepal_wid,color=colors, label=list(set(y)))
#ax[0,1].legend()
ax[0,1].set_title('Sepal Width (cm)')
# Coluna 2
dados_sepal_wid = [X[y==nomes[0],2], X[y==nomes[1],2], X[y==nomes[2],2]]
ax[1,0].hist(dados_sepal_wid,color=colors, label=list(set(y)))
#ax[1,0].legend()
ax[1,0].set_title('Petal Length (cm)')
# Coluna 3
dados_sepal_wid = [X[y==nomes[0],3], X[y==nomes[1],3], X[y==nomes[2],3]]
ax[1,1].hist(dados_sepal_wid,color=colors, label=list(set(y)))
#ax[1,1].legend()
ax[1,1].set_title('Petal Width (cm)')
fig.legend(patches, list(set(y)))
Explanation: Agora, vamos separar os dados entre as medidas e as espécies.
End of explanation
pca = decomposition.PCA()
print(pca)
Explanation: Agora, vamos calcular a decomposição em componentes principais:
End of explanation
pca.fit(X)
Explanation: pca agora é uma referência para a função que calcula o PCA de X. Para efetivamente calcularmos os componentes principais, fazemos
End of explanation
print(pca.explained_variance_ratio_)
Explanation: Daqui pra frente, o objeto pca será onde nossas informações estão armazenadas. Para, por exemplo, verificarmos quais são os autovalores (variâncias) do nosso conjunto de dados, podemos fazer
End of explanation
Xnew = pca.transform(X)
print(X)
print(Xnew)
Explanation: Podemos ver então que o primeiro componente principal explica 92% dos dados.
Para transformarmos os dados para que fiquem alinhados/projetados nestes componentes principais, usamos
End of explanation
fig,ax = plt.subplots()
plt.cla()
ax.scatter(Xnew[:, 0], Xnew[:, 1], cmap=plt.cm.spectral)
plt.show()
y[0]
list(set(y))
Xnew[y=='Iris-setosa']
fig2, ax2 = plt.subplots()
for color, i, name in zip(colors, [0, 1, 2], list(set(y))):
ax2.scatter(Xnew[y == name, 0], Xnew[y == name, 1], color=color, label=names)
ax2.legend(loc='best', shadow=False, scatterpoints=1)
ax2.set_title('PCA of IRIS dataset')
colors
from mpl_toolkits.mplot3d import Axes3D
fig3d = plt.figure(3)
ax = Axes3D(fig3d)
for color, i, name in zip(colors, [0, 1, 2], list(set(y))):
ax.scatter(Xnew[y == name, 0], Xnew[y == name, 1], Xnew[y==name, 2], color=color, label=names)
ax.legend(loc='best', shadow=False, scatterpoints=1)
plt.show()
Explanation: Agora, queremos visualizar estes dados. Precisamos então selecionar quantos componentes queremos representar. Se quisermos mostrar dois componentes, fazemos
End of explanation
<END_TASK> |
128 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Varcode - Quick Start
Varcode is a library for working with genomic variant data in Python and predicting the impact of those variants on protein sequences.
What this means is that this library can help you annotate gene variants, i.e. changes in DNA with their potential effect on the protein/transcript that they encode for.
This notebook highlights the basic functionality provided by varcode in a nut-shell.
If you haven't installed varcode yet,
please refer to the original README file for specific instructions on how to install the library.
Working with variants and their effects
Let's start with a really simple example to demonstrate what varcode can do for you.
Imagine that the sequencing core facility has just sent you the sequencing results of that sample you submitted earlier to them.
And now, they sent you a list of variants that they inferred using their analysis pipeline.
You open the file just to see that they haven't annotated the variants,
so you don't know whether these variants are within a gene
and if so whether they affect the protein sequence or not.
Enter varcode, which is specifically designed to solve this annotation problem.
For the sake of the example,
let's simplify things and assume that we are interested in annotating this variant,
a nucleotide change from an A into a T at the 1,404,553,136th base of chromosome 7.
To annotate this variant, we first have to model it within varcode using the Variant class.
We are going to assume that this coordinate is consistent with the human assembly GRCh37.
Varcode stands on the shoulders of PyEnsembl, a Python interface to Ensembl reference genome metadata.
So to start things off, let's import the following classes that are relevant to our example
Step1: Now let's create a new Variant that will represent our variant of interest
Step2: Now that we defined this variant,
we can start annotating it;
but let's start with this trivial example,
where we ask for a short descriptive description of the variant
Step3: this is our variation, but expressed using the offical variation nomenclature.
How about asking about the gene this variant is in
Step4: Looks like this variant lies within the BRAF gene;
but what about the potential effects of this variant to the product of this gene?
Step5: What the above list tells us is that this variation can potentially affect four different BRAF transcripts
and out of four, one will result in a Substitution,
i.e. a coding mutation which causes simple substitution of one amino acid for another.
For the other transcripts, IncompleteTranscript type implies that
varcode can't determine effect since transcript annotation is incomplete (often missing either the start or stop codon).
That is all great, but dealing with multiple effects is hard and we usually want to pick the one that causes the most dramatic change in the gene product.
Varcode provides an easy way to get to this worst effect
Step6: So overall, this is what we have learned about the variant using varcode
Step7: Moreover, varcode can provide you with the altered protein sequence,
which becomes important especially for analyses that use the variation information within the context of a few amino acids surrounding the mutated location
Step8: See that valine (V) changing into a glutamic acid (E)?
That is the effect of our variant within the amino acid context.
That was easy, right?
Importing variants from a MAF or VCF file
Now that we know how to work with a single variant
and extract annotations for it,
it is now time for us to learn how to work with collections of variants all together.
In the previous section,
we assumed that there was a single variation that we wanted to annotate;
but in the real world,
we usually receive the variant calls from a sequencing analysis as a file
that is formatted with either Mutation Annotation Format (MAF) or Variant Call Format (VCF).
Varcode has built-in methods to load variants described by either of the formats
Step9: The following loads mutations from the MAF file distributed within our code repository
Step10: and this should load variants from one of the VCF files
Step11: You can even extract summary statistics from these variant collections
if you would like to have a quick look at the overall variants and the altered genes
Step12: Filtering Variants
Having a collection of variants is great,
but you often need to filter them down to see, for example, whether any of the variants affect your gene of interest (TP53)
Step13: or, for example, you might want to drop all mutations that do not affect a protein sequence or variants that fall in a non-coding genomic region
Step14: or, you might want to get all effects whose priority falls below an Insertion | <ASSISTANT_TASK:>
Python Code:
from varcode import Variant
from pyensembl import ensembl_grch37
Explanation: Varcode - Quick Start
Varcode is a library for working with genomic variant data in Python and predicting the impact of those variants on protein sequences.
What this means is that this library can help you annotate gene variants, i.e. changes in DNA with their potential effect on the protein/transcript that they encode for.
This notebook highlights the basic functionality provided by varcode in a nutshell.
If you haven't installed varcode yet,
please refer to the original README file for specific instructions on how to install the library.
Working with variants and their effects
Let's start with a really simple example to demonstrate what varcode can do for you.
Imagine that the sequencing core facility has just sent you the sequencing results of that sample you submitted earlier to them.
And now, they sent you a list of variants that they inferred using their analysis pipeline.
You open the file just to see that they haven't annotated the variants,
so you don't know whether these variants are within a gene
and if so whether they affect the protein sequence or not.
Enter varcode, which is specifically designed to solve this annotation problem.
For the sake of the example,
let's simplify things and assume that we are interested in annotating this variant,
a nucleotide change from an A into a T at the 140,453,136th base of chromosome 7.
To annotate this variant, we first have to model it within varcode using the Variant class.
We are going to assume that this coordinate is consistent with the human assembly GRCh37.
Varcode stands on the shoulders of PyEnsembl, a Python interface to Ensembl reference genome metadata.
So to start things off, let's import the following classes that are relevant to our example:
End of explanation
myVariant = Variant(contig=7, start=140453136, ref="A", alt="T", ensembl=ensembl_grch37)
Explanation: Now let's create a new Variant that will represent our variant of interest:
End of explanation
myVariant.short_description
Explanation: Now that we defined this variant,
we can start annotating it;
but let's start with this trivial example,
where we ask for a short descriptive description of the variant:
End of explanation
myVariant.coding_genes
Explanation: This is our variation, but expressed using the official variant nomenclature.
How about asking about the gene this variant is in:
End of explanation
myEffects = myVariant.effects()
myEffects
Explanation: Looks like this variant lies within the BRAF gene;
but what about the potential effects of this variant to the product of this gene?
End of explanation
topPriorityEffect = myEffects.top_priority_effect()
topPriorityEffect
Explanation: What the above list tells us is that this variation can potentially affect four different BRAF transcripts
and out of four, one will result in a Substitution,
i.e. a coding mutation which causes simple substitution of one amino acid for another.
For the other transcripts, the IncompleteTranscript type implies that
varcode can't determine the effect, since the transcript annotation is incomplete (often missing either the start or stop codon).
That is all great, but dealing with multiple effects is hard and we usually want to pick the one that causes the most dramatic change in the gene product.
Varcode provides an easy way to get to this worst effect:
End of explanation
print ('The mutation %s leads to a %s in gene %s (%s)' % (myVariant.short_description, type(topPriorityEffect).__name__, topPriorityEffect.gene_name, topPriorityEffect.short_description))
Explanation: So overall, this is what we have learned about the variant using varcode:
End of explanation
variantLocation = topPriorityEffect.aa_mutation_start_offset
topPriorityEffect.original_protein_sequence[variantLocation-3:variantLocation+4]
topPriorityEffect.mutant_protein_sequence[variantLocation-3:variantLocation+4]
Explanation: Moreover, varcode can provide you with the altered protein sequence,
which becomes especially important for analyses that use the variation information within the context of a few amino acids surrounding the mutated location:
End of explanation
from varcode import load_maf, load_vcf
Explanation: See that valine (V) changing into glutamic acid (E)?
That is the effect of our variant within the amino acid context.
That was easy, right?
Importing variants from a MAF or VCF file
Now that we know how to work with a single variant
and extract annotations for it,
it is now time for us to learn how to work with collections of variants all together.
In the previous section,
we assumed that there was a single variation that we wanted to annotate;
but in the real world,
we usually receive the variant calls from a sequencing analysis as a file
that is formatted with either Mutation Annotation Format (MAF) or Variant Call Format (VCF).
Varcode has built-in methods to load variants described by either of the formats:
End of explanation
mafVariants = load_maf("../test/data/tcga_ov.head.maf")
mafVariants # should load 4 variants
Explanation: The following loads mutations from the MAF file distributed within our code repository:
End of explanation
vcfVariants = load_vcf("../test/data/somatic_hg19_14muts.vcf")
vcfVariants # should load 14 variants
Explanation: and this should load variants from one of the VCF files:
End of explanation
vcfVariants.gene_counts()
mafVariants.gene_counts()
Explanation: You can even extract summary statistics from these variant collections
if you would like to have a quick look at the overall variants and the altered genes:
End of explanation
tp53Mutations = vcfVariants.groupby_gene_name()["TP53"].effects()
tp53Mutations
Explanation: Filtering Variants
Having a collection of variants is great,
but you often need to filter them down to see, for example, whether any of the variants affect your gene of interest (TP53):
End of explanation
vcfEffects = vcfVariants.effects()
nonSilentMutations = vcfEffects.drop_silent_and_noncoding()
nonSilentMutations.top_priority_effect_per_gene_id()
Explanation: or, for example, you might want to drop all mutations that do not affect a protein sequence or variants that fall in a non-coding genomic region:
End of explanation
from varcode import Insertion
vcfEffects.filter_by_effect_priority(Insertion).top_priority_effect_per_gene_id()
Explanation: or, you might want to get all effects whose priority falls below an Insertion:
End of explanation
<END_TASK> |
129 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 4)
Determining Important Nodes (cont'd)
Betweenness Centrality
If you interpret the Facebook graph as a web link graph in the previous Q&A session, the obvious ranking choice is PageRank. Note that today it is only one of many aspects modern web search engines consider to rank web pages. However, we were looking for eigenvector centrality, as MIT8 is a social network (both and possibly others can be applicable, though).
In applications where the flow of goods, vehicles, information, etc. via shortest paths in a network plays a major role, betweenness centrality is an interesting centrality index. It is also widely used for social networks. Its drawback is its rather high running time, which makes its use problematic for really large networks. But in many applications we do not need to consider the exact betweenness values nor an exact ranking. An approximation is often good enough.
Q&A Session #7
In the PGP network, compute the 15 nodes with the highest (exact) betweenness values and order them accordingly in a ranking.
Answer
Step1: Community Detection
This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.
Code for community detection is contained in the community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.
Step2: The default setting uses the parallel Louvain method (PLM) as the underlying algorithm. The function prints some statistics and returns the partition object representing the communities in the network as an assignment of nodes to community labels. PLM yields a high-quality solution at reasonably fast running times. Let us now apply other algorithms. To this end, one specifies the algorithm directly in the call.
Step3: The visualization module, which is based on external code for graph drawing, provides a function which visualizes the community graph for a community detection solution
Step4: Q&A Session 8
Run PLMR as well. What are the main differences between the three algorithms PLM, PLMR, and PLP in terms of the solutions they compute and the time they need for this computation?
Answer | <ASSISTANT_TASK:>
Python Code:
from networkit import *
%matplotlib inline
cd ~/workspace/NetworKit
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
# Code for 7-1)
# exact computation
# Code for 7-2)
# approximate computation
Explanation: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 4)
Determining Important Nodes (cont'd)
Betweenness Centrality
If you interpret the Facebook graph as a web link graph in the previous Q&A session, the obvious ranking choice is PageRank. Note that today it is only one of many aspects modern web search engines consider to rank web pages. However, we were looking for eigenvector centrality, as MIT8 is a social network (both and possibly others can be applicable, though).
In applications where the flow of goods, vehicles, information, etc. via shortest paths in a network plays a major role, betweenness centrality is an interesting centrality index. It is also widely used for social networks. Its drawback is its rather high running time, which makes its use problematic for really large networks. But in many applications we do not need to consider the exact betweenness values nor an exact ranking. An approximation is often good enough.
Q&A Session #7
In the PGP network, compute the 15 nodes with the highest (exact) betweenness values and order them accordingly in a ranking.
Answer:
Perform the same as in 1) with one difference: Instead of using the algorithm for computing exact betweenness values, use the RK approximation algorithm. Use also values different from the default ones for the parameters $\delta$ and $\epsilon$. What effects do you see in comparison to the ranking based on exact values? What about running time (you can use %time preceding a call to get its CPU time)? And how do the parameter settings affect these effects?
Answer:
End of explanation
community.detectCommunities(G)
Explanation: Community Detection
This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network.
Code for community detection is contained in the community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.
End of explanation
community.detectCommunities(G, algo=community.PLP(G))
Explanation: The default setting uses the parallel Louvain method (PLM) as underlying algorithm. The function prints some statistics and returns the partition object representing the communities in the network as an assignment of node to community label. PLM yields a high-quality solution at reasonably fast running times. Let us now apply other algorithms. To this end, one specifies the algorithm directly in the call.
End of explanation
communities = _
viztasks.drawCommunityGraph(G, communities)
Explanation: The visualization module, which is based on external code for graph drawing, provides a function which visualizes the community graph for a community detection solution: Communities are contracted into single nodes whose size corresponds to the community size. For problems with hundreds or thousands of communities, this may take a while.
End of explanation
# Code for 8-1) and 8-2)
Explanation: Q&A Session 8
Run PLMR as well. What are the main differences between the three algorithms PLM, PLMR, and PLP in terms of the solutions they compute and the time they need for this computation?
Answer:
Visualize the three results. Can you see aspects of your answer to 1) in the figure as well? Do the figures lead to other insights?
Answer:
End of explanation
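# Hedged sketch for Q&A 8, assuming PLMR corresponds to PLM with refinement
# (exposed here as community.PLM(G, refine=True); constructor names may differ
# between NetworKit versions). Prepend %time to each call to compare run times.
plm_communities = community.detectCommunities(G, algo=community.PLM(G))
plmr_communities = community.detectCommunities(G, algo=community.PLM(G, refine=True))
plp_communities = community.detectCommunities(G, algo=community.PLP(G))
# For 8-2), the community graphs can be drawn as before, e.g.:
# viztasks.drawCommunityGraph(G, plmr_communities)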
<END_TASK> |
130 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Advanced Pandas
Step1: <a id=want></a>
The want operator
We need to know what we're trying to do -- what we want the data to look like. To borrow a phrase from our friend Tom Sargent, we say that we apply the want operator.
Some problems we've run across that ask to be solved
Step2: Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We want to have it as a number, specifically a float.
Example
Step3: Comments. This is mostly text data, which means it's assigned the dtype object. There are two things that would make the data easier to work with
Step4: str.title() returns a copy of the string in which first characters of all the words are capitalized.
Step5: Second
Step6: Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow.
Example
Step7: This looks bad. But we can always use pd.read_excel?. Let's look into the excel file.
* multiple sheets (want
Step8: The first three lines are empty. Skip those
Step9: Would be nice to have the countries as indices
Step10: The last two columns contain junk
Step11: What about the bottom of the table?
Step12: We still have a couple issues.
The index includes a space and a number
Step13: Useful columns
Step14: Look at the bottom
Step15: Missing values
Step16: Notice the , for thousands. As we saw before, there is an easy fix
Step17: Comment. This has several issues. Here's what we want
Step18: <a id='strings'></a>
String methods
We can treat variables as strings in Pandas in much the same way we dealt with strings in core Python. Run the code below to remind yourself how this works.
Step19: Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then convert its dtype with the astype method.
Example. Let's use a string method to fix the item_price variable in the Chipotle dataframe. This has three parts
Step20: Comment. We did everything here in one line
Step21: Want to convert the year variables into float
Step22: This error indicates that somewhere in weo['1980'] there is a string value --. We want to convert that into NaN. Later we will see how we can do that directly. For now use read_csv() again
Step23: Example. Here we strip off the numbers at the end of the indexes in the OECD docs dataframe. This involves some experimentation
Step24: Comment. Not quite, we only want to split once.
Step25: Comments.
Note that we need two str's here
Step26: What to do. We use the replace method on the whole dataframe. To mark something as missing, we replace it as None, which Pandas interprets as missing and labels NaN.
Step27: Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
Step28: Comment. Unlike the string methods we described earlier, this use of replace affects complete entries, not elements of string entries. For example, suppose we tried to replace the periods in decimal numbers with an asterisk. We could try the following, but it doesn't work
Step29: Working with missing values
Step30: Comment. We usually don't have to worry about this, Pandas takes care of missing values automatically.
Comment. Let's try a picture to give us a feeling of accomplishment. What else would you say we need? How would we get it?
Step31: <a id='selection'></a>
Selecting variables and observations
The word selection refers to choosing a subset of variables or observations using their labels or index. Similar methods are sometimes referred to as slicing, subsetting, indexing, querying, or filtering. We'll treat the terms as synonymous.
There are lots of ways to do this. Mostly we do "Boolean" selection, which we address in the next section. We review more direct options here, mostly at high speed because they're not things we use much.
In the outline below, df is a dataframe, var and varn are variable names, n1 and n2 are integers,
- vlist = ['var1', 'var2'] is a list of variable names, and
- nlist = [0, 3, 4] is a list of numerical variable or observation indexes and
- bools is a list or pandas Series of booleans (True and False).
Some of the basic selection/indexing/slicing methods have the form
Step32: Example. Let's try each of these in a different cell and see what they do
Step33: Series indexing
Indexing a Series is a little different because we only have one column, so all indexing operations interact with rows.
The rules here are a little subtle, so we'll show examples and add comments that explain what each example does
In the list below s is a Series, n is an integer, nlist = [0, 3] is a list of integers, i is a string, and is is a list of strings
s[n]
Step34: <a id='boolean'></a>
Boolean selection
We choose observations that satisfy one or more conditions. Boolean selection consists of two steps that we typically combine in one statement
Step35: Find variable and country codes. Which ones do we want? Let's start by seeing that's available. Here we create special dataframes that include all the variables and their definitions and all the countries.
Note the use of the drop_duplicates method, which does what it sounds like
Step36: Exercise.
Construct a list of countries with countries = weo['Country']; that is, without applying the drop_duplicates method. How large is it? How many duplicates have we dropped?
<!-- cn = sorted(list(set(weo.index))) -->
<!--
* What are the country codes (`ISO`) for Argentina and the United States?
* What are the variable codes (`WEO Subject Code`) for government debt (gross debt, percent of GDP) and net lending/borrowing (also percent of GDP)?
-->
Comment. Now that we have the country and variable codes, we can be more explicit about what we want. We want observations with those country and variable codes.
We work up to the solution one step at a time.
Comparisons for series
We can construct comparisons for series (dataframe columns) much as we did with simple variables. The difference is that we get a complete column of True/False responses, not just one.
Multiple comparisons have a different syntax than we saw earlier
Step37: Boolean selection
Boolean selection simply chooses those observations for which a condition is True. Some people refer to this as filtering. The syntax is
python
df[comparison]
The result is a new dataframe of observations in which comparison is true.
Example. We choose obervations for which the units are 'National currency'. We do this first in two steps, then in one.
Step38: Exercise. Construct dataframes for which
small['Units'] does not equal 'National currency'.
small['Units'] equals 'National currency' and small['2011'] is greater than 100.
<a id='isin'></a>
The isin method
Pay attention now, this is really useful. Suppose we want to extract the data for which weo['Country'] == 'Argentina' or weo['Country'] == 'Greece'. We could do that by combining the comparisons
Step39: Comment. We're choosing 2 variables from 45, so there are lots of Falses.
Step40: Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like
Step41: Comments.
We've now done what we described when we applied the want operator.
This is a go-to method. Circle it for later reference.
Exercise. Use the isin method to extract Gross domestic product in US dollars for China, India, and the United States. Assign the result to the dataframe gdp. Hint
Step42: Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
Step43: <a id='contains'></a>
The contains method
Another useful one. The contains string method for series identifies observations that contain a specific string. If yes, the observation is labelled True, if no, False. A little trick converts the True/False outcomes to ones and zeros.
We apply it to the Topics variable of the Entry Poll dataframe entry_poll. You may recall that this variable could have more than one response. We tease them apart with the contains method. Our want is to have a yes/no variable for each response.
Step44: Comment. That's pretty good, we now know which students mentioned Machine Learning and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick
Step45: Comment. Now let's do the same for some of the other entries and save them in new variables.
Step46: Comment. You might want to think about this a minute. Or two.
Step47: Comment. Just for fun, here's a bar graph of the result.
Step48: and a pie chart
Step49: Review
Let's remind ourselves what we've learned.
Exercise. We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013 | <ASSISTANT_TASK:>
Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
Explanation: Advanced Pandas: Cleaning data
Probably the best thing about Pandas is its extensive toolset for managing data. Here we describe features of Pandas that allow us to clean data that, for reasons beyond our control, comes in a form that's not immediately amenable to analysis. This is the first of several such notebooks.
Outline:
Want operator. Start with what we want to end up with, then figure out how to get there.
String methods. Fixing string variables, especially strings that should really be numbers.
Missing values. Marking, dropping, counting missing values.
Selecting variables and observations. Choose the variables and observations we want by their labels.
Boolean selection. This is mostly what we do: choose observations from conditions. We use comparisons to produce Boolean variables and then use the Boolean variables to select observations that are True. The next two methods extend this capability.
The isin method. Choose observations whose values are in lists you specify.
The contains method. Flag observations that contain a specific piece of text. Another string method, operates through Booleans.
<!--
* [The `query` method](#query). Similar capability using database syntax. This is one of many examples in which **SQL database** tools have been built into Pandas.
-->
Note: requires internet access to run.
<!--
internal links http://sebastianraschka.com/Articles/2014_ipython_internal_links.html
-->
This Jupyter notebook was created by Dave Backus, Chase Coleman, Spencer Lyon and Balint Szoke for the NYU Stern course Data Bootcamp.
<a id=prelims></a>
Preliminaries
End of explanation
url = 'https://raw.githubusercontent.com/TheUpshot/chipotle/master/orders.tsv'
chipotle = pd.read_csv(url, sep='\t') # tab (\t) separated values
print('Variable dtypes:\n', chipotle.dtypes, sep='')
chipotle.head()
Explanation: <a id=want></a>
The want operator
We need to know what we're trying to do -- what we want the data to look like. To borrow a phrase from our friend Tom Sargent, we say that we apply the want operator.
Some problems we've run across that ask to be solved:
Numerical data is contaminated by commas (marking thousands) or dollar signs.
Row and column labels are contaminated.
Missing values are marked erratically.
We have too much data, would prefer to choose a subset.
Variables run across rows rather than down columns.
What we want in each case is the opposite of what we have: we want nicely formatted numbers, clean row and column labels, and so on.
We'll solve the first four problems here, the last one in the next notebook.
Example: Chipotle data
This data comes from a New York Times story about the number of calories in a typical order at Chipotle. The topic doesn't particularly excite us, but the data raises a number of issues that come up repeatedly. We adapt some code written by Daniel Forsyth.
Note: The file is a tsv (Tab Separated Values) file, so we need to set the separator accordingly when we call pandas' read_csv method. Remember that the default value of sep is sep=',' (see the docstring). We can change it to tab-separated values by writing sep='\t'.
End of explanation
url1 = "https://raw.githubusercontent.com/NYUDataBootcamp/"
url2 = "Materials/master/Data/entry_poll_spring17.csv"
url = url1 + url2
entry_poll = pd.read_csv(url)
entry_poll.head()
print('Dimensions:', entry_poll.shape)
print('Data types:\n\n', entry_poll.dtypes, sep='')
Explanation: Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We want to have it as a number, specifically a float.
Example: Data Bootcamp entry poll
This is the poll we did at the start of the course. Responses were collected in a Google spreadsheet, which we converted to a csv and uploaded to our website.
End of explanation
# (1) create list of strings with the new varnames
newnames = ['time', 'why', 'program', 'programming', 'prob_stats', 'major', 'career', 'data', 'topics']
newnames
# (2) Use the str.title() string method to make the varnames prettier
newnames = [name.title() for name in newnames]
Explanation: Comments. This is mostly text data, which means it's assigned the dtype object. There are two things that would make the data easier to work with:
First: The column names are excessively verbose. This one's easy: We replace them with single words. Which we do below.
End of explanation
newnames
# (3) assign newnames to the variables
entry_poll.columns = newnames
entry_poll.head(1)
Explanation: str.title() returns a copy of the string in which first characters of all the words are capitalized.
End of explanation
# check multi-response question to see what we're dealing with
entry_poll['Topics'].head(20)
Explanation: Second: The second one is harder. The question about special topics of interest says "mark all that apply." In the spreadsheet, we have a list of every choice the person checked. Our want is to count the number of each type of response. For example, we might want a bar chart that gives us the number of each response. The question is how we get there.
End of explanation
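# Hedged sketch (assumption: ', ' separates the choices): the str.split string
# method turns each multi-response entry into a list of individual topics, which
# is one way to "unpack" them. We return to a cleaner approach (contains) below.
entry_poll['Topics'].str.split(', ').head()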
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2016-Frequently-Requested-Data.xls'
oecd = pd.read_excel(url1 + url2)
oecd.head()
Explanation: Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow.
Example: OECD healthcare statistics
The OECD collects healthcare data on lots of (mostly rich) countries, which is helpful in producing comparisons. Here we use a spreadsheet that can be found under Frequently Requested Data.
End of explanation
oecd = pd.read_excel(url1 + url2, sheetname='Physicians')
oecd.head()
Explanation: This looks bad. But we can always use pd.read_excel?. Let's look into the excel file.
* multiple sheets (want: Physicians)
End of explanation
oecd = pd.read_excel(url1 + url2, sheetname='Physicians', skiprows=3)
oecd.head()
Explanation: The first three lines are empty. Skip those
End of explanation
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0)
oecd.head()
Explanation: Would be nice to have the countries as indices
End of explanation
oecd.shape # drop 57th and 58th columns
# There is no skipcols argument, so let's google "read_excel skip columns" -> usecols
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0,
usecols=range(57))
oecd.head()
Explanation: The last two columns contain junk
End of explanation
oecd.tail() # we are downloading the footnotes too
?pd.read_excel # -> skip_footer
# How many rows to skip??
oecd.tail(25)
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0,
usecols=range(57),
skip_footer=20)
oecd.tail()
oecd.dtypes[:5]
Explanation: What about the bottom of the table?
End of explanation
url = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'
# Try
weo = pd.read_excel(url) # NOT an excel file!
# try to open the file with a plain text editor (it is a TSV)
weo = pd.read_csv(url, sep = '\t')
weo.head()
Explanation: We still have a couple issues.
The index includes a space and a number: Australia 1, Chile 3, etc. We care about this because when we plot the data across countries, the country labels are going to be country names, so we want them in a better form than this.
The ..'s in the sheet lead us to label any column that includes them as dtype object. Here we want to label them as missing values.
If we want to plot each country against time, then we'll need to switch the rows and columns somehow, so that the x axis in the plot (the year) is the index and not the column label.
Example: World Economic Outlook
The IMF's World Economic Outlook database contains a broad range of macroeconomic data for a large number of countries. It's updated twice a year and is a go-to source for things like current account balances (roughly, the trade balance) and government debt and deficits. It also has a few quirks, as we'll see.
Example. Run the following code as is, and with the thousands and na_values parameters commented out. How do the dtypes differ?
End of explanation
names = list(weo.columns)
names[:8]
# for var details
details_list = names[1:5] + [names[6]]
# for years
years_list = names[9:-6]
details_list
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list)
weo.head()
Explanation: Useful columns:
- 1, 2, 3, 4, 6 (indices)
- years, say from 1980 to 2016
Need a list that specifies these
End of explanation
weo.tail(3)
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python') # read_csv requires 'python' engine (otherwise warning)
weo.tail()
Explanation: Look at the bottom
End of explanation
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values='n/a')
weo.head()
weo.dtypes[:10] # still not ok
Explanation: Missing values
End of explanation
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values='n/a',
thousands =',')
weo.head()
Explanation: Notice the , for thousands. As we saw before, there is an easy fix
End of explanation
weo.T.head(10)
Explanation: Comment. This has several issues. Here's what we want:
We would like the variables to be in columns and have observations labeled 1980, 1981, etc. In other words, we want the years in the index.
We would like to make sure the data columns (1980, 1981, etc.) have dtype float64
Here's what we might to do achieve what we want:
We could try to transpose the dataframe to get the years in the index. This is close, but not quite what we want (see below). We'll come back to this in the next notebook.
To make the data columns have dtype float64 we need to warn pandas about the , thousand separators and n/a for missing data.
Question. Can we transpose the whole thing to get the data running down columns?
End of explanation
dollars = '$123.45'
print('Type of variable dollars:', type(dollars))
num = dollars.replace('$', '')
num = float(num)
print('Type of variable num:', type(num))
Explanation: <a id='strings'></a>
String methods
We can treat variables as strings in Pandas in much the same way we dealt with strings in core Python. Run the code below to remind yourself how this works.
End of explanation
chipotle.head()
# create a copy of the df to play with
chipotle_num = chipotle.copy()
print('Original dtype:', chipotle_num['item_price'].dtype)
# delete dollar signs (dtype does not change!)
chipotle_num['item_price'].str.replace('$', '').head()
# delete dollar signs, convert to float, AND assign back to chipotle_num in one line
chipotle_num['item_price'] = chipotle_num['item_price'].str.replace('$', '').astype(float)
print('New dtype:', chipotle_num['item_price'].dtype)
# assign back to chp for future use
chipotle = chipotle_num
print('Variable dtypes:\n', chipotle.dtypes, sep='')
chipotle.head()
Explanation: Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then convert its dtype with the astype method.
Example. Let's use a string method to fix the item_price variable in the Chipotle dataframe. This has three parts:
Use the method str to identify this as a string method.
Apply the string method of our choice (here replace) to fix the string.
Use the astype method to convert the fixed-up string to a float.
We start by making a copy of the chipotle dataframe that we can experiment with.
End of explanation
weo.head(1)
weo.head(1).dtypes
Explanation: Comment. We did everything here in one line: replace the dollar sign with a string method, then converted to float using astype. If you think this is too dense, you might break it into two steps.
Example. Here we use the astype method again to convert the dtypes of weo into float
End of explanation
weo['1980'].astype(float)
Explanation: Want to convert the year variables into float
End of explanation
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values=['n/a', '--'],
thousands =',')
weo.head(1)
# With that out of our way, we can do the conversion for one variable
weo['1980'].astype(float)
# or for all numeric variables
years = [str(year) for year in range(1980, 2017)]
weo[years] = weo[years].astype(float)
weo.dtypes
Explanation: This error indicates that somewhere in weo['1980'] there is a string value --. We want to convert that into NaN. Later we will see how we can do that directly. For now use read_csv() again
End of explanation
# try this with an example first
country = 'United States 1'
# get documentation for the rsplit method
country.rsplit?
# an example
country.rsplit()
Explanation: Example. Here we strip off the numbers at the end of the indexes in the OECD docs dataframe. This involves some experimentation:
Play with the rsplit method to see how it works.
Apply rsplit to the example country = 'United States 1'.
Use a string method to do this to all the entries of the variable Country.
End of explanation
# what about this?
country.rsplit(maxsplit=1)
# one more step, we want the first component of the list
country.rsplit(maxsplit=1)[0]
oecd.index
oecd.index.str.rsplit(maxsplit=1)[0]
#try
oecd.index.str.rsplit?
# Note the TWO str's
oecd.index.str.rsplit(n=1).str[0]
#or use the str.get() method
oecd.index.str.rsplit(n=1).str.get(0)
oecd.index = oecd.index.str.rsplit(n=1).str.get(0)
oecd.head()
Explanation: Comment. Not quite, we only want to split once.
End of explanation
docs = oecd
docs.head()
Explanation: Comments.
Note that we need two str's here: one to do the split, the other to extract the first element.
For reasons that mystify us, we ran into problems when we used maxsplit=1, but it works with n=1.
This is probably more than you want to know, but file away the possibilities in case you need them.
<a id='missing'></a>
Missing values
It's important to label missing values, so that Pandas doesn't interpret entries as strings. Pandas is also smart enough to ignore things labeled missing when it does calculations or graphs. If we compute, for example, the mean of a variable, the default is to ignore missing values.
We've seen that we can label certain entries as missing values in read statements: read_csv, read_excel, and so on. Here we do it directly, mostly to remind ourselves what's involved.
Marking missing values
Example. The oecd dataframe contains a number of instances of .. (double period). How can we mark them as missing values?
End of explanation
docs.replace(to_replace=['..'], value=[None]).head()
Explanation: What to do. We use the replace method on the whole dataframe. To mark something as missing, we replace it as None, which Pandas interprets as missing and labels NaN.
End of explanation
docsna = docs.replace(to_replace=['..'], value=[None])
docsna.dtypes
Explanation: Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
End of explanation
docs.replace(to_replace=['.'], value=['*']).head()
Explanation: Comment. Unlike the string methods we described earlier, this use of replace affects complete entries, not elements of string entries. For example, suppose we tried to replace the periods in decimal numbers with an asterisk. We could try the following, but it doesn't work: the decimal numbers don't change.
End of explanation
# grab a variable to play with
var = docsna[2013].head(10)
var
# why not '2013'? check the type
docsna.columns
# which ones are missing ("null")?
var.isnull()
# which ones are not missing ("not null")?
var.notnull()
# drop the missing
var.dropna()
Explanation: Working with missing values
End of explanation
docsna[2013].plot.barh(figsize=(4, 12))
Explanation: Comment. We usually don't have to worry about this, Pandas takes care of missing values automatically.
Comment. Let's try a picture to give us a feeling of accomplishment. What else would you say we need? How would we get it?
End of explanation
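# One hedged answer to "what else would you say we need?": drop missing values,
# sort the countries, and add a title so the bar chart is easier to read
# (the title text here is deliberately generic).
docsna[2013].dropna().sort_values().plot.barh(figsize=(4, 12), title='Physicians, 2013')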
# we create a small dataframe to experiment with
small = weo.head()
small
Explanation: <a id='selection'></a>
Selecting variables and observations
The word selection refers to choosing a subset of variables or observations using their labels or index. Similar methods are sometimes referred to as slicing, subsetting, indexing, querying, or filtering. We'll treat the terms as synonymous.
There are lots of ways to do this. Mostly we do "Boolean" selection, which we address in the next section. We review more direct options here, mostly at high speed because they're not things we use much.
In the outline below, df is a dataframe, var and varn are variable names, n1 and n2 are integers,
- vlist = ['var1', 'var2'] is a list of variable names, and
- nlist = [0, 3, 4] is a list of numerical variable or observation indexes and
- bools is a list or pandas Series of booleans (True and False).
Some of the basic selection/indexing/slicing methods have the form:
df[var] extracts a variable -- a series, in other words.
df[vlist] extracts a new dataframe consisting of the variables in vlist.
df[nlist] does the same thing.
df[bools]: extracts each row where the corresponding element in bools is true. len(bools) must be equal to df.shape[0]
df[n1:n2] extracts observations n1 to n2-1, the traditional slicing syntax.
End of explanation
small[['Country', 'Units']]
small[[0, 4]]
small['2011']
small[1:3]
small[[False, True, True, False, False]]
Explanation: Example. Let's try each of these in a different cell and see what they do:
small[['Country', 'Units']]
small[[0, 4]]
small['2011']
small[1:3]
Can you explain the results?
End of explanation
s1 = pd.Series([5, 6, 7, 8], index=["a", "b", "c", "d"])
s1
s2 = pd.Series([50, 60, 70, 80], index=[0, 4, 2, 999])
s2
# index has dtype object, so using an int returns the value in that row (starting at 0)
s1[1]
# index has dtype int, so using an integer tries to find the that int in the
# index and return the corresponding value and throws an error if it can't find it
s2[1]
s2[0] # no error, 0 is in the index
# index has dtype object, so a list of ints extracts those rows
s1[[0, 3]]
# index has dtype int, so a list of ints tries to match each int to the index
# it returns NaN where it can't find the index. Notice it **did not** return
# `80` for 3
s2[[0, 3, 999]]
# index has type object, so a string finds row with matching index
s1["c"]
# index has dtype int, so using a string causes an error
s2["c"]
# similar behavior for lists of strings
s1[["a", "b", "penguin"]]
# index has dtype int, so list of strings returns NaN's everywhere
s2[["a", "b"]]
# lists of True/False work the same for any dtype of index
bools = [True, False, False, True]
s1[bools]
s2[bools]
Explanation: Series indexing
Indexing a Series is a little different because we only have one column, so all indexing operations interact with rows.
The rules here are a little subtle, so we'll show examples and add comments that explain what each example does
In the list below s is a Series, n is an integer, nlist = [0, 3] is a list of integers, i is a string, and is is a list of strings
s[n]: if the index has dtype int, this extracts the row with index n. Otherwise extracts the nth row (starting at zero)
s[nlist]: if the index has dtype int, this extracts rows with indices in nlist returning NaN if they don't appear. Otherwise extracts the rows at positions in nlist, filling with NaN for invalid positions
s[i]: if the index has dtype object, this extracts the row with index i, otherwise it is an error
s[is]: if the index has dtype object, this extracts the rows with indices in is, filling with NaN for any that don't appear; if the index has dtype int, it returns NaN everywhere
End of explanation
weo.head(2)
Explanation: <a id='boolean'></a>
Boolean selection
We choose observations that satisfy one or more conditions. Boolean selection consists of two steps that we typically combine in one statement:
Use a comparison to construct a Boolean variable consisting of True and False.
Compute df[comparison], where df is a dataframe and comparison is a comparison. This will select the observations (rows) for which comparison is true and throw away the others.
We work through this one step at a time:
Example: apply the want operator
Comparisons for dataframes
Boolean selection: select observations for which the comparison is True
The isin method
This is easier to describe with an example.
Example: Apply the want operator to WEO
Our want here is to take the weo dataframe and extract government debt and deficits for a given set of countries. Putting this to work involves several steps.
Here's the head of the dataframe to remind us what we're dealing with.
End of explanation
variable_list = weo[['Country', 'Subject Descriptor', 'Units']].drop_duplicates()
print('Number of variables: ', variable_list.shape[0])
variable_list.head()
country_list = weo['Country'].drop_duplicates()
print('Number of countries: ', country_list.shape[0])
country_list
Explanation: Find variable and country codes. Which ones do we want? Let's start by seeing that's available. Here we create special dataframes that include all the variables and their definitions and all the countries.
Note the use of the drop_duplicates method, which does what it sounds like: remove duplicate rows (!)
End of explanation
small
small['Units'] == 'National currency'
small['2011'] >= 200
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
Explanation: Exercise.
Construct a list of countries with countries = weo['Country']; that is, without applying the drop_duplicates method. How large is it? How many duplicates have we dropped?
<!-- cn = sorted(list(set(weo.index))) -->
<!--
* What are the country codes (`ISO`) for Argentina and the United States?
* What are the variable codes (`WEO Subject Code`) for government debt (gross debt, percent of GDP) and net lending/borrowing (also percent of GDP)?
-->
Comment. Now that we have the country and variable codes, we can be more explicit about what we want. We want observations with those country and variable codes.
We work up to the solution one step at a time.
Comparisons for series
We can construct comparisons for series (dataframe columns) much as we did with simple variables. The difference is that we get a complete column of True/False responses, not just one.
Multiple comparisons have a different syntax than we saw earlier: and is replaced by &, and or is replaced by |. And when we have more than one comparison, we need to enclose them in parentheses.
Examples. Consider the comparisons:
small['Units'] == 'National currency'
small['2011'] >= 100
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
Remind yourself what the & and | do.
End of explanation
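# Hedged sketch for the exercise above: compare the Country column with and
# without drop_duplicates (only methods already used in this notebook).
countries = weo['Country']
print('Rows with duplicates: ', countries.shape[0])
print('Rows without duplicates:', countries.drop_duplicates().shape[0])
print('Duplicates dropped: ', countries.shape[0] - countries.drop_duplicates().shape[0])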
# remind ourselves what we're starting with
small
# two steps: comparison, then selection
ncunits = small['Units'] == 'National currency' # comparison
print(ncunits)
small[ncunits] # selection
# put the steps together in one line
small[small['Units'] == 'National currency']
Explanation: Boolean selection
Boolean selection simply chooses those observations for which a condition is True. Some people refer to this as filtering. The syntax is
python
df[comparison]
The result is a new dataframe of observations in which comparison is true.
Example. We choose obervations for which the units are 'National currency'. We do this first in two steps, then in one.
End of explanation
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo['WEO Subject Code'].isin(vlist)
Explanation: Exercise. Construct dataframes for which
small['Units'] does not equal 'National currency'.
small['Units'] equals 'National currency' and small['2011'] is greater than 100.
<a id='isin'></a>
The isin method
Pay attention now, this is really useful. Suppose we want to extract the data for which weo['Country'] == 'Argentina' or weo['Country'] == 'Greece'. We could do that by combining the comparisons:
python
(weo['Country'] == 'Argentina') | (weo['Country'] == 'Greece')
Remind yourself that | stands for "or." (What do we use for "and"?)
A simpler approach is to apply the isin method to a variable. This sets the comparison to True if the value of the observation of weo['Country'] equals any element in a list. We could do the same thing using multiple comparisons, but this is a lot easier.
Let's see how this works.
Example. Let's apply the same logic to variable codes. If we want to extract the observations with codes
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
we would use
End of explanation
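# Hedged sketch for the earlier exercise on boolean selection with small:
other_units = small[small['Units'] != 'National currency'] # 1) not equal
nc_over_100 = small[(small['Units'] == 'National currency') & (small['2011'] > 100)] # 2) both conditions
nc_over_100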
weo.tail(4)
# this time let's use the result of isin for selection
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo[weo['WEO Subject Code'].isin(vlist)].head(6)
# we've combined several things in one line
comparison = weo['WEO Subject Code'].isin(vlist)
selection = weo[comparison]
selection.head(6)
Explanation: Comment. We're choosing 2 variables from 45, so there are lots of Falses.
End of explanation
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['Argentina', 'Greece']
weo_sub = weo[weo['WEO Subject Code'].isin(variables) & weo['Country'].isin(countries)]
weo_sub
Explanation: Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like:
End of explanation
countries = ['China', 'India', 'United States']
gdp = weo[(weo['WEO Subject Code']=='NGDPD') & weo['Country'].isin(countries)]
gdp
Explanation: Comments.
We've now done what we described when we applied the want operator.
This is a go-to method. Circle it for later reference.
Exercise. Use the isin method to extract Gross domestic product in US dollars for China, India, and the United States. Assign the result to the dataframe gdp. Hint: You can adapt the code we just ran. The variable code is NGDPD. The country codes are CHN, IND, and USA.
End of explanation
gdp['2015'].plot(kind='bar')
Explanation: Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
End of explanation
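# Hedged follow-up: one answer to "what would you say it needs?" is readable
# country labels and a title; setting Country as the index before plotting is
# one option (the title text is an assumption).
gdp.set_index('Country')['2015'].plot(kind='bar', title='GDP in 2015')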
# recall
entry_poll['Topics'].head(10)
# the contains method
entry_poll['Topics'].str.contains('Machine Learning')
Explanation: <a id='contains'></a>
The contains method
Another useful one. The contains string method for series identifies observations that contain a specific string. If yes, the observation is labelled True, if no, False. A little trick converts the True/False outcomes to ones and zeros.
We apply it to the Topics variable of the Entry Poll dataframe entry_poll. You may recall that this variable could have more than one response. We tease them apart with the contains method. Our want is to have a yes/no variable for each response.
End of explanation
entry_poll['Topics'].str.contains('Machine Learning').head(10)*1
Explanation: Comment. That's pretty good, we now know which students mentioned Machine Learning and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick: we multiply by 1.
End of explanation
topics = ['Web scraping', 'Machine Learning', 'regression']
old_ep = entry_poll.copy()
vnames = []
for x in topics:
newname = 'Topics' + '_' + x
vnames.append(newname)
entry_poll[newname] = entry_poll['Topics'].str.contains(x)*1
vnames
Explanation: Comment. Now let's do the same for some of the other entries and save them in new variables.
End of explanation
# create new df of just these variables
student_topics = entry_poll[vnames]
student_topics
# count them with the sum method
topics_counts = student_topics.sum()
topics_counts
Explanation: Comment. You might want to think about this a minute. Or two.
End of explanation
topics_counts.plot(kind='barh')
Explanation: Comment. Just for fun, here's a bar graph of the result.
End of explanation
topics_counts.plot(kind='pie')
Explanation: and a pie chart
End of explanation
data = {'Size': ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49', 'e) 50 to 99',
'f) 100 to 249', 'g) 250 to 499', 'h) 500 to 999', 'i) 1000 to 2499',
'j) 2500 to 4999', 'k) 5000 to 9999', 'l) 10000+'],
'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845,
19389, 9588, 6088, 2287, 1250, 1357],
'Emp': [5998912, 6714924, 8151891, 11425545, 8055535, 9788341,
6611734, 6340775, 8321486, 6738218, 6559020, 32556671]}
bds = pd.DataFrame(data)
bds.head(3)
Explanation: Review
Let's remind ourselves what we've learned.
Exercise. We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013:
Size: size category of firms based on number of employees
Firms: number of firms in each size category
Emp: number of employees in each size category
Run the code cell below to load the data.
End of explanation
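# Hedged sketch of one way to start the review exercise: average firm size
# (employees per firm) by size category, using only tools from this notebook.
bds = bds.set_index('Size')
bds['AvgEmp'] = bds['Emp'] / bds['Firms']
bds['AvgEmp'].plot(kind='barh', title='Average employees per firm by size category')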
<END_TASK> |
131 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DDSP Training
This notebook demonstrates the libraries in https://github.com/magenta/ddsp/tree/master/ddsp/training. It is a simple example, overfitting a single audio sample, for educational purposes.
Step2: Get a batch of data
Step3: Get a distribution strategy
Step4: Get model and trainer
python
Step6: or gin
Step7: Train
Build model
Step8: Train Loop
Step9: Analyze results | <ASSISTANT_TASK:>
Python Code:
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/tutorials/3_training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2021 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# Install and import dependencies
%tensorflow_version 2.x
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import time
import ddsp
from ddsp.training import (data, decoders, encoders, models, preprocessing,
train_util, trainers)
from ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE
import gin
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
sample_rate = DEFAULT_SAMPLE_RATE # 16000
Explanation: DDSP Training
This notebook demonstrates the libraries in https://github.com/magenta/ddsp/tree/master/ddsp/training. It is a simple example, overfitting a single audio sample, for educational purposes.
For a full training pipeline please use ddsp/training/ddsp_run.py as in the train_autoencoder.ipynb.
End of explanation
# Get a single example from NSynth.
# Takes a few seconds to load from GCS.
data_provider = data.NSynthTfds(split='test')
dataset = data_provider.get_batch(batch_size=1, shuffle=False).take(1).repeat()
batch = next(iter(dataset))
audio = batch['audio']
n_samples = audio.shape[1]
specplot(audio)
play(audio)
Explanation: Get a batch of data
End of explanation
strategy = train_util.get_strategy()
Explanation: Get a distribution strategy
End of explanation
TIME_STEPS = 1000
# Create Neural Networks.
preprocessor = preprocessing.F0LoudnessPreprocessor(time_steps=TIME_STEPS)
decoder = decoders.RnnFcDecoder(rnn_channels = 256,
rnn_type = 'gru',
ch = 256,
layers_per_stack = 1,
input_keys = ('ld_scaled', 'f0_scaled'),
output_splits = (('amps', 1),
('harmonic_distribution', 45),
('noise_magnitudes', 45)))
# Create Processors.
harmonic = ddsp.synths.Harmonic(n_samples=n_samples,
sample_rate=sample_rate,
name='harmonic')
noise = ddsp.synths.FilteredNoise(window_size=0,
initial_bias=-10.0,
name='noise')
add = ddsp.processors.Add(name='add')
# Create ProcessorGroup.
dag = [(harmonic, ['amps', 'harmonic_distribution', 'f0_hz']),
(noise, ['noise_magnitudes']),
(add, ['noise/signal', 'harmonic/signal'])]
processor_group = ddsp.processors.ProcessorGroup(dag=dag,
name='processor_group')
# Loss_functions
spectral_loss = ddsp.losses.SpectralLoss(loss_type='L1',
mag_weight=1.0,
logmag_weight=1.0)
with strategy.scope():
# Put it together in a model.
model = models.Autoencoder(preprocessor=preprocessor,
encoder=None,
decoder=decoder,
processor_group=processor_group,
losses=[spectral_loss])
trainer = trainers.Trainer(model, strategy, learning_rate=1e-3)
Explanation: Get model and trainer
python
End of explanation
gin_string = """
import ddsp
import ddsp.training
# Preprocessor
models.Autoencoder.preprocessor = @preprocessing.F0LoudnessPreprocessor()
preprocessing.F0LoudnessPreprocessor.time_steps = 1000
# Encoder
models.Autoencoder.encoder = None
# Decoder
models.Autoencoder.decoder = @decoders.RnnFcDecoder()
decoders.RnnFcDecoder.rnn_channels = 256
decoders.RnnFcDecoder.rnn_type = 'gru'
decoders.RnnFcDecoder.ch = 256
decoders.RnnFcDecoder.layers_per_stack = 1
decoders.RnnFcDecoder.input_keys = ('ld_scaled', 'f0_scaled')
decoders.RnnFcDecoder.output_splits = (('amps', 1),
('harmonic_distribution', 20),
('noise_magnitudes', 20))
# ProcessorGroup
models.Autoencoder.processor_group = @processors.ProcessorGroup()
processors.ProcessorGroup.dag = [
(@harmonic/synths.Harmonic(),
['amps', 'harmonic_distribution', 'f0_hz']),
(@noise/synths.FilteredNoise(),
['noise_magnitudes']),
(@add/processors.Add(),
['noise/signal', 'harmonic/signal']),
]
# Harmonic Synthesizer
harmonic/synths.Harmonic.name = 'harmonic'
harmonic/synths.Harmonic.n_samples = 64000
harmonic/synths.Harmonic.scale_fn = @core.exp_sigmoid
# Filtered Noise Synthesizer
noise/synths.FilteredNoise.name = 'noise'
noise/synths.FilteredNoise.n_samples = 64000
noise/synths.FilteredNoise.window_size = 0
noise/synths.FilteredNoise.scale_fn = @core.exp_sigmoid
noise/synths.FilteredNoise.initial_bias = -10.0
# Add
add/processors.Add.name = 'add'
models.Autoencoder.losses = [
@losses.SpectralLoss(),
]
losses.SpectralLoss.loss_type = 'L1'
losses.SpectralLoss.mag_weight = 1.0
losses.SpectralLoss.logmag_weight = 1.0
"""
with gin.unlock_config():
gin.parse_config(gin_string)
with strategy.scope():
# Autoencoder arguments are filled by gin.
model = ddsp.training.models.Autoencoder()
trainer = trainers.Trainer(model, strategy, learning_rate=1e-4)
Explanation: or gin
End of explanation
# Build model, easiest to just run forward pass.
dataset = trainer.distribute_dataset(dataset)
trainer.build(next(iter(dataset)))
Explanation: Train
Build model
End of explanation
dataset_iter = iter(dataset)
for i in range(300):
losses = trainer.train_step(dataset_iter)
res_str = 'step: {}\t'.format(i)
for k, v in losses.items():
res_str += '{}: {:.2f}\t'.format(k, v)
print(res_str)
Explanation: Train Loop
End of explanation
# Run a batch of predictions.
start_time = time.time()
controls = model(next(dataset_iter))
audio_gen = model.get_audio_from_outputs(controls)
print('Prediction took %.1f seconds' % (time.time() - start_time))
print('Original Audio')
play(audio)
print('Resynthesized Audio')
play(audio_gen)
print('Filtered Noise Audio')
audio_noise = controls['noise']['signal']
play(audio_noise)
specplot(audio)
specplot(audio_gen)
specplot(audio_noise)
batch_idx = 0
get = lambda key: ddsp.core.nested_lookup(key, controls)[batch_idx]
amps = get('harmonic/controls/amplitudes')
harmonic_distribution = get('harmonic/controls/harmonic_distribution')
noise_magnitudes = get('noise/controls/magnitudes')
f0_hz = get('f0_hz')
loudness = get('loudness_db')
audio_noise = get('noise/signal')
f, ax = plt.subplots(1, 2, figsize=(14, 4))
f.suptitle('Input Features', fontsize=16)
ax[0].plot(loudness)
ax[0].set_ylabel('Loudness')
ax[1].plot(f0_hz)
ax[1].set_ylabel('F0_Hz')
f, ax = plt.subplots(1, 2, figsize=(14, 4))
f.suptitle('Synth Params', fontsize=16)
ax[0].semilogy(amps)
ax[0].set_ylabel('Amps')
ax[0].set_ylim(1e-5, 2)
# ax[0].semilogy(harmonic_distribution)
ax[1].matshow(np.rot90(np.log10(harmonic_distribution + 1e-6)),
cmap=plt.cm.magma,
aspect='auto')
ax[1].set_ylabel('Harmonic Distribution')
ax[1].set_xticks([])
_ = ax[1].set_yticks([])
f, ax = plt.subplots(1, 1, figsize=(7, 4))
# f.suptitle('Filtered Noise Params', fontsize=16)
ax.matshow(np.rot90(np.log10(noise_magnitudes + 1e-6)),
cmap=plt.cm.magma,
aspect='auto')
ax.set_ylabel('Filtered Noise Magnitudes')
ax.set_xticks([])
_ = ax.set_yticks([])
Explanation: Analyze results
End of explanation
<END_TASK> |
132 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="http
Step1: Make a grid and set boundary conditions.
Step2: Here we make the initial grid of elevation of zeros with a very small amount of noise to make a more pleasing network.
Step3: Set parameters related to time.
Step4: Set parameters for incision and intializing all of the process components that do the work. We also initialize tools for quantifying the landscape.
Step5: Initialize rock uplift rate. This will need to be changed later.
Step6: Now for the code loop.
Note that you can rerun Code Block 7 many times, and as long as you don't reset the elevation field (Code Block 3), it will take the already evolved landscape and evolve it even more. If you want to change parameters in other code blocks (e.g. Code Block 5 or 6), you can do that too, and as long as you don't reset the elevation field (Code Block 3) the new parameters will apply on the already evolved topography.
Step7: Plot the topography.
Step8: Plot the slope and area data at each point on the landscape (in log-log space). We will only plot the core nodes because the boundary nodes have slopes that are influenced by the boundary conditions.
Step9: It is slightly easier to interpret slope-area data when we look at a single channel, rather than the entire landscape. Below we plot the profile and slope-area data for the three largest channels on the landscape.
Step10: The chi index is a useful way to quantitatively interpret fluvial channels. Below we plot the chi index in the three largest channels and also a chi map across the entire landscape.
Step11: The channel steepness index is another useful index to quantify fluvial channels. Below we plot the steepness index in the same three largest channels, and also plot steepness index across the grid.
Step12: If you have a grid that you want to export, uncomment and edit the appropriate lines below and run the code block.
Step13: After running every code block once, has the landscape reached steady state? Answer | <ASSISTANT_TASK:>
Python Code:
# Code block 1
import copy
import numpy as np
from matplotlib import pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import (
ChannelProfiler,
ChiFinder,
FlowAccumulator,
SteepnessFinder,
StreamPowerEroder,
)
from landlab.io import write_esri_ascii
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
Quantifying river channel evolution with Landlab
These exercises are based on a project orginally designed by Kelin Whipple at Arizona State University. This notebook was created by Nicole Gasparini at Tulane University.
<hr>
<small>For tutorials on learning Landlab, click here: <a href="https://github.com/landlab/landlab/wiki/Tutorials">https://github.com/landlab/landlab/wiki/Tutorials</a></small>
<hr>
What is this notebook?
This notebook illustrates the evolution of detachment-limited channels in an actively uplifting landscape. The landscape evolves according to the equation:
\begin{equation}
\frac{d z}{d t} = -K_\text{sp} A^{m_{sp}} S^{n_{sp}} + U
\end{equation}
Here, $K_{sp}$ is the erodibility coefficient on fluvial incision, which is thought to be positively correlated with climate wetness, or storminess (this is hard to quantify) and to be negatively correlated with rock strength (again, rock strength is hard to quantify). $m_{sp}$ and $n_{sp}$ are positive exponents, usually thought to have a ratio, $m_{sp}/n_{sp} \approx 0.5$. $A$ is drainage area and $S$ is the slope of steepest descent ($-\frac{dz}{dx}$) where $x$ is horizontal distance (positive in the downslope direction) and $z$ is elevation. (If slope is negative there is no fluvial erosion.) $U$ is an externally-applied rock uplift field.
The fluvial erosion term is also known as the stream power equation. Before using this notebook you should be familiar with this equation from class lectures and reading.
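As a quick sanity check on this equation, the right-hand side can be evaluated directly for illustrative parameter values (the numbers below are made up for this example and are not the ones used later in the notebook):
K_sp = 1e-5   # erodibility
m_sp, n_sp = 0.5, 1.0
A = 1e6       # drainage area [m^2]
S = 0.05      # slope of steepest descent [m/m]
U = 1e-4      # rock uplift rate [m/yr]
dzdt = -K_sp * A**m_sp * S**n_sp + U
print(dzdt)   # -4e-4 m/yr: incision outpaces uplift at this point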
For a great overview of the stream power equation, see:
Whipple and Tucker, 1999, Dynamics of the stream-power river incision model: Implications for height limits of mountain ranges, landscape response timescales, and research needs, Journal of Geophysical Research.
For some great illustrations of modeling with the sream power equation, see:
Tucker and Whipple, 2002, Topographic outcomes predicted by stream erosion models: Sensitivity analysis and intermodel comparison, Journal of Geophysical Research.
Helpful background on landscape sensitivity to rock uplift rates and patterns can be found here:
Kirby and Whipple, 2012, Expression of active tectonics in erosional landscapes, Journal of Structural Geology.
What will you do?
In this exercise you will modify the code to get a better understanding of how rock uplift rates and patterns and the erodibility coefficient control fluvial channel form.
Start at the top by reading each block of text and sequentially running each code block (shift - enter OR got to the Cell pulldown menu at the top and choose Run Cells).
If you just change one code block and rerun only that code block, only the parts of the code in that code block will be updated. (E.g. if you change parameters but don't reset the code blocks that initialize run time or topography, then these values will not be reset.)
STUDENTS: Questions to answer before starting this assignment.
Answer these questions before running the notebook.
What do you think will happen to total relief (defined as the maximum minus the minimum elevation, here area is fixed) and channel slope at steady state if $K_{sp}$ is uniformly increased?
What do you think will happen to total relief and channel slope at steady state if $U$ is uniformly increased?
How do you think a steady-state landscape with a uniform low rock uplift rate will respond if rock uplift is uniformly increased (relative to a steady base level)? How will channel slopes change through time?
Now on to the code...
First we have to import the parts of Python and Landlab that are needed to run this code. You should not have to change this first code block.
End of explanation
# Code Block 2
number_of_rows = 50 # number of raster cells in vertical direction (y)
number_of_columns = 100 # number of raster cells in horizontal direction (x)
dxy = 200 # side length of a raster model cell, or resolution [m]
# Below is a raster (square cells) grid, with equal width and height
mg1 = RasterModelGrid((number_of_rows, number_of_columns), dxy)
# Set boundary conditions - only the south side of the grid is open.
# Boolean parameters are sent to function in order of
# east, north, west, south.
mg1.set_closed_boundaries_at_grid_edges(True, True, True, False)
Explanation: Make a grid and set boundary conditions.
End of explanation
# Code Block 3
np.random.seed(35) # seed set so our figures are reproducible
mg1_noise = (
np.random.rand(mg1.number_of_nodes) / 1000.0
)  # initial noise on elevation grid
# set up the elevation on the grid
z1 = mg1.add_zeros("topographic__elevation", at="node")
z1 += mg1_noise
Explanation: Here we make the initial grid of elevation of zeros with a very small amount of noise to make a more pleasing network.
End of explanation
# Code Block 4
tmax = 5e5 # time for the model to run [yr] (Original value was 5E5 yr)
dt = 1000 # time step [yr] (Original value was 100 yr)
total_time = 0 # amount of time the landscape has evolved [yr]
# total_time will increase as you keep running the code.
t = np.arange(0, tmax, dt) # each of the time steps that the code will run
Explanation: Set parameters related to time.
End of explanation
# Code Block 5
# Original K_sp value is 1e-5
K_sp = 1.0e-5 # units vary depending on m_sp and n_sp
m_sp = 0.5 # exponent on drainage area in stream power equation
n_sp = 1.0 # exponent on slope in stream power equation
frr = FlowAccumulator(mg1, flow_director="FlowDirectorD8") # intializing flow routing
spr = StreamPowerEroder(
mg1, K_sp=K_sp, m_sp=m_sp, n_sp=n_sp, threshold_sp=0.0
) # initializing stream power incision
theta = m_sp / n_sp
# initialize the component that will calculate channel steepness
sf = SteepnessFinder(mg1, reference_concavity=theta, min_drainage_area=1000.0)
# initialize the component that will calculate the chi index
cf = ChiFinder(
mg1, min_drainage_area=1000.0, reference_concavity=theta, use_true_dx=True
)
Explanation: Set parameters for incision and intializing all of the process components that do the work. We also initialize tools for quantifying the landscape.
End of explanation
# Code Block 6
# uplift_rate [m/yr] (Original value is 0.0001 m/yr)
uplift_rate = np.ones(mg1.number_of_nodes) * 0.0001
Explanation: Initialize rock uplift rate. This will need to be changed later.
End of explanation
# Code Block 7
for ti in t:
z1[mg1.core_nodes] += uplift_rate[mg1.core_nodes] * dt # uplift the landscape
frr.run_one_step() # route flow
spr.run_one_step(dt) # fluvial incision
total_time += dt # update time keeper
print(total_time)
Explanation: Now for the code loop.
Note that you can rerun Code Block 7 many times, and as long as you don't reset the elevation field (Code Block 3), it will take the already evolved landscape and evolve it even more. If you want to change parameters in other code blocks (e.g. Code Block 5 or 6), you can do that too, and as long as you don't reset the elevation field (Code Block 3) the new parameters will apply on the already evolved topography.
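For example, to keep evolving the same topography under a doubled uplift rate (an illustrative value, not one required by the exercises), you would only re-run Code Block 6 with the new value and then the loop in Code Block 7, leaving Code Block 3 untouched:
uplift_rate = np.ones(mg1.number_of_nodes) * 0.0002  # edited Code Block 6
# ...then re-run Code Block 7; total_time keeps accumulating on the evolved surface.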
End of explanation
# Code Block 8
imshow_grid(
mg1, "topographic__elevation", grid_units=("m", "m"), var_name="Elevation (m)"
)
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
plt.title(title_text)
max_elev = np.max(z1)
print("Maximum elevation is ", np.max(z1))
Explanation: Plot the topography.
End of explanation
# Code Block 9
plt.loglog(
mg1.at_node["drainage_area"][mg1.core_nodes],
mg1.at_node["topographic__steepest_slope"][mg1.core_nodes],
"b.",
)
plt.ylabel("Topographic slope")
plt.xlabel("Drainage area (m^2)")
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
plt.title(title_text)
Explanation: Plot the slope and area data at each point on the landscape (in log-log space). We will only plot the core nodes because the boundary nodes have slopes that are influenced by the boundary conditions.
End of explanation
# Code Block 10
# profile the largest channels, set initially to find the mainstem channel in the three biggest watersheds
# you can change the number of watersheds, or choose to plot all the channel segments in the watershed that
# have drainage area below the threshold (here we have set the threshold to the area of a grid cell).
prf = ChannelProfiler(
mg1,
number_of_watersheds=3,
main_channel_only=True,
minimum_channel_threshold=dxy ** 2,
)
prf.run_one_step()
# plot the elevation as a function of distance upstream
plt.figure(1)
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
prf.plot_profiles(
xlabel="distance upstream (m)", ylabel="elevation (m)", title=title_text
)
# plot the location of the channels in map view
plt.figure(2)
prf.plot_profiles_in_map_view()
# slope-area data in just the profiled channels
plt.figure(3)
for i, outlet_id in enumerate(prf.data_structure):
for j, segment_id in enumerate(prf.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.loglog(
mg1.at_node["drainage_area"][profile_ids],
mg1.at_node["topographic__steepest_slope"][profile_ids],
".",
color=color,
label=label,
)
plt.legend(loc="lower left")
plt.xlabel("drainage area (m^2)")
plt.ylabel("channel slope [m/m]")
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
plt.title(title_text)
Explanation: It is slightly easier to interpret slope-area data when we look at a single channel, rather than the entire landscape. Below we plot the profile and slope-area data for the three largest channels on the landscape.
End of explanation
# Code Block 11
# calculate the chi index
cf.calculate_chi()
# chi-elevation plots in the profiled channels
plt.figure(4)
for i, outlet_id in enumerate(prf.data_structure):
for j, segment_id in enumerate(prf.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.plot(
mg1.at_node["channel__chi_index"][profile_ids],
mg1.at_node["topographic__elevation"][profile_ids],
color=color,
label=label,
)
plt.xlabel("chi index (m)")
plt.ylabel("elevation (m)")
plt.legend(loc="lower right")
title_text = (
f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}"
)
plt.title(title_text)
# chi map
plt.figure(5)
imshow_grid(
mg1,
"channel__chi_index",
grid_units=("m", "m"),
var_name="Chi index (m)",
cmap="jet",
)
title_text = (
f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}"
)
plt.title(title_text)
Explanation: The chi index is a useful way to quantitatively interpret fluvial channels. Below we plot the chi index in the three largest channels and also a chi map across the entire landscape.
End of explanation
# Code Block 12
# calculate channel steepness
sf.calculate_steepnesses()
# plots of steepnes vs. distance upstream in the profiled channels
plt.figure(6)
for i, outlet_id in enumerate(prf.data_structure):
for j, segment_id in enumerate(prf.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
distance_upstream = segment["distances"]
color = segment["color"]
plt.plot(
distance_upstream,
mg1.at_node["channel__steepness_index"][profile_ids],
"x",
color=color,
label=label,
)
plt.xlabel("distance upstream (m)")
plt.ylabel("steepness index")
plt.legend(loc="upper left")
plt.title(f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}")
# channel steepness map
plt.figure(7)
imshow_grid(
mg1,
"channel__steepness_index",
grid_units=("m", "m"),
var_name="Steepness index ",
cmap="jet",
)
plt.title(f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}")
Explanation: The channel steepness index is another useful index to quantify fluvial channels. Below we plot the steepness index in the same three largest channels, and also plot steepness index across the grid.
End of explanation
# Code Block 13
## Below has the name of the file that data will be written to.
## You need to change the name of the file every time that you want
## to write data, otherwise you will get an error.
## This will write to the directory that you are running the code in.
# write_file_name = 'data_file.txt'
## Below is writing elevation data in the ESRI ascii format so that it can
## easily be read into Arc GIS or back into Landlab.
# write_esri_ascii(write_file_name, mg1, 'topographic__elevation')
Explanation: If you have a grid that you want to export, uncomment and edit the appropriate lines below and run the code block.
End of explanation
# Code Block 14
number_of_rows = 50 # number of raster cells in vertical direction (y)
number_of_columns = 100 # number of raster cells in horizontal direction (x)
dxy2 = 200 # side length of a raster model cell, or resolution [m]
# Below is a raster (square cells) grid, with equal width and height
mg2 = RasterModelGrid((number_of_rows, number_of_columns), dxy2)
# Set boundary conditions - only the south side of the grid is open.
# Boolean parameters are sent to function in order of
# east, north, west, south.
mg2.set_closed_boundaries_at_grid_edges(True, True, True, False)
z2 = copy.copy(z1) # initialize the elevations with the steady state
# topography produced for question 1
z2 = mg2.add_field("topographic__elevation", z2, at="node")
# K_sp value for base landscape is 1e-5
K_sp2 = 1e-5 # units vary depending on m_sp and n_sp
m_sp2 = 0.5 # exponent on drainage area in stream power equation
n_sp2 = 1.0 # exponent on slope in stream power equation
frr2 = FlowAccumulator(mg2, flow_director="FlowDirectorD8") # intializing flow routing
spr2 = StreamPowerEroder(
mg2, K_sp=K_sp2, m_sp=m_sp2, n_sp=n_sp2, threshold_sp=0.0
) # initializing stream power incision
theta2 = m_sp2 / n_sp2
# initialize the component that will calculate channel steepness
sf2 = SteepnessFinder(mg2, reference_concavity=theta2, min_drainage_area=1000.0)
# initialize the component that will calculate the chi index
cf2 = ChiFinder(
mg2, min_drainage_area=1000.0, reference_concavity=theta2, use_true_dx=True
)
# Code Block 15
tmax = 1e5 # time for the model to run [yr] (Original value was 5E5 yr)
dt = 500 # time step [yr] (Original value was 500 yr)
total_time = 0 # amount of time the landscape has evolved [yr]
# total_time will increase as you keep running the code.
t = np.arange(0, tmax, dt) # each of the time steps that the code will run
# Code Block 16
# uplift_rate [m/yr] (value was 0.0001 m/yr for base landscape)
uplift_rate = np.ones(mg2.number_of_nodes) * 0.0001
## If you want to add a one-time event that uplifts only part of the
## landscape, uncomment the 3 lines below
# fault_location = 4000 # [m]
# uplift_amount = 10 # [m]
# z2[np.nonzero(mg2.node_y>fault_location)] += uplift_amount
## IMPORTANT! To use the below fault generator, comment the one-time
## uplift event above if it isn't already commented out.
## Code below creates a fault horizontally across the grid.
## Uplift rates are greater where y values > fault location.
## To use, uncomment the 5 code lines below and edit to your values
# fault_location = 4000 # [m]
# low_uplift_rate = 0.0001 # [m/yr]
# high_uplift_rate = 0.0004 # [m/yr]
# uplift_rate[np.nonzero(mg2.node_y<fault_location)] = low_uplift_rate
# uplift_rate[np.nonzero(mg2.node_y>fault_location)] = high_uplift_rate
## IMPORTANT! To use below rock uplift gradient, comment the two
## uplift options above if they aren't already commented out.
## If you want a linear gradient in uplift rate
## (increasing uplift into the range),
## uncomment the 4 code lines below and edit to your values.
# low_uplift_rate = 0.0001 # [m/yr]
# high_uplift_rate = 0.0004 # [m/yr]
## below is uplift gradient per node row index, NOT row value in meters
# uplift_rate_gradient = (high_uplift_rate - low_uplift_rate)/(number_of_rows-3)
# uplift_rate = low_uplift_rate + ((mg2.node_y / dxy)-1) * uplift_rate_gradient
# Code Block 17
for ti in t:
z2[mg2.core_nodes] += uplift_rate[mg2.core_nodes] * dt # uplift the landscape
frr2.run_one_step() # route flow
spr2.run_one_step(dt) # fluvial incision
total_time += dt # update time keeper
print(total_time)
# Code Block 18
# Plot topography
plt.figure(8)
imshow_grid(
mg2, "topographic__elevation", grid_units=("m", "m"), var_name="Elevation (m)"
)
plt.title(f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m")
max_elev = np.max(z2)
print("Maximum elevation is ", np.max(z2))
# Code Block 19
# Plot Channel Profiles and slope-area data along the channels
prf2 = ChannelProfiler(
mg2,
number_of_watersheds=3,
main_channel_only=True,
    minimum_channel_threshold=dxy2 ** 2,
)
prf2.run_one_step()
# plot the elevation as a function of distance upstream
plt.figure(9)
title_text = f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m"
prf2.plot_profiles(
xlabel="distance upstream (m)", ylabel="elevation (m)", title=title_text
)
# plot the location of the channels in map view
plt.figure(10)
prf2.plot_profiles_in_map_view()
# slope-area data in just the profiled channels
plt.figure(11)
for i, outlet_id in enumerate(prf2.data_structure):
for j, segment_id in enumerate(prf2.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf2.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.loglog(
mg2.at_node["drainage_area"][profile_ids],
mg2.at_node["topographic__steepest_slope"][profile_ids],
".",
color=color,
label=label,
)
plt.legend(loc="lower left")
plt.xlabel("drainage area (m^2)")
plt.ylabel("channel slope [m/m]")
title_text = f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m"
plt.title(title_text)
# Code Block 20
# Chi Plots
# calculate the chi index
cf2.calculate_chi()
# chi-elevation plots in the profiled channels
plt.figure(12)
for i, outlet_id in enumerate(prf2.data_structure):
for j, segment_id in enumerate(prf2.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf2.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.plot(
mg2.at_node["channel__chi_index"][profile_ids],
mg2.at_node["topographic__elevation"][profile_ids],
color=color,
label=label,
)
plt.xlabel("chi index (m)")
plt.ylabel("elevation (m)")
plt.legend(loc="lower right")
title_text = (
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
plt.title(title_text)
# chi map
plt.figure(13)
imshow_grid(
mg2,
"channel__chi_index",
grid_units=("m", "m"),
var_name="Chi index (m)",
cmap="jet",
)
plt.title(
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
# Code Block 21
# Plot channel steepness along profiles and across the landscape
# calculate channel steepness
sf2.calculate_steepnesses()
# plots of steepnes vs. distance upstream in the profiled channels
plt.figure(14)
for i, outlet_id in enumerate(prf2.data_structure):
for j, segment_id in enumerate(prf2.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf2.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
distance_upstream = segment["distances"]
color = segment["color"]
plt.plot(
distance_upstream,
mg2.at_node["channel__steepness_index"][profile_ids],
"x",
color=color,
label=label,
)
plt.xlabel("distance upstream (m)")
plt.ylabel("steepness index")
plt.legend(loc="upper left")
plt.title(
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
# channel steepness map
plt.figure(15)
imshow_grid(
mg2,
"channel__steepness_index",
grid_units=("m", "m"),
var_name="Steepness index ",
cmap="jet",
)
plt.title(
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
Explanation: After running every code block once, has the landscape reached steady state? Answer: NO! How do you know? After you think about this, you are ready to complete this project.
Answer the following questions using the code above and below. All answers should be typed, and supporting figures (produced using the code) should be embedded in one document that you hand in. Code Blocks 8-12 and 18-21 produce different figures that you may find useful. You can use any or all of these different figures to help you with the questions below. (Download or screenshoot the figures.)
Anything with a question mark should be answered in the document that you hand in. Make sure your write in full sentences and proofread the document that you hand in.
Steady state with low uplift rate. Using the parameters provided in the initial notebook, run the landscape to steady state. (Note that you can keep running the main evolution loop - Code Block 7 - and the different plotting blocks without running the code blocks above them. You may also want to change $tmax$ in Code Block 4.) How did you know that the landscape reached steady state? Note the approximate time that it took to reach steady state for your own reference. (This will be usefull for later questions.) Include appropriate plots. (If you want to analyze these landscapes outside of Landlab or save for later, make sure you save the elevation data to a text file (Code Block 13).)
NOTE, For the rest of the questions you should use Code Blocks 14 - 21. These will allow you to use the steady-state landscape created for question 1 - referred to here as the 'base landscape' - as the initial condition. Start by editing what you need to in Code Blocks 14 - 16. Run these each once, sequentially. You can run Code Block 17, the time loop, as many times as you need to, along with Code Blocks 18-21, which produce plots.
Transient landscape responding to an increase in rock uplift. Use the base landscape and increase rock uplift uniformly by a factor of 4 to 0.0004 m/yr. Make sure you update the rock uplift rate (Code Block 16) and ensure that $tmax$ is 1e5 yrs and $dt$ is 500 yrs (Code Block 15). Run this until the maximum elevation in the grid is ~ 170 m and observe how the landscape gets to this elevation, i.e. plot intermediate steps. What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a landscape response to uniform increase in rock uplift rate? (You may need to answer this after completing all of the questions.)
Steady-state landscape with increased rock uplift. Now run the landscape from question 2 until it reaches steady state. (I.e. run the time loop, Code Block 17, a bunch of times. You can increase $tmax$ and $dt$ to make this run faster.) Provide a plot that illustrates that the landscape is in steady state. What aspects of the landscape have changed in comparison with the base landscape from question 1?
Increase erodibility. Start again from the base landscape, but this time increase $K_{sp}$ to 2E-5 (Code Block 14). Make sure rock uplift rate is set to the original value of 0.0001 m/yr (Code Block 16). Set $tmax$ to 1e5 yrs (Code Block 15). Run for 1e5 yrs and save the plots that you think are diagnostic. Run for another 1e5 yrs and save plots again. Now run for 5e5 yrs and save plots again. Quantitatively describe how the landscape evolves in response to the increase in erodibility and provide supporting plots. What could cause a uniform increase in erodibility?
Spatially varible uplift - discrete, massive earthquake. Start again from the base landscape, and make sure that $K_{sp}$ = 1E-5 (Code Block 14). Now add a seismic event to this steady state landscape - a fault that runs horizontally across the landscape at y = 4000 m, and instantaneously uplifts half the landscape by 10 meters (Code Block 16). In this case, we will keep background uplift uniform at 0.0001 m/yr. Set $tmax$ to 1e5 yrs and $dt$ to 500 yrs (Code Block 15) before evolving the landscape after the fault. Now run the time loop four times and look at the different plots after each loop. How does the landscape respond to this fault? What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a channel response to an earthquake? (You may need to answer this after completing all of the questions.)
Spatially Varible Rock Uplift - discrete fault with two different uplift rates. Start again from the base landscape, and make sure that $K_{sp}$ = 1E-5 (Code Block 14). Now we will add a fault (at y = 4000 m) to this landscape. In this case the uplift rate on the footwall is higher (0.0004 m/yr) than on the hanging wall (uplift rate = 0.0001 m/yr). (Edit Code Block 16.) Set $tmax$ to 1e5 yrs and $dt$ to 500 yrs (Code Block 15). Now run the time loop four separate times and look at the different plots after each loop. How does the landscape respond to this fault? What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a channel response to a this type of gradient in rock uplift rates? (You may need to answer this after completing all of the questions.)
Spatially Varible Rock Uplift - gradient in uplift across the range. Start again from the base landscape, and make sure that $K_{sp}$ = 1E-5 (Code Block 14). Now we will add a linear gradient in uplift rate across the entire range (edit Code Block 16). The maximum uplift rate will be 0.0004 m/yr at the core of the range, and 0.0001 m/yr at the front of the range. Set $tmax$ to 1e5 yrs (Code Block 4) and $dt$ to 500 yrs before you start running the time loop for the fault before you start running the time loop with the rock uplift gradient. Now run the time loop four separate times and look at the different plots after each loop. How does the landscape respond to this gradient in uplift rate? What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a channel response to this type of gradient in rock uplift rates? (You may need to answer this after completing all of the questions.)
Final Reflection. Was your initial insight into how parameters would affect the landscape correct? Discuss in 6 sentences or less.
End of explanation
<END_TASK> |
133 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TF Lattice Aggregate Function Models
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Importing required packages
Step3: Downloading the puzzles dataset
Step4: Extracting and converting features and labels
Step5: Setting the default values used for training in this guide
Step6: Feature configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
Feature configs must be fully specified for every feature that the model should recognize; otherwise the model has no way of knowing that such a feature exists. For aggregation models, these features are automatically considered and properly handled as ragged.
Computing quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
Step7: Defining our feature configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
Step8: Aggregate Function Model
To construct a TFL premade model, first construct a model configuration from tfl.configs. An aggregate function model is constructed using tfl.configs.AggregateFunctionConfig. It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragged input. It then applies an aggregation layer over the output for each dimension, followed by an optional output piecewise-linear calibration.
Step9: The output of each aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first aggregation layer.
Step10: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
Step11: After training our model, we can evaluate it on the test set.
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice pydot
Explanation: TF Lattice Aggregate Function Models
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/aggregate_function_learning_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/aggregate_function_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/aggregate_function_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/aggregate_function_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Premade TFL aggregate function models are a quick and easy way to build TFL tf.keras.model instances for learning complex aggregation functions. This guide outlines the steps needed to construct a TFL premade aggregate function model and to train/test it.
Setup
Installing the TF Lattice package
End of explanation
import tensorflow as tf
import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Importing required packages
End of explanation
train_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()
test_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()
Explanation: Downloading the puzzles dataset
End of explanation
# Features:
# - star_rating rating out of 5 stars (1-5)
# - word_count number of words in the review
# - is_amazon 1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo if the review includes a photo of the puzzle
# - num_helpful number of people that found this review helpful
# - num_reviews total number of reviews for this puzzle (we construct)
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'star_rating', 'word_count', 'is_amazon', 'includes_photo', 'num_helpful',
'num_reviews'
]
def extract_features(dataframe, label_name):
# First we extract flattened features.
flattened_features = {
feature_name: dataframe[feature_name].values.astype(float)
for feature_name in feature_names[:-1]
}
# Construct mapping from puzzle name to feature.
star_rating = collections.defaultdict(list)
word_count = collections.defaultdict(list)
is_amazon = collections.defaultdict(list)
includes_photo = collections.defaultdict(list)
num_helpful = collections.defaultdict(list)
labels = {}
# Extract each review.
for i in range(len(dataframe)):
row = dataframe.iloc[i]
puzzle_name = row['puzzle_name']
star_rating[puzzle_name].append(float(row['star_rating']))
word_count[puzzle_name].append(float(row['word_count']))
is_amazon[puzzle_name].append(float(row['is_amazon']))
includes_photo[puzzle_name].append(float(row['includes_photo']))
num_helpful[puzzle_name].append(float(row['num_helpful']))
labels[puzzle_name] = float(row[label_name])
# Organize data into list of list of features.
names = list(star_rating.keys())
star_rating = [star_rating[name] for name in names]
word_count = [word_count[name] for name in names]
is_amazon = [is_amazon[name] for name in names]
includes_photo = [includes_photo[name] for name in names]
num_helpful = [num_helpful[name] for name in names]
num_reviews = [[len(ratings)] * len(ratings) for ratings in star_rating]
labels = [labels[name] for name in names]
# Flatten num_reviews
flattened_features['num_reviews'] = [len(reviews) for reviews in num_reviews]
# Convert data into ragged tensors.
star_rating = tf.ragged.constant(star_rating)
word_count = tf.ragged.constant(word_count)
is_amazon = tf.ragged.constant(is_amazon)
includes_photo = tf.ragged.constant(includes_photo)
num_helpful = tf.ragged.constant(num_helpful)
num_reviews = tf.ragged.constant(num_reviews)
labels = tf.constant(labels)
# Now we can return our extracted data.
return (star_rating, word_count, is_amazon, includes_photo, num_helpful,
num_reviews), labels, flattened_features
train_xs, train_ys, flattened_features = extract_features(train_dataframe, 'Sales12-18MonthsAgo')
test_xs, test_ys, _ = extract_features(test_dataframe, 'SalesLastSixMonths')
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
Explanation: Extracting and converting features and labels
End of explanation
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8
Explanation: Setting the default values used for training in this guide
End of explanation
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
Explanation: Feature configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
Feature configs must be fully specified for every feature that the model should recognize; otherwise the model has no way of knowing that such a feature exists. For aggregation models, these features are automatically considered and properly handled as ragged.
Computing quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
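As a minimal illustration of what this helper returns (synthetic input values, used only for this example), passing the integers 0-100 with five keypoints should yield evenly spaced quantiles:
compute_quantiles(np.arange(0, 101, dtype=float), num_keypoints=5)
# -> array([  0.,  25.,  50.,  75., 100.])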
End of explanation
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='star_rating',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['star_rating'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='word_count',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['word_count'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='is_amazon',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='includes_photo',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='num_helpful',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_helpful'], num_keypoints=5),
# Larger num_helpful indicating more trust in star_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="star_rating", trust_type="trapezoid"),
],
),
tfl.configs.FeatureConfig(
name='num_reviews',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_reviews'], num_keypoints=5),
)
]
Explanation: Defining our feature configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
End of explanation
# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
feature_configs=feature_configs,
middle_dimension=MIDDLE_DIM,
middle_lattice_size=MIDDLE_LATTICE_SIZE,
middle_calibration=True,
middle_calibration_num_keypoints=MIDDLE_KEYPOINTS,
middle_monotonicity='increasing',
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=OUTPUT_KEYPOINTS,
output_initialization=np.linspace(
min_label, max_label, num=OUTPUT_KEYPOINTS))
# An AggregateFunction premade model constructed from the given model config.
aggregate_function_model = tfl.premade.AggregateFunction(
aggregate_function_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
aggregate_function_model, show_layer_names=False, rankdir='LR')
Explanation: Aggregate Function Model
To construct a TFL premade model, first construct a model configuration from tfl.configs. An aggregate function model is constructed using tfl.configs.AggregateFunctionConfig. It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragged input. It then applies an aggregation layer over the output for each dimension, followed by an optional output piecewise-linear calibration.
End of explanation
aggregation_layers = [
layer for layer in aggregate_function_model.layers
if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
aggregation_layers[0].model, show_layer_names=False, rankdir='LR')
Explanation: The output of each aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first aggregation layer.
End of explanation
aggregate_function_model.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
Explanation: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
End of explanation
print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
Explanation: After training our model, we can evaluate it on the test set.
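Beyond evaluate, the trained model can also produce raw predictions in the usual Keras way; a minimal sketch reusing the test tensors built earlier (this line is an addition, not part of the original tutorial):
predictions = aggregate_function_model.predict(test_xs)
print(predictions[:5])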
End of explanation
<END_TASK> |
134 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Name
Data processing by creating a cluster in Cloud Dataproc
Label
Cloud Dataproc, cluster, GCP, Cloud Storage, KubeFlow, Pipeline
Summary
A Kubeflow Pipeline component to create a cluster in Cloud Dataproc.
Details
Intended use
Use this component at the start of a Kubeflow Pipeline to create a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to create the cluster in. | No | GCPRegion | | |
| name | The name of the cluster. Cluster names within a project must be unique. You can reuse the names of deleted clusters. | Yes | String | | None |
| name_prefix | The prefix of the cluster name. | Yes | String | | None |
| initialization_actions | A list of Cloud Storage URIs identifying executables to execute on each node after the configuration is completed. By default, executables are run on the master and all the worker nodes. | Yes | List | | None |
| config_bucket | The Cloud Storage bucket to use to stage the job dependencies, the configuration files, and the job driver console’s output. | Yes | GCSPath | | None |
| image_version | The version of the software inside the cluster. | Yes | String | | None |
| cluster | The full cluster configuration. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause before polling the operation. | Yes | Integer | | 30 |
Output
Name | Description | Type
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Example pipeline that uses the component
Step4: Compile the pipeline
Step5: Submit the pipeline for execution | <ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
Explanation: Name
Data processing by creating a cluster in Cloud Dataproc
Label
Cloud Dataproc, cluster, GCP, Cloud Storage, KubeFlow, Pipeline
Summary
A Kubeflow Pipeline component to create a cluster in Cloud Dataproc.
Details
Intended use
Use this component at the start of a Kubeflow Pipeline to create a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to create the cluster in. | No | GCPRegion | | |
| name | The name of the cluster. Cluster names within a project must be unique. You can reuse the names of deleted clusters. | Yes | String | | None |
| name_prefix | The prefix of the cluster name. | Yes | String | | None |
| initialization_actions | A list of Cloud Storage URIs identifying executables to execute on each node after the configuration is completed. By default, executables are run on the master and all the worker nodes. | Yes | List | | None |
| config_bucket | The Cloud Storage bucket to use to stage the job dependencies, the configuration files, and the job driver console’s output. | Yes | GCSPath | | None |
| image_version | The version of the software inside the cluster. | Yes | String | | None |
| cluster | The full cluster configuration. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause before polling the operation. | Yes | Integer | | 30 |
Output
Name | Description | Type
:--- | :---------- | :---
cluster_name | The name of the cluster. | String
Note: You can recycle the cluster by using the Dataproc delete cluster component.
Cautions & requirements
To use the component, you must:
* Set up the GCP project by following these steps.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the following types of access to the Kubeflow user service account:
* Read access to the Cloud Storage buckets which contains initialization action files.
* The role, roles/dataproc.editor on the project.
Detailed description
This component creates a new Dataproc cluster by using the Dataproc create cluster REST API.
Follow these steps to use the component in a pipeline:
Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
dataproc_create_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/create_cluster/component.yaml')
help(dataproc_create_cluster_op)
Explanation: Load the component using KFP SDK
End of explanation
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
# Optional Parameters
EXPERIMENT_NAME = 'Dataproc - Create Cluster'
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Set sample parameters
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc create cluster pipeline',
description='Dataproc create cluster pipeline'
)
def dataproc_create_cluster_pipeline(
project_id = PROJECT_ID,
region = 'us-central1',
name='',
name_prefix='',
initialization_actions='',
config_bucket='',
image_version='',
cluster='',
wait_interval='30'
):
dataproc_create_cluster_op(
project_id=project_id,
region=region,
name=name,
name_prefix=name_prefix,
initialization_actions=initialization_actions,
config_bucket=config_bucket,
image_version=image_version,
cluster=cluster,
wait_interval=wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = dataproc_create_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
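The empty arguments dict above runs the pipeline with its default parameter values. To override them, the dict can carry any of the pipeline parameters defined earlier; a hypothetical example with placeholder values (not validated against a real project):
arguments = {
    'name': 'my-dataproc-cluster',
    'image_version': '1.4',
}
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)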
End of explanation
<END_TASK> |
135 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that had to be pushed to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution from which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology: what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the methodology for heat diffusion through snow in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
<END_TASK> |
136 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Build a fraud detection model on Vertex AI
Step1: <table align="left">
<td>
<a href="https
Step2: Install the latest version of the Vertex AI client library.
Run the following command in your notebook environment to install the Vertex SDK for Python
Step3: Run the following command in your notebook environment to install witwidget
Step4: Run the following command in your notebook environment to install joblib
Step5: Run the following command in your notebook environment to install scikit-learn
Step6: Run the following command in your notebook environment to install fsspec
Step7: Run the following command in your notebook environment to install gcsfs
Step8: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step9: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
Step10: Otherwise, set your project ID here.
Step11: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step12: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps
Step13: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you create a model in Vertex AI using the Cloud SDK, you give a Cloud Storage path where the trained model is saved.
In this tutorial, Vertex AI saves the trained model to a Cloud Storage bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step14: Only if your bucket doesn't already exist
Step15: Finally, validate access to your Cloud Storage bucket by examining its contents
Step16: Tutorial
Import required libraries
Step17: Analyze the dataset
<a name="section-5"></a>
Take a quick look at the dataset and the number of rows.
Step18: Check for null values.
Step19: Check the type of transactions involved.
Step20: Working with imbalanced data
Although the outcome variable "isFraud" is very imbalanced in the current dataset, a base model can be trained on it to check the quality of fraudulent transactions in the data, and if needed, counter-measures like undersampling of the majority class or oversampling of the minority class can be considered.
Step21: Prepare data for modeling
To prepare the dataset for training, a few columns need to be dropped: those containing unique identifiers ('nameOrig', 'nameDest') and the redundant field 'isFlaggedFraud'. The categorical field "type", which describes the type of transaction and is important for fraud detection, needs to be one-hot encoded.
Step22: Remove the outcome variable from the training data.
Step23: Split the data and assign 70% for training and 30% for testing.
Step24: Fit a random forest model
<a name="section-6"></a>
Fit a simple random forest classifier on the preprocessed training dataset.
Step25: Analyzing Results
<a name="section-7"></a>
The model returns good scores and the confusion matrix confirms that this model can indeed work with imbalanced data.
Step26: Use RandomForestClassifier's feature_importances_ function to get a better understanding about which features were the most useful to the model.
Step27: Save the model to a Cloud Storage path
<a name="section-8"></a>
Step28: Create a model in Vertex AI
<a name="section-9"></a>
Step29: Create an Endpoint
<a name="section-10"></a>
Step30: Deploy the model to the created Endpoint
Configure the deployment name, machine type, and other parameters for the deployment.
Step31: What-If Tool
<a name="section-11"></a>
The What-If Tool can be used to analyze the model's predictions on test data. See a brief introduction to the What-If Tool. In this tutorial, the What-If Tool is configured and run on the model trained locally, and on the model deployed to the Vertex AI Endpoint in the previous steps.
WitConfigBuilder provides the set_ai_platform_model() method to configure the What-If Tool with a model deployed as a version on AI Platform models. This feature currently supports AI Platform only, not Vertex AI models. Fortunately, there is also an option to pass a custom prediction function through the set_custom_predict_fn() method, which can wrap either the locally trained model or a function that returns predictions from a Vertex AI model.
Prepare test samples
Save some samples from the test data for both the available classes (Fraud/not-Fraud) to analyze the model using the What-If Tool.
Step32: Running the What-If Tool on the local model
Step33: Running the What-If Tool on the deployed Vertex AI model
Step34: Undeploy the model
When you are done making predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step35: Clean up
<a name="section-12"></a>
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | <ASSISTANT_TASK:>
Python Code:
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Build a fraud detection model on Vertex AI
End of explanation
import os
import google.auth
USER_FLAG = ""
# Google Cloud Notebook requires dependencies to be installed with '--user'
if "default" in dir(google.auth):
USER_FLAG = "--user"
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/fraud_detection/fraud-detection-model.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/fraud_detection/fraud-detection-model.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/fraud_detection/fraud-detection-model.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Table of contents
Overview
Dataset
Objective
Costs
Analyze the dataset
Fit a random forest model
Analyzing results
Save the model to a Cloud Storage path
Create a model in Vertex AI
Create an Endpoint
What-If Tool
Clean up
Overview
<a name="section-1"></a>
This tutorial shows you how to build, deploy, and analyze predictions from a simple random forest model using tools like scikit-learn, Vertex AI, and the What-If Tool (WIT) on a synthetic fraud transaction dataset to solve a financial fraud detection problem.
Dataset
<a name="section-2"></a>
The dataset used in this tutorial is publicly available at Kaggle. See Synthetic Financial Datasets For Fraud Detection.
Objective
<a name="section-3"></a>
This tutorial demonstrates data analysis and model-building using a synthetic financial dataset. The model is trained to identify fraudulent cases among the transactions. Then, the trained model is deployed on a Vertex AI Endpoint and analyzed using the What-If Tool. The steps taken in this tutorial are as follows:
Installation of required libraries
Reading the dataset from a Cloud Storage bucket
Performing exploratory analysis on the dataset
Preprocessing the dataset
Training a random forest model using scikit-learn
Saving the model to a Cloud Storage bucket
Creating a Vertex AI model resource and deploying to an endpoint
Running the What-If Tool on test data
Un-deploying the model and cleaning up the model resources
Costs
<a name="section-4"></a>
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
End of explanation
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Install the latest version of the Vertex AI client library.
Run the following command in your notebook environment to install the Vertex SDK for Python:
End of explanation
! pip install {USER_FLAG} witwidget
Explanation: Run the following command in your notebook environment to install witwidget:
End of explanation
! pip install {USER_FLAG} joblib
Explanation: Run the following command in your notebook environment to install joblib:
End of explanation
! pip install {USER_FLAG} scikit-learn
Explanation: Run the following command in your notebook environment to install scikit-learn:
End of explanation
! pip install {USER_FLAG} fsspec
Explanation: Run the following command in your notebook environment to install fsspec:
End of explanation
! pip install {USER_FLAG} gcsfs
Explanation: Run the following command in your notebook environment to install gcsfs:
End of explanation
# Automatically restart kernel after installs
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project $PROJECT_ID
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "-vertex-ai-" + TIMESTAMP
BUCKET_URI = f"gs://{BUCKET_NAME}"
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you create a model in Vertex AI using the Cloud SDK, you give a Cloud Storage path where the trained model is saved.
In this tutorial, Vertex AI saves the trained model to a Cloud Storage bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import warnings
import joblib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from google.cloud import aiplatform, storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (average_precision_score, classification_report,
confusion_matrix, f1_score)
from sklearn.model_selection import train_test_split
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
warnings.filterwarnings("ignore")
# Load dataset
df = pd.read_csv(
"gs://cloud-samples-data/vertex-ai/managed_notebooks/fraud_detection/fraud_detection_data.csv"
)
Explanation: Tutorial
Import required libraries
End of explanation
print("shape : ", df.shape)
df.head()
Explanation: Analyze the dataset
<a name="section-5"></a>
Take a quick look at the dataset and the number of rows.
End of explanation
df.isnull().sum()
Explanation: Check for null values.
End of explanation
print(df.type.value_counts())
var = df.groupby("type").amount.sum()
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)
var.plot(kind="bar")
ax1.set_title("Total amount per transaction type")
ax1.set_xlabel("Type of Transaction")
ax1.set_ylabel("Amount")
Explanation: Check the type of transactions involved.
End of explanation
# Count number of fraudulent/non-fraudulent transactions
df.isFraud.value_counts()
piedata = df.groupby(["isFlaggedFraud"]).sum(numeric_only=True)  # aggregate only numeric columns; string columns such as "type" cannot be summed
f, axes = plt.subplots(1, 1, figsize=(6, 6))
axes.set_title("% of fraud transaction detected")
piedata.plot(
kind="pie", y="isFraud", ax=axes, fontsize=14, shadow=False, autopct="%1.1f%%"
)
axes.set_ylabel("")
plt.legend(loc="upper left", labels=["Not Detected", "Detected"])
plt.show()
Explanation: Working with imbalanced data
Although the outcome variable "isFraud" is very imbalanced in the current dataset, a base model can be trained on it to check the quality of fraudulent transactions in the data, and if needed, counter-measures like undersampling of the majority class or oversampling of the minority class can be considered.
End of explanation
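# The note above mentions undersampling or oversampling as possible counter-measures.
# Below is a minimal, optional sketch of majority-class undersampling with plain pandas;
# the 10:1 ratio is an arbitrary illustration and the result is not used in the rest of
# this tutorial.
fraud_df = df[df.isFraud == 1]
non_fraud_df = df[df.isFraud == 0].sample(n=len(fraud_df) * 10, random_state=42)
undersampled_df = pd.concat([fraud_df, non_fraud_df]).sample(frac=1, random_state=42)
print(undersampled_df.isFraud.value_counts())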
df.drop(["nameOrig", "nameDest", "isFlaggedFraud"], axis=1, inplace=True)
X = pd.concat([df.drop("type", axis=1), pd.get_dummies(df["type"])], axis=1)
X.head()
Explanation: Prepare data for modeling
To prepare the dataset for training, a few columns need to be dropped: those containing unique identifiers ('nameOrig', 'nameDest') and the redundant field 'isFlaggedFraud'. The categorical field "type", which describes the type of transaction and is important for fraud detection, needs to be one-hot encoded.
End of explanation
y = X[["isFraud"]]
X = X.drop(["isFraud"], axis=1)
Explanation: Remove the outcome variable from the training data.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42, shuffle=False
)
print(X_train.shape, X_test.shape)
Explanation: Split the data and assign 70% for training and 30% for testing.
End of explanation
print("before initiating")
forest = RandomForestClassifier(verbose=1)
print("after initiating")
forest.fit(X_train, y_train)
print("after fitting")
Explanation: Fit a random forest model
<a name="section-6"></a>
Fit a simple random forest classifier on the preprocessed training dataset.
End of explanation
print("before predicting")
y_prob = forest.predict_proba(X_test)
print("after predicting y_prob")
y_pred = forest.predict(X_test)
print("AUPRC :", (average_precision_score(y_test, y_prob[:, 1])))
print("F1 - score :", (f1_score(y_test, y_pred)))
print("Confusion_matrix : ")
print(confusion_matrix(y_test, y_pred))
print("classification_report")
print(classification_report(y_test, y_pred))
print("after printing classification_report")
Explanation: Analyzing Results
<a name="section-7"></a>
The model returns good scores and the confusion matrix confirms that this model can indeed work with imbalanced data.
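To make that claim concrete, precision and recall for the fraud class can also be read straight off the confusion matrix; a small sketch, assuming the usual sklearn layout [[TN, FP], [FN, TP]]:
# Sketch: derive fraud-class precision/recall from the confusion matrix entries
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("Fraud precision :", tp / (tp + fp))  # of transactions flagged as fraud, how many truly are
print("Fraud recall :", tp / (tp + fn))  # of actual fraud transactions, how many were caught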
End of explanation
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
forest_importances = pd.Series(importances, index=list(X_train))
fig, ax = plt.subplots()
forest_importances.plot.bar(yerr=std, ax=ax)
ax.set_title("Feature Importance for Fraud Transaction Detection Model")
ax.set_ylabel("Importance")
fig.tight_layout()
Explanation: Use RandomForestClassifier's feature_importances_ function to get a better understanding about which features were the most useful to the model.
End of explanation
# save the trained model to a local file "model.joblib"
FILE_NAME = "model.joblib"
joblib.dump(forest, FILE_NAME)
# Upload the saved model file to Cloud Storage
BLOB_PATH = "[your-blob-path]"
BLOB_NAME = os.path.join(BLOB_PATH, FILE_NAME)
bucket = storage.Client(PROJECT_ID).bucket(BUCKET_NAME)
blob = bucket.blob(BLOB_NAME)
blob.upload_from_filename(FILE_NAME)
Explanation: Save the model to a Cloud Storage path
<a name="section-8"></a>
End of explanation
MODEL_DISPLAY_NAME = "[your-model-display-name]"
ARTIFACT_GCS_PATH = f"{BUCKET_URI}/{BLOB_PATH}"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
)
# Create a Vertex AI model resource
aiplatform.init(project=PROJECT_ID, location=REGION)
model = aiplatform.Model.upload(
display_name=MODEL_DISPLAY_NAME,
artifact_uri=ARTIFACT_GCS_PATH,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Create a model in Vertex AI
<a name="section-9"></a>
End of explanation
ENDPOINT_DISPLAY_NAME = "[your-endpoint-display-name]"
endpoint = aiplatform.Endpoint.create(display_name=ENDPOINT_DISPLAY_NAME)
print(endpoint.display_name)
print(endpoint.resource_name)
Explanation: Create an Endpoint
<a name="section-10"></a>
End of explanation
DEPLOYED_MODEL_NAME = "[your-deployed-model-name]"
MACHINE_TYPE = "n1-standard-2"
# deploy the model to the endpoint
model.deploy(
endpoint=endpoint,
deployed_model_display_name=DEPLOYED_MODEL_NAME,
machine_type=MACHINE_TYPE,
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Deploy the model to the created Endpoint
Configure the deployment name, machine type, and other parameters for the deployment.
End of explanation
# collect 50 samples for each class-label from the test data
pos_samples = y_test[y_test["isFraud"] == 1].sample(50).index
neg_samples = y_test[y_test["isFraud"] == 0].sample(50).index
test_samples_y = pd.concat([y_test.loc[pos_samples], y_test.loc[neg_samples]])
test_samples_X = X_test.loc[test_samples_y.index].copy()
Explanation: What-If Tool
<a name="section-11"></a>
The What-If Tool can be used to analyze the model predictions on test data. See a brief introduction to the What-If Tool. In this tutorial, the What-If Tool will be configured and run on the model trained locally, and on the model deployed on the Vertex AI Endpoint in the previous steps.
WitConfigBuilder provides the set_ai_platform_model() method to configure the What-If Tool with a model deployed as a version on AI Platform. This feature currently supports AI Platform only, not Vertex AI models. Fortunately, there is also an option to pass a custom function for generating predictions through the set_custom_predict_fn() method, where either the locally trained model or a function that returns predictions from a Vertex AI model can be passed.
Prepare test samples
Save some samples from the test data for both the available classes (Fraud/not-Fraud) to analyze the model using the What-If Tool.
End of explanation
# define target and labels
TARGET_FEATURE = "isFraud"
LABEL_VOCAB = ["not-fraud", "fraud"]
# define the function to adjust the predictions
def adjust_prediction(pred):
return [1 - pred, pred]
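# Note: adjust_prediction is not used below; forest.predict_proba already returns probabilities for both classes.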
# Combine the features and labels into one array for the What-If Tool
test_examples = np.hstack(
(test_samples_X.to_numpy(), test_samples_y.to_numpy().reshape(-1, 1))
)
# Configure the WIT to run on the locally trained model
config_builder = (
WitConfigBuilder(
test_examples.tolist(), test_samples_X.columns.tolist() + ["isFraud"]
)
.set_custom_predict_fn(forest.predict_proba)
.set_target_feature(TARGET_FEATURE)
.set_label_vocab(LABEL_VOCAB)
)
# display the WIT widget
WitWidget(config_builder, height=600)
Explanation: Running the What-If Tool on the local model
End of explanation
# configure the target and class-labels
TARGET_FEATURE = "isFraud"
LABEL_VOCAB = ["not-fraud", "fraud"]
# function to return predictions from the deployed Model
def endpoint_predict_sample(instances: list):
prediction = endpoint.predict(instances=instances)
preds = [[1 - i, i] for i in prediction.predictions]
return preds
# Combine the features and labels into one array for the What-If Tool
test_examples = np.hstack(
(test_samples_X.to_numpy(), test_samples_y.to_numpy().reshape(-1, 1))
)
# Configure the WIT with the prediction function
config_builder = (
WitConfigBuilder(
test_examples.tolist(), test_samples_X.columns.tolist() + ["isFraud"]
)
.set_custom_predict_fn(endpoint_predict_sample)
.set_target_feature(TARGET_FEATURE)
.set_label_vocab(LABEL_VOCAB)
)
# run the WIT-widget
WitWidget(config_builder, height=400)
Explanation: Running the What-If Tool on the deployed Vertex AI model
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done making predictions, undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
# delete the endpoint
endpoint.delete()
# delete the model
model.delete()
delete_bucket = True
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Clean up
<a name="section-12"></a>
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
<END_TASK> |
137 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
Step2: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient)
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still | <ASSISTANT_TASK:>
Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
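As a quick sanity check of formula (1) on this model (a worked example, not part of the graded exercise), plugging $J(\theta) = \theta x$ into the centered difference recovers $x$ exactly for any $\varepsilon > 0$:
$$ \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} = \frac{(\theta + \varepsilon)x - (\theta - \varepsilon)x}{2 \varepsilon} = \frac{2 \varepsilon x}{2 \varepsilon} = x$$
so with $x = 2$ the numerical approximation should match the analytic gradient $\frac{\partial J}{\partial \theta} = x = 2$ up to floating-point precision.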
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
End of explanation
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / 2 / epsilon # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
End of explanation
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
Explanation: Now, run backward propagation.
End of explanation
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] += epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] -= epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / 2 / epsilon
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
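For intuition only, a rough sketch of the flattening idea behind these helpers is shown below; the real implementations live in gc_utils, and the name flatten_parameters is made up for illustration.
# Illustrative sketch (not the provided gc_utils code): reshape each parameter into a column and stack them
def flatten_parameters(parameters):
    keys = ["W1", "b1", "W2", "b2", "W3", "b3"]
    columns = [parameters[key].reshape(-1, 1) for key in keys]
    return np.concatenate(columns, axis=0)  # shape: (num_parameters, 1)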
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
End of explanation
<END_TASK> |
138 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TensorFlow Probability on JAX
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: We can install TFP on JAX with the latest nightly builds of TFP.
Step3: Let's import some useful Python libraries.
Step4: Let's also import some basic JAX functionality.
Step5: Importing TFP on JAX
To use TFP on JAX, simply import the jax "substrate" and use it as you usually would tfp
Step6: Demo
Step7: We can define the model using tfd.JointDistributionCoroutine. We'll put standard normal priors on both the weights and the bias term then write a target_log_prob function that pins the sampled labels to the data.
Step8: We sample from dist to produce an initial state for MCMC. We can then define a function that takes in a random key and an initial state, and produces 500 samples from a No-U-Turn-Sampler (NUTS). Note that we can use JAX transformations like jit to compile our NUTS sampler using XLA.
Step9: Let's use our samples to perform Bayesian model averaging (BMA) by averaging the predicted probabilies of each set of weights.
First let's write a function that for a given set of parameters will produce the probabilities over each class. We can use dist.sample_distributions to obtain the final distribution in the model.
Step10: We can vmap(classifier_probs) over the set of samples to get the predicted class probabilities for each of our samples. We then compute the average accuracy across each sample, and the accuracy from Bayesian model averaging.
Step11: Looks like BMA reduces our error rate by almost a third!
Fundamentals
TFP on JAX has an identical API to TF where instead of accepting TF objects like tf.Tensors it accepts the JAX analogue. For example, wherever a tf.Tensor was previously used as input, the API now expects a JAX DeviceArray. Instead of returning a tf.Tensor, TFP methods will return DeviceArrays. TFP on JAX also works with nested structures of JAX objects, like a list or dictionary of DeviceArrays.
Distributions
Most of TFP's distributions are supported in JAX with very similar semantics to their TF counterparts. They are also registered as JAX Pytrees, so they can be inputs and outputs of JAX-transformed functions.
Basic distributions
The log_prob method for distributions works the same.
Step12: Sampling from a distribution requires explicitly passing in a PRNGKey (or list of integers) as the seed keyword argument. Failing to explicitly pass in a seed will throw an error.
Step13: The shape semantics for distributions remain the same in JAX, where distributions will each have an event_shape and a batch_shape and drawing many samples will add additional sample_shape dimensions.
For example, a tfd.MultivariateNormalDiag with vector parameters will have a vector event shape and empty batch shape.
Step14: On the other hand, a tfd.Normal parameterized with vectors will have a scalar event shape and vector batch shape.
Step15: The semantics of taking log_prob of samples works the same in JAX too.
Step16: Because JAX DeviceArrays are compatible with libraries like NumPy and Matplotlib, we can feed samples directly into a plotting function.
Step17: Distribution methods are compatible with JAX transformations.
Step18: Because TFP distributions are registered as JAX pytree nodes, we can write functions with distributions as inputs or outputs and transform them using jit, but they are not yet supported as arguments to vmap-ed functions.
Step19: Transformed distributions
Transformed distributions i.e. distributions whose samples are passed through a Bijector also work out of the box (bijectors work too! see below).
Step20: Joint distributions
TFP offers JointDistributions to enable combining component distributions into a single distribution over multiple random variables. Currently, TFP offers three core variants (JointDistributionSequential, JointDistributionNamed, and JointDistributionCoroutine) all of which are supported in JAX. The AutoBatched variants are also all supported.
Step21: Other distributions
Gaussian processes also work in JAX mode!
Step22: Hidden Markov models are also supported.
Step23: A few distributions like PixelCNN are not supported yet due to strict dependencies on TensorFlow or XLA incompatibilities.
Bijectors
Most of TFP's bijectors are supported in JAX today!
Step24: Bijectors are compatible with JAX transformations like jit, grad and vmap.
Step25: Some bijectors, like RealNVP and FFJORD are not yet supported.
MCMC
We've ported tfp.mcmc to JAX as well, so we can run algorithms like Hamiltonian Monte Carlo (HMC) and the No-U-Turn-Sampler (NUTS) in JAX.
Step26: Unlike TFP on TF, we are required to pass a PRNGKey into sample_chain using the seed keyword argument.
Step27: To run multiple chains, we can either pass a batch of states into sample_chain or use vmap (though we have not yet explored performance differences between the two approaches).
Step28: Optimizers
TFP on JAX supports some important optimizers like BFGS and L-BFGS. Let's set up a simple scaled quadratic loss function.
Step29: BFGS can find the minimum of this loss.
Step30: So can L-BFGS.
Step31: To vmap L-BFGS, let's set up a function that optimizes the loss for a single starting point.
Step32: Caveats
There are some fundamental differences between TF and JAX, some TFP behaviors will be different between the two substrates and not all functionality is supported. For example,
TFP on JAX does not support anything like tf.Variable since nothing like it exists in JAX. This also means utilities like tfp.util.TransformedVariable are not supported either.
tfp.layers is not supported in the backend yet, due to its dependence on Keras and tf.Variables.
tfp.math.minimize does not work in TFP on JAX because of its dependence on tf.Variable.
With TFP on JAX, tensor shapes are always concrete integer values and are never unknown/dynamic as in TFP on TF.
Pseudorandomness is handled differently in TF and JAX (see appendix).
Libraries in tfp.experimental are not guaranteed to exist in the JAX substrate.
Dtype promotion rules are different between TF and JAX. TFP on JAX tries to respect TF's dtype semantics internally, for consistency.
Bijectors have not yet been registered as JAX pytrees.
To see the complete list of what is supported in TFP on JAX, please refer to the API documentation.
Conclusion
We've ported a lot of TFP's features to JAX and are excited to see what everyone will build. Some functionality is not yet supported; if we've missed something important to you (or if you find a bug!) please reach out to us -- you can email tfprobability@tensorflow.org or file an issue on our Github repo.
Appendix
Step33: Random functions in JAX consume a key to deterministically produce a random variate, meaning they should not be used again. For example, we can use key to sample a normally distributed value, but we should not use key again elsewhere. Furthermore, passing the same value into random.normal will produce the same value.
Step34: So how do we ever draw multiple samples from a single key? The answer is key splitting. The basic idea is that we can split a PRNGKey into multiple, and each of the new keys can be treated as an independent source of randomness.
Step35: Key splitting is deterministic but is chaotic, so each new key can now be used to draw a distinct random sample. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip uninstall tensorflow -y -q
Explanation: TensorFlow Probability on JAX
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/TensorFlow_Probability_on_JAX"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_on_JAX.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_on_JAX.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_on_JAX.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorFlow Probability (TFP) is a library for probabilistic reasoning and statistical analysis that now also works on JAX! For those not familiar, JAX is a library for accelerated numerical computing based on composable function transformations.
TFP on JAX supports a lot of the most useful functionality of regular TFP while preserving the abstractions and APIs that many TFP users are now comfortable with.
Setup
TFP on JAX does not depend on TensorFlow; let's uninstall TensorFlow from this Colab entirely.
End of explanation
!pip install -Uq tfp-nightly[jax] > /dev/null
Explanation: We can install TFP on JAX with the latest nightly builds of TFP.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn import datasets
sns.set(style='white')
Explanation: Let's import some useful Python libraries.
End of explanation
import jax.numpy as jnp
from jax import grad
from jax import jit
from jax import random
from jax import value_and_grad
from jax import vmap
Explanation: Let's also import some basic JAX functionality.
End of explanation
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
tfpk = tfp.math.psd_kernels
Explanation: Importing TFP on JAX
To use TFP on JAX, simply import the jax "substrate" and use it as you usually would tfp:
End of explanation
iris = datasets.load_iris()
features, labels = iris['data'], iris['target']
num_features = features.shape[-1]
num_classes = len(iris.target_names)
Explanation: Demo: Bayesian logistic regression
To demonstrate what we can do with the JAX backend, we'll implement Bayesian logistic regression applied to the classic Iris dataset.
First, let's import the Iris dataset and extract some metadata.
End of explanation
Root = tfd.JointDistributionCoroutine.Root
def model():
w = yield Root(tfd.Sample(tfd.Normal(0., 1.),
sample_shape=(num_features, num_classes)))
b = yield Root(
tfd.Sample(tfd.Normal(0., 1.), sample_shape=(num_classes,)))
logits = jnp.dot(features, w) + b
yield tfd.Independent(tfd.Categorical(logits=logits),
reinterpreted_batch_ndims=1)
dist = tfd.JointDistributionCoroutine(model)
def target_log_prob(*params):
return dist.log_prob(params + (labels,))
Explanation: We can define the model using tfd.JointDistributionCoroutine. We'll put standard normal priors on both the weights and the bias term then write a target_log_prob function that pins the sampled labels to the data.
End of explanation
init_key, sample_key = random.split(random.PRNGKey(0))
init_params = tuple(dist.sample(seed=init_key)[:-1])
@jit
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-3)
return tfp.mcmc.sample_chain(500,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
num_burnin_steps=500,
seed=key)
states, log_probs = run_chain(sample_key, init_params)
plt.figure()
plt.plot(log_probs)
plt.ylabel('Target Log Prob')
plt.xlabel('Iterations of NUTS')
plt.show()
Explanation: We sample from dist to produce an initial state for MCMC. We can then define a function that takes in a random key and an initial state, and produces 500 samples from a No-U-Turn-Sampler (NUTS). Note that we can use JAX transformations like jit to compile our NUTS sampler using XLA.
End of explanation
def classifier_probs(params):
dists, _ = dist.sample_distributions(seed=random.PRNGKey(0),
value=params + (None,))
return dists[-1].distribution.probs_parameter()
Explanation: Let's use our samples to perform Bayesian model averaging (BMA) by averaging the predicted probabilies of each set of weights.
First let's write a function that for a given set of parameters will produce the probabilities over each class. We can use dist.sample_distributions to obtain the final distribution in the model.
End of explanation
all_probs = jit(vmap(classifier_probs))(states)
print('Average accuracy:', jnp.mean(all_probs.argmax(axis=-1) == labels))
print('BMA accuracy:', jnp.mean(all_probs.mean(axis=0).argmax(axis=-1) == labels))
Explanation: We can vmap(classifier_probs) over the set of samples to get the predicted class probabilities for each of our samples. We then compute the average accuracy across each sample, and the accuracy from Bayesian model averaging.
End of explanation
dist = tfd.Normal(0., 1.)
print(dist.log_prob(0.))
Explanation: Looks like BMA reduces our error rate by almost a third!
Fundamentals
TFP on JAX has an identical API to TF where instead of accepting TF objects like tf.Tensors it accepts the JAX analogue. For example, wherever a tf.Tensor was previously used as input, the API now expects a JAX DeviceArray. Instead of returning a tf.Tensor, TFP methods will return DeviceArrays. TFP on JAX also works with nested structures of JAX objects, like a list or dictionary of DeviceArrays.
Distributions
Most of TFP's distributions are supported in JAX with very similar semantics to their TF counterparts. They are also registered as JAX Pytrees, so they can be inputs and outputs of JAX-transformed functions.
Basic distributions
The log_prob method for distributions works the same.
End of explanation
tfd.Normal(0., 1.).sample(seed=random.PRNGKey(0))
Explanation: Sampling from a distribution requires explicitly passing in a PRNGKey (or list of integers) as the seed keyword argument. Failing to explicitly pass in a seed will throw an error.
End of explanation
dist = tfd.MultivariateNormalDiag(
loc=jnp.zeros(5),
scale_diag=jnp.ones(5)
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
Explanation: The shape semantics for distributions remain the same in JAX, where distributions will each have an event_shape and a batch_shape and drawing many samples will add additional sample_shape dimensions.
For example, a tfd.MultivariateNormalDiag with vector parameters will have a vector event shape and empty batch shape.
End of explanation
dist = tfd.Normal(
loc=jnp.ones(5),
scale=jnp.ones(5),
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
Explanation: On the other hand, a tfd.Normal parameterized with vectors will have a scalar event shape and vector batch shape.
End of explanation
dist = tfd.Normal(jnp.zeros(5), jnp.ones(5))
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
dist = tfd.Independent(tfd.Normal(jnp.zeros(5), jnp.ones(5)), 1)
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
Explanation: The semantics of taking log_prob of samples works the same in JAX too.
End of explanation
sns.distplot(tfd.Normal(0., 1.).sample(1000, seed=random.PRNGKey(0)))
plt.show()
Explanation: Because JAX DeviceArrays are compatible with libraries like NumPy and Matplotlib, we can feed samples directly into a plotting function.
End of explanation
sns.distplot(jit(vmap(lambda key: tfd.Normal(0., 1.).sample(seed=key)))(
random.split(random.PRNGKey(0), 2000)))
plt.show()
x = jnp.linspace(-5., 5., 100)
plt.plot(x, jit(vmap(grad(tfd.Normal(0., 1.).prob)))(x))
plt.show()
Explanation: Distribution methods are compatible with JAX transformations.
End of explanation
@jit
def random_distribution(key):
loc_key, scale_key = random.split(key)
loc, log_scale = random.normal(loc_key), random.normal(scale_key)
return tfd.Normal(loc, jnp.exp(log_scale))
random_dist = random_distribution(random.PRNGKey(0))
print(random_dist.mean(), random_dist.variance())
Explanation: Because TFP distributions are registered as JAX pytree nodes, we can write functions with distributions as inputs or outputs and transform them using jit, but they are not yet supported as arguments to vmap-ed functions.
End of explanation
dist = tfd.TransformedDistribution(
tfd.Normal(0., 1.),
tfb.Sigmoid()
)
sns.distplot(dist.sample(1000, seed=random.PRNGKey(0)))
plt.show()
Explanation: Transformed distributions
Transformed distributions i.e. distributions whose samples are passed through a Bijector also work out of the box (bijectors work too! see below).
End of explanation
dist = tfd.JointDistributionSequential([
tfd.Normal(0., 1.),
lambda x: tfd.Normal(x, 1e-1)
])
plt.scatter(*dist.sample(1000, seed=random.PRNGKey(0)), alpha=0.5)
plt.show()
joint = tfd.JointDistributionNamed(dict(
e= tfd.Exponential(rate=1.),
n= tfd.Normal(loc=0., scale=2.),
m=lambda n, e: tfd.Normal(loc=n, scale=e),
x=lambda m: tfd.Sample(tfd.Bernoulli(logits=m), 12),
))
joint.sample(seed=random.PRNGKey(0))
Root = tfd.JointDistributionCoroutine.Root
def model():
e = yield Root(tfd.Exponential(rate=1.))
n = yield Root(tfd.Normal(loc=0, scale=2.))
m = yield tfd.Normal(loc=n, scale=e)
x = yield tfd.Sample(tfd.Bernoulli(logits=m), 12)
joint = tfd.JointDistributionCoroutine(model)
joint.sample(seed=random.PRNGKey(0))
Explanation: Joint distributions
TFP offers JointDistributions to enable combining component distributions into a single distribution over multiple random variables. Currently, TFP offers three core variants (JointDistributionSequential, JointDistributionNamed, and JointDistributionCoroutine) all of which are supported in JAX. The AutoBatched variants are also all supported.
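The AutoBatched variants follow the same pattern; a minimal sketch mirroring the coroutine model above (note that Root wrappers are not needed there):
def model_auto():
  e = yield tfd.Exponential(rate=1., name='e')
  n = yield tfd.Normal(loc=0., scale=2., name='n')
  m = yield tfd.Normal(loc=n, scale=e, name='m')
  x = yield tfd.Sample(tfd.Bernoulli(logits=m), 12, name='x')
joint_auto = tfd.JointDistributionCoroutineAutoBatched(model_auto)
joint_auto.sample(seed=random.PRNGKey(0))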
End of explanation
k1, k2, k3 = random.split(random.PRNGKey(0), 3)
observation_noise_variance = 0.01
f = lambda x: jnp.sin(10*x[..., 0]) * jnp.exp(-x[..., 0]**2)
observation_index_points = random.uniform(
k1, [50], minval=-1.,maxval= 1.)[..., jnp.newaxis]
observations = f(observation_index_points) + tfd.Normal(
loc=0., scale=jnp.sqrt(observation_noise_variance)).sample(seed=k2)
index_points = jnp.linspace(-1., 1., 100)[..., jnp.newaxis]
kernel = tfpk.ExponentiatedQuadratic(length_scale=0.1)
gprm = tfd.GaussianProcessRegressionModel(
kernel=kernel,
index_points=index_points,
observation_index_points=observation_index_points,
observations=observations,
observation_noise_variance=observation_noise_variance)
samples = gprm.sample(10, seed=k3)
for i in range(10):
plt.plot(index_points, samples[i], alpha=0.5)
plt.plot(observation_index_points, observations, marker='o', linestyle='')
plt.show()
Explanation: Other distributions
Gaussian processes also work in JAX mode!
End of explanation
initial_distribution = tfd.Categorical(probs=[0.8, 0.2])
transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
[0.2, 0.8]])
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])
model = tfd.HiddenMarkovModel(
initial_distribution=initial_distribution,
transition_distribution=transition_distribution,
observation_distribution=observation_distribution,
num_steps=7)
print(model.mean())
print(model.log_prob(jnp.zeros(7)))
print(model.sample(seed=random.PRNGKey(0)))
Explanation: Hidden Markov models are also supported.
End of explanation
tfb.Exp().inverse(1.)
bij = tfb.Shift(1.)(tfb.Scale(3.))
print(bij.forward(jnp.ones(5)))
print(bij.inverse(jnp.ones(5)))
b = tfb.FillScaleTriL(diag_bijector=tfb.Exp(), diag_shift=None)
print(b.forward(x=[0., 0., 0.]))
print(b.inverse(y=[[1., 0], [.5, 2]]))
b = tfb.Chain([tfb.Exp(), tfb.Softplus()])
# or:
# b = tfb.Exp()(tfb.Softplus())
print(b.forward(-jnp.ones(5)))
Explanation: A few distributions like PixelCNN are not supported yet due to strict dependencies on TensorFlow or XLA incompatibilities.
Bijectors
Most of TFP's bijectors are supported in JAX today!
End of explanation
jit(vmap(tfb.Exp().inverse))(jnp.arange(4.))
x = jnp.linspace(0., 1., 100)
plt.plot(x, jit(grad(lambda x: vmap(tfb.Sigmoid().inverse)(x).sum()))(x))
plt.show()
Explanation: Bijectors are compatible with JAX transformations like jit, grad and vmap.
End of explanation
target_log_prob = tfd.MultivariateNormalDiag(jnp.zeros(2), jnp.ones(2)).log_prob
Explanation: Some bijectors, like RealNVP and FFJORD are not yet supported.
MCMC
We've ported tfp.mcmc to JAX as well, so we can run algorithms like Hamiltonian Monte Carlo (HMC) and the No-U-Turn-Sampler (NUTS) in JAX.
End of explanation
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-1)
return tfp.mcmc.sample_chain(1000,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
seed=key)
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros(2))
plt.figure()
plt.scatter(*states.T, alpha=0.5)
plt.figure()
plt.plot(log_probs)
plt.show()
Explanation: Unlike TFP on TF, we are required to pass a PRNGKey into sample_chain using the seed keyword argument.
End of explanation
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros([10, 2]))
plt.figure()
for i in range(10):
plt.scatter(*states[:, i].T, alpha=0.5)
plt.figure()
for i in range(10):
plt.plot(log_probs[:, i], alpha=0.5)
plt.show()
Explanation: To run multiple chains, we can either pass a batch of states into sample_chain or use vmap (though we have not yet explored performance differences between the two approaches).
End of explanation
minimum = jnp.array([1.0, 1.0]) # The center of the quadratic bowl.
scales = jnp.array([2.0, 3.0]) # The scales along the two axes.
# The objective function and the gradient.
def quadratic_loss(x):
return jnp.sum(scales * jnp.square(x - minimum))
start = jnp.array([0.6, 0.8]) # Starting point for the search.
Explanation: Optimizers
TFP on JAX supports some important optimizers like BFGS and L-BFGS. Let's set up a simple scaled quadratic loss function.
End of explanation
optim_results = tfp.optimizer.bfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
Explanation: BFGS can find the minimum of this loss.
End of explanation
optim_results = tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
Explanation: So can L-BFGS.
End of explanation
def optimize_single(start):
return tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
all_results = jit(vmap(optimize_single))(
random.normal(random.PRNGKey(0), (10, 2)))
assert all(all_results.converged)
for i in range(10):
np.testing.assert_allclose(optim_results.position[i], minimum)
print("Function evaluations: %s" % all_results.num_objective_evaluations)
Explanation: To vmap L-BFGS, let's set up a function that optimizes the loss for a single starting point.
End of explanation
key = random.PRNGKey(0) # Creates a key with value [0, 0]
print(key)
Explanation: Caveats
There are some fundamental differences between TF and JAX, so some TFP behaviors will differ between the two substrates and not all functionality is supported. For example,
TFP on JAX does not support anything like tf.Variable since nothing like it exists in JAX. This also means utilities like tfp.util.TransformedVariable are not supported either.
tfp.layers is not supported in the backend yet, due to its dependence on Keras and tf.Variables.
tfp.math.minimize does not work in TFP on JAX because of its dependence on tf.Variable.
With TFP on JAX, tensor shapes are always concrete integer values and are never unknown/dynamic as in TFP on TF.
Pseudorandomness is handled differently in TF and JAX (see appendix).
Libraries in tfp.experimental are not guaranteed to exist in the JAX substrate.
Dtype promotion rules are different between TF and JAX. TFP on JAX tries to respect TF's dtype semantics internally, for consistency.
Bijectors have not yet been registered as JAX pytrees.
To see the complete list of what is supported in TFP on JAX, please refer to the API documentation.
Conclusion
We've ported a lot of TFP's features to JAX and are excited to see what everyone will build. Some functionality is not yet supported; if we've missed something important to you (or if you find a bug!) please reach out to us -- you can email tfprobability@tensorflow.org or file an issue on our Github repo.
Appendix: pseudorandomness in JAX
JAX's pseudorandom number generation (PRNG) model is stateless. Unlike a stateful model, there is no mutable global state that evolves after each random draw. In JAX's model, we start with a PRNG key, which acts like a pair of 32-bit integers. We can construct these keys by using jax.random.PRNGKey.
End of explanation
print(random.normal(key))
Explanation: Random functions in JAX consume a key to deterministically produce a random variate, meaning they should not be used again. For example, we can use key to sample a normally distributed value, but we should not use key again elsewhere. Furthermore, passing the same value into random.normal will produce the same value.
End of explanation
key1, key2 = random.split(key, num=2)
print(key1, key2)
Explanation: So how do we ever draw multiple samples from a single key? The answer is key splitting. The basic idea is that we can split a PRNGKey into multiple, and each of the new keys can be treated as an independent source of randomness.
End of explanation
print(random.normal(key1), random.normal(key2))
Explanation: Key splitting is deterministic but is chaotic, so each new key can now be used to draw a distinct random sample.
End of explanation
<END_TASK> |
139 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to BigQuery ML - Predict Birth Weight
Learning Objectives
Use BigQuery to explore the natality dataset
Create a regression (linear regression) model in BQML
Evaluate the performance of your machine learning model
Make predictions with a trained BQML model
Introduction
In this lab, you will be using the US Centers for Disease Control and Prevention's (CDC) natality data to build a model to predict baby birth weights based on a handful of features known at pregnancy. Because we're predicting a continuous value, this is a regression problem, and for that, we'll use the linear regression model built into BQML.
Step1: Set up the notebook environment
VERY IMPORTANT
Step2: Exploring the Data
This lab will use natality data and training on features to predict the birth weight.
The CDC's Natality data has details on US births from 1969 to 2008 and is available in BigQuery as a public data set. More details
Step3: Define Features
Looking over the data set, there are a few columns of interest that could be leveraged into features for a reasonable prediction of approximate birth weight.
Further, some feature engineering may be accomplished with the BigQuery CAST function -- in BQML, all strings are considered categorical features and all numeric types are considered continuous ones.
The hashmonth is added so that we can repeatably split the data without leakage -- the goal is to have all babies that share a birthday to be either in training set or in test set and not spread between them (otherwise, there would be information leakage when it comes to triplets, etc.)
Step4: Train Model
With the relevant columns chosen to accomplish predictions, it is then possible to create and train the model in BigQuery. First, a dataset will be needed store the model.
Step5: With the demo dataset ready, it is possible to create a linear regression model to train the model.
This will take approximately 5 to 7 minutes to run. Feedback from BigQuery will cease in output cell and the notebook will leave the "busy" state when complete.
Step6: Training Statistics
For all training runs, statistics are captured in the "TRAINING_INFO" table. This table has basic performance statistics for each iteration.
The query below returns the training details.
Step7: Some of these columns are obvious although what do the non-specific ML columns mean (specific to BQML)?
training_run - Will be zero for a newly created model. If the model is re-trained using warm_start, this will increment for each re-training.
iteration - Number of the associated training_run, starting with zero for the first iteration.
duration_ms - Indicates how long the iteration took (in ms).
Next plot the training and evaluation loss to see if the model has an overfit.
Step8: As you can see, the training loss and evaluation loss are essentially identical. There does not appear to be any overfitting.
Make a Prediction with BQML using the Model
With a trained model, it is now possible to make a prediction on the values. The only difference from the second query above is the reference to the model. The data has been limited (LIMIT 100) to reduce amount of data returned.
When the ml.predict function is leveraged, output prediction column name for the model is predicted_<label_column_name>. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
Explanation: Introduction to BigQuery ML - Predict Birth Weight
Learning Objectives
Use BigQuery to explore the natality dataset
Create a regression (linear regression) model in BQML
Evaluate the performance of your machine learning model
Make predictions with a trained BQML model
Introduction
In this lab, you will be using the US Centers for Disease Control and Prevention's (CDC) natality data to build a model to predict baby birth weights based on a handful of features known at pregnancy. Because we're predicting a continuous value, this is a regression problem, and for that, we'll use the linear regression model built into BQML.
End of explanation
PROJECT = '<YOUR PROJECT>' #TODO Replace with your GCP PROJECT
Explanation: Set up the notebook environment
VERY IMPORTANT: In the cell below you must replace the text <YOUR PROJECT> with your GCP project id as provided during the setup of your environment. Please leave any surrounding single quotes in place.
End of explanation
%%bigquery
SELECT
*
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
Explanation: Exploring the Data
This lab will use natality data and training on features to predict the birth weight.
The CDC's Natality data has details on US births from 1969 to 2008 and is available in BigQuery as a public data set. More details: https://bigquery.cloud.google.com/table/publicdata:samples.natality?tab=details
Start by looking at the data since 2000 with useful values, those greater than 0.
Note: "%%bigquery" is a magic which allows quick access to BigQuery from within a notebook.
End of explanation
%%bigquery
SELECT
weight_pounds, -- this is the label; because it is continuous, we need to use regression
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
Explanation: Define Features
Looking over the data set, there are a few columns of interest that could be leveraged into features for a reasonable prediction of approximate birth weight.
Further, some feature engineering may be accomplished with the BigQuery CAST function -- in BQML, all strings are considered categorical features and all numeric types are considered continuous ones.
The hashmonth is added so that we can repeatably split the data without leakage -- the goal is to have all babies that share a birth month end up either in the training set or in the test set, and not be spread between them (otherwise, there would be information leakage when it comes to triplets, etc.)
End of explanation
%%bash
bq --location=US mk -d demo
Explanation: Train Model
With the relevant columns chosen to accomplish predictions, it is then possible to create and train the model in BigQuery. First, a dataset will be needed to store the model.
End of explanation
%%bigquery
CREATE or REPLACE MODEL demo.babyweight_model_asis
OPTIONS
(model_type='linear_reg', labels=['weight_pounds'], optimize_strategy='batch_gradient_descent') AS
WITH natality_data AS (
SELECT
weight_pounds,-- this is the label; because it is continuous, we need to use regression
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
natality_data
WHERE
ABS(MOD(hashmonth, 4)) < 3 -- select 75% of the data as training
Explanation: With the demo dataset ready, it is possible to create and train a linear regression model.
This will take approximately 5 to 7 minutes to run. Feedback from BigQuery will cease in the output cell, and the notebook will leave the "busy" state when complete.
End of explanation
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL demo.babyweight_model_asis);
Explanation: Training Statistics
For all training runs, statistics are captured in the "TRAINING_INFO" table. This table has basic performance statistics for each iteration.
The query below returns the training details.
End of explanation
%%bigquery history
SELECT * FROM ML.TRAINING_INFO(MODEL demo.babyweight_model_asis)
history
plt.plot('iteration', 'loss', data=history,
marker='o', color='orange', linewidth=2)
plt.plot('iteration', 'eval_loss', data=history,
marker='', color='green', linewidth=2, linestyle='dashed')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend();
Explanation: Some of these columns are obvious, but what do the BQML-specific columns mean?
training_run - Will be zero for a newly created model. If the model is re-trained using warm_start, this will increment for each re-training.
iteration - Number of the associated training_run, starting with zero for the first iteration.
duration_ms - Indicates how long the iteration took (in ms).
Next, plot the training and evaluation loss to see whether the model has overfit.
End of explanation
%%bigquery
SELECT
*
FROM
ml.PREDICT(MODEL demo.babyweight_model_asis,
(SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
))
LIMIT 100
Explanation: As you can see, the training loss and evaluation loss are essentially identical. There does not appear to be any overfitting.
Make a Prediction with BQML using the Model
With a trained model, it is now possible to make predictions. The only difference from the second query above is the reference to the model. The data has been limited (LIMIT 100) to reduce the amount of data returned.
When the ml.predict function is leveraged, the output prediction column name for the model is predicted_<label_column_name>.
End of explanation
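As a further sanity check on prediction quality, the held-out 25% of the data (the rows where ABS(MOD(hashmonth, 4)) = 3, which were excluded from training above) can be scored with BigQuery ML's ML.EVALUATE function, which returns error metrics such as mean_absolute_error and r2_score. This is a sketch that assumes the demo.babyweight_model_asis model created earlier in this notebook.
%%bigquery
SELECT
  *
FROM
  ML.EVALUATE(MODEL demo.babyweight_model_asis,
    (SELECT
      weight_pounds,
      CAST(is_male AS STRING) AS is_male,
      mother_age,
      CAST(plurality AS STRING) AS plurality,
      gestation_weeks
    FROM
      publicdata.samples.natality
    WHERE
      year > 2000
      AND gestation_weeks > 0
      AND mother_age > 0
      AND plurality > 0
      AND weight_pounds > 0
      -- same hash as in training, but keeping the 25% that was held out
      AND ABS(MOD(FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))), 4)) = 3
    ))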
<END_TASK> |
140 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
SETUP
Step1: Autosipper
Step2: Manifold
Step3: Micromanager
Step4: Preset
Step5: ACQUISITION
Step6: MM Get info
Step7: Video
Step8: SNAP CV2
Step9: EXIT | <ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
Explanation: SETUP
End of explanation
# config directory must have "__init__.py" file
# from the 'config' directory, import the following classes:
from config import Motor, ASI_Controller, Autosipper
from config import utils as ut
autosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
autosipper.coord_frames
from config import gui
gui.stage_control(autosipper.XY, autosipper.Z)
# add/determine deck info
autosipper.coord_frames.deck.position_table = ut.read_delim_pd('config/position_tables/deck')
# check deck alignment
# CLEAR DECK OF OBSTRUCTIONS!!
autosipper.go_to('deck', ['name'],'align')
# add plate
from config import utils as ut
platemap = ut.generate_position_table((8,8),(9,9),93.5)
platemap
ut.lookup(platemap)
Explanation: Autosipper
End of explanation
from config import Manifold
manifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512)
manifold.valvemap[manifold.valvemap.name>0]
def valve_states():
tmp = []
for i in [2,0,14,8]:
status = 'x'
if manifold.read_valve(i):
status = 'o'
tmp.append([status, manifold.valvemap.name.iloc[i]])
return pd.DataFrame(tmp)
tmp = []
for i in range(16):
status = 'x'
if manifold.read_valve(i):
status = 'o'
name = manifold.valvemap.name.iloc[i]
tmp.append([status, name])
pd.DataFrame(tmp).replace(np.nan, '')
name = 'inlet_in'
v = manifold.valvemap['valve'][manifold.valvemap.name==name]
v=14
manifold.depressurize(v)
manifold.pressurize(v)
manifold.exit()
Explanation: Manifold
End of explanation
# !!!! Also must have MM folder on system PATH
# mm_version = 'C:\Micro-Manager-1.4'
# cfg = 'C:\Micro-Manager-1.4\SetupNumber2_05102016.cfg'
mm_version = 'C:\Program Files\Micro-Manager-2.0beta'
cfg = 'C:\Program Files\Micro-Manager-2.0beta\Setup2_20170413.cfg'
import sys
sys.path.insert(0, mm_version) # make it so python can find MMCorePy
import MMCorePy
from PIL import Image
core = MMCorePy.CMMCore()
core.loadSystemConfiguration(cfg)
core.setProperty("Spectra", "White_Enable", "1")
core.waitForDevice("Spectra")
core.setProperty("Cam Andor_Zyla4.2", "Sensitivity/DynamicRange", "16-bit (low noise & high well capacity)") # NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)
core.setProperty("Spectra", "White_Enable", "0")
Explanation: Micromanager
End of explanation
log = []
autosipper.Z.move(93.5)
manifold.depressurize(2)
manifold.depressurize(0)
log.append([time.ctime(time.time()), 'open inlet_in, inlet_out'])
valve_states()
text = 'fluorescence observed'
log.append([time.ctime(time.time()), text])
text = 'CLOSE inlet_out'
manifold.pressurize(0)
log.append([time.ctime(time.time()), text])
text = 'OPEN chip_in, chip_out'
manifold.depressurize(14)
manifold.depressurize(8)
log.append([time.ctime(time.time()), text])
valve_states()
text = 'fill'
log.append([time.ctime(time.time()), text])
manifold.pressurize(8)
#closed all
autosipper.Z.move(93.5)
manifold.depressurize(2)
manifold.depressurize(0)
log.append([time.ctime(time.time()), 'open inlet_in, inlet_out'])
valve_states()
text = 'fluorescence removed'
log.append([time.ctime(time.time()), text])
text = 'CLOSE inlet_out'
manifold.pressurize(0)
log.append([time.ctime(time.time()), text])
text = 'OPEN chip_in, chip_out'
manifold.depressurize(14)
manifold.depressurize(8)
log.append([time.ctime(time.time()), text])
valve_states()
text = 'flush'
log.append([time.ctime(time.time()), text])
manifold.pressurize(8)
for i in [2,0,14,8]:
manifold.pressurize(i)
Explanation: Preset: 1_PBP
ConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP
Preset: 2_BF
ConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF
Preset: 3_DAPI
ConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI
Preset: 4_eGFP
ConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP
Preset: 5_Cy5
ConfigGroup,Channel,5_Cy5,TIFilterBlock1,Label,5-Cy5
Preset: 6_AttoPhos
ConfigGroup,Channel,6_AttoPhos,TIFilterBlock1,Label,6-AttoPhos
TEST
4.5 psi, 25 psi valves
End of explanation
log
core.setConfig('Channel','2_BF')
core.setProperty(core.getCameraDevice(), "Exposure", 20)
core.snapImage()
img = core.getImage()
plt.imshow(img,cmap='gray')
image = Image.fromarray(img)
# image.save('TESTIMAGE.tif')
position_list = ut.load_mm_positionlist("C:/Users/fordycelab/Desktop/D1_cjm.pos")
position_list
def acquire():
for i in xrange(len(position_list)):
si = str(i)
x,y = position_list[['x','y']].iloc[i]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
logadd(log, 'moved '+si)
core.snapImage()
# core.waitForDevice(core.getCameraDevice())
logadd(log, 'snapped '+si)
img = core.getImage()
logadd(log, 'got image '+si)
image = Image.fromarray(img)
image.save('images/images_{}.tif'.format(i))
logadd(log, 'saved image '+si)
x,y = position_list[['x','y']].iloc[0]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
logadd(log, 'moved '+ str(0))
def logadd(log,st):
log.append([time.ctime(time.time()), st])
print log[-1]
# Auto
core.setAutoShutter(True) # default
core.snapImage()
# Manual
core.setAutoShutter(False) # disable auto shutter
core.setProperty("Shutter", "State", "1")
core.waitForDevice("Shutter")
core.snapImage()
core.setProperty("Shutter", "State", "0")
Explanation: ACQUISITION
End of explanation
core.getFocusDevice()
core.getCameraDevice()
core.XYStageDevice()
core.getDevicePropertyNames(core.getCameraDevice())
Explanation: MM Get info
End of explanation
import cv2
from IPython import display
import numpy as np
from ipywidgets import widgets
import time
# core.initializeCircularBuffer()
# core.setCircularBufferMemoryFootprint(4096) # MiB
# video with button (CV2)
live = widgets.Button(description='Live')
close = widgets.Button(description='Close')
display.display(widgets.HBox([live, close]))
def on_live_clicked(b):
display.clear_output(wait=True)
print 'LIVE'
core.startContinuousSequenceAcquisition(1000) # time overridden by exposure
time.sleep(.2)
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Video', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
cv2.resizeWindow('Video', 500,500)
img = np.zeros((500,500))
print 'To stop, click window + press ESC'
while(1):
time.sleep(.015)
if core.getRemainingImageCount() > 0:
img = core.getLastImage()
cv2.imshow('Video',img)
k = cv2.waitKey(30)
if k==27: # ESC key; may need 255 mask?
break
print 'STOPPED'
core.stopSequenceAcquisition()
def on_close_clicked(b):
if core.isSequenceRunning():
core.stopSequenceAcquisition()
cv2.destroyWindow('Video')
live.on_click(on_live_clicked)
close.on_click(on_close_clicked)
# video with button (CV2)
# serial snap image
live = widgets.Button(description='Live')
close = widgets.Button(description='Close')
display.display(widgets.HBox([live, close]))
def on_live_clicked(b):
display.clear_output(wait=True)
print 'LIVE'
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Video', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
cv2.resizeWindow('Video', 500,500)
img = np.zeros((500,500))
print 'To stop, click window + press ESC'
while(1):
core.snapImage()
time.sleep(.05)
img = core.getImage()
cv2.imshow('Video',img)
k = cv2.waitKey(30)
if k==27: # ESC key; may need 255 mask?
break
print 'STOPPED'
def on_close_clicked(b):
if core.isSequenceRunning():
core.stopSequenceAcquisition()
cv2.destroyWindow('Video')
live.on_click(on_live_clicked)
close.on_click(on_close_clicked)
cv2.destroyAllWindows()
Explanation: Video
End of explanation
# snap (CV2)
snap = widgets.Button(description='Snap')
close2 = widgets.Button(description='Close')
display.display(widgets.HBox([snap, close2]))
def on_snap_clicked(b):
cv2.destroyWindow('Snap')
cv2.namedWindow('Snap',cv2.WINDOW_NORMAL)
cv2.resizeWindow('Snap', 500,500)
cv2.setWindowProperty('Snap', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
core.snapImage()
time.sleep(.1)
img = core.getImage()
cv2.imshow('Snap',img)
k = cv2.waitKey(30)
def on_close2_clicked(b):
cv2.destroyWindow('Snap')
snap.on_click(on_snap_clicked)
close2.on_click(on_close2_clicked)
Explanation: SNAP CV2
End of explanation
autosipper.exit()
manifold.exit()
core.unloadAllDevices()
core.reset()
print 'closed'
Explanation: EXIT
End of explanation
<END_TASK> |
141 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
These examples are tests for scc_info on alternating automata.
Step1: universal edges are handled as if they were many distinct existential edges from the point of view of scc_info, so the acceptance / rejection status is not always meaningful.
Step2: A corner case for the dot printer | <ASSISTANT_TASK:>
Python Code:
from IPython.display import display
import spot
spot.setup(show_default='.bas')
spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 1 Inf(0)
--BODY--
State: 0
[0] 0
[!0] 1
State: 1
[1] 1 {0}
--END--
''')
Explanation: These examples are tests for scc_info on alternating automata.
End of explanation
spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0&1 {0}
State: 1
[1] 1
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0 {0}
[!0] 1
State: 1
[1] 1&0
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0
[!0] 1 {0}
State: 1
[1] 1&0
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0 {0}
[!0] 1
State: 1
[1] 1&0 {0}
--END--
''')
Explanation: universal edges are handled as if they were many distinct existential edges from the point of view of scc_info, so the acceptance / rejection status is not always meaningful.
End of explanation
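To see what scc_info itself reports for automata like these, it can also be queried directly from Python. The sketch below reuses the first automaton of this notebook; the method names (scc_count, is_accepting_scc, is_rejecting_scc) are assumed to follow Spot's Python bindings and should be treated as illustrative.
aut = spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 1 Inf(0)
--BODY--
State: 0
[0] 0
[!0] 1
State: 1
[1] 1 {0}
--END--
''')
si = spot.scc_info(aut)
for i in range(si.scc_count()):
    # with universal edges these flags are not always meaningful (see above)
    print(i, si.is_accepting_scc(i), si.is_rejecting_scc(i))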
for a in spot.automata('''
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 1&2
State: 1
[1] 1&2 {0}
State: 2
[1] 2
--END--
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 1&2
State: 1
[1] 1 {0}
State: 2
[1] 2
--END--
'''):
display(a)
a = spot.automaton('''
HOA: v1
States: 3
Start: 0&2
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
spot.highlight.edges: 2 2
--BODY--
State: 0
[0] 1&2
State: 1
[1] 1&2 {0}
State: 2
[1] 1&2
--END--
''')
display(a, a.show('.basy'))
Explanation: A corner case for the dot printer
End of explanation
<END_TASK> |
142 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Course 12-752
Step1: Section 2
Step2: Section 3
Step3: Display variable power
Step4: Use parser function from dateutil module to convert the timestamps in power from a string to a datetime - Timestamp object
Step5: Select 15 minute time interval data and re-store in power
Step6: Use group by function of Pandas dataframe to group the power data based on
Datestamp (P_Total)
Datestamp & Panel Name (P_Panel)
Step7: Use .sum() attribute of group object to sum up the Average power based on the group variables defined above
Create grouped data for first & second set of group variables and call it Lighting_Total and Lighting_Panel respectively
Step8: Plot the graph of Total power over time
Step9: Section 4
Step10: Find the Design matrix for the Total lighting consumption
Step11: Define a function 'Beta_hat' which will take a design matrix and a power vector as arguments and outputs the Beta hat values as defined by the function inverse(Xt * X) * Xt * Y where Xt is the transpose of the design matrix and Y is the power vector
Step12: Finding Beta hat for Total lighting consumption and calculating predicted power. Here the data set used for training and testing the regression model is the same.
Step13: Defining function Cal_Rsqr which takes arguments Actual power and Predicted power and then calculates & returns the R squared value
Step14: Call function Cal_Rsqr for the total lighting consumption
Step15: Section 5
Step16: Removing the first 4 days of data to allow the train and test datasets to start at the same 15 minute time interval of the week
Step17: Generating design matrices for train and test datasets by calling function DesignMatrix
Step18: Calculating Beta hat for train data set
Step19: Estimate predicted power using beta hat and test matrix. Calculate R square value
Step20: Plot graph of Actual power versus Predicted power with a common time axis
Step21: Section 6 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import scipy.spatial.distance as dist
%matplotlib inline
Explanation: Course 12-752: Term Project
Project Title: Comparison of the accuracy of linear regression model based on the granularity of data: Scaife Hall
Team Members
Rushil Desai (Andrew ID: rushid)
Varun Deshpande (Andrew ID: varund)
Sakshi Mishra (Andrew ID: sakshimi)
Section 1: Importing general modules
End of explanation
import glob
allFiles = glob.glob('4months' + "/*.csv")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
df = pd.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pd.concat(list_)
frame.reset_index(drop=True)
frame = frame.iloc[:,(5,4,2,11)]
Explanation: Section 2: Loading the Power Data
Imported the glob module to access all CSV data files in the folder. Load one file at a time and concatenate to a dataframe variable 'frame'
End of explanation
scheck = lambda d: 'Lights' in d or 'Light' in d or 'Lts' in d
S_ind = list(map(scheck,frame.BranchName))
power = frame[S_ind].reset_index(drop=True)
Explanation: Section 3: Conditioning, subsetting and grouping power data
Select only those rows within 'frame' which have lights in the Branch Name. Store these rows in a new dataframe called 'power'
End of explanation
power
Explanation: Display variable power: it has 4 columns - dates, panel names, branch names, and average wattage
End of explanation
from dateutil import parser
power.DateStamp = power.DateStamp.apply(parser.parse)
Explanation: Use parser function from dateutil module to convert the timestamps in power from a string to a datetime - Timestamp object
End of explanation
intervals = lambda d: (d.time().minute)%15 == 0
indexes_15 = list(map(intervals,power.DateStamp))
power = power[indexes_15].reset_index(drop=True)
Explanation: Select 15 minute time interval data and re-store in power
End of explanation
P_Total = power.groupby(['DateStamp'], as_index=False)
P_Panel = power.groupby(['DateStamp','PanelName'],as_index=False)
Explanation: Use group by function of Pandas dataframe to group the power data based on
Datestamp (P_Total)
Datestamp & Panel Name (P_Panel)
End of explanation
Lighting_Panel = P_Panel.sum()
Lighting_Panel.columns = ['Timestamp','PanelName','AvgPower']
Lighting_Panel
Lighting_Total = P_Total.sum()
Lighting_Total.columns = ['Timestamp','AvgPower']
Lighting_Total
Explanation: Use .sum() attribute of group object to sum up the Average power based on the group variables defined above
Create grouped data for first & second set of group variables and call it Lighting_Total and Lighting_Panel respectively
End of explanation
plt.figure(figsize=(10,10))
plt.plot(Lighting_Total.Timestamp,Lighting_Total.AvgPower)
plt.xlabel('Time stamp (days)')
plt.ylabel('Power [ in Watts]')
Explanation: Plot the graph of Total power over time
End of explanation
import math
def DesignMatrix(timestamps):
tslen = len(timestamps)
ind = 672
num = math.ceil(tslen/ind)
sing = np.identity(ind)
Dmat = np.tile(sing,(num,1))[0:tslen,:]
return Dmat
Explanation: Section 4: Linear regression and analysis
Define function DesignMatrix, which takes only the timestamps as an argument and outputs a stack of identity matrices with the number of rows equal to the number of timestamps and the number of columns equal to the number of 15 minute intervals in a week
End of explanation
DMX = DesignMatrix(Lighting_Total.Timestamp)
Explanation: Find the Design matrix for the Total lighting consumption
End of explanation
def beta_hat(X,Y):
B = np.dot(np.dot(np.linalg.inv(np.dot(X.T,X)),X.T),Y)
return B
Explanation: Define a function 'Beta_hat' which will take a design matrix and a power vector as arguments and outputs the Beta hat values as defined by the function inverse(Xt * X) * Xt * Y where Xt is the transpose of the design matrix and Y is the power vector
End of explanation
Act_power = Lighting_Total.AvgPower
B_Lighting = beta_hat(DMX,Act_power)
Pred_power = np.dot(DMX,B_Lighting)
Explanation: Finding Beta hat for Total lighting consumption and calculating predicted power. Here the data set used for training and testing the regression model is the same.
End of explanation
def Cal_Rsqr(Actual_power,Predict_power):
Power_mean = np.mean(Actual_power)
Numer = Actual_power - Predict_power
Denom = Actual_power - Power_mean
R_sqr = 1- (np.dot(Numer.T,Numer)/np.dot(Denom.T,Denom))
return R_sqr
Explanation: Defining function Cal_Rsqr which takes arguments Actual power and Predicted power and then calculates & returns the R squared value
End of explanation
Cal_Rsqr(Act_power,Pred_power)
Explanation: Call function Cal_Rsqr for the total lighting consumption
End of explanation
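As a sanity check on the hand-rolled implementation, the same R squared value can be cross-checked against scikit-learn's r2_score. This is a sketch and assumes scikit-learn is installed; it is not used anywhere else in this project.
from sklearn.metrics import r2_score

# should agree with Cal_Rsqr(Act_power, Pred_power) above
print(r2_score(Act_power, Pred_power))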
W_check = lambda d : d.isocalendar()[1]%2 == 1
W_indices = list(map(W_check, Lighting_Total.Timestamp))
Train_Lighting = Lighting_Total[W_indices]
Test_Lighting = Lighting_Total[np.invert(W_indices)]
Explanation: Section 5: Predicting for Lighting consumption for Scaife building using train and test datasets
Define separate train and test datasets. Here we have used alternate weeks for training and the remaining alternate weeks for testing
Used the isocalendar function to extract the week number in order to segregate the data
End of explanation
Train_Lighting = Train_Lighting.iloc[384:,:]
Explanation: Removing the first 4 days of data to allow the train and test datasets to start at the same 15 minute time interval of the week
End of explanation
TrainDMX = DesignMatrix(Train_Lighting.Timestamp)
TestDMX = DesignMatrix(Test_Lighting.Timestamp)
Explanation: Generating design matrices for train and test datasets by calling function DesignMatrix
End of explanation
LBs = beta_hat(TrainDMX, Train_Lighting.AvgPower)
Explanation: Calculating Beta hat for train data set
End of explanation
Lighting_predpower = np.dot(TestDMX,LBs)
Lighting_actpower = Test_Lighting.AvgPower
Cal_Rsqr(Lighting_actpower,Lighting_predpower)
Explanation: Estimate predicted power using beta hat and test matrix. Calculate R square value
End of explanation
plt.figure(figsize=(15,15))
plt.plot(Test_Lighting.Timestamp,Lighting_actpower,Test_Lighting.Timestamp,Lighting_predpower)
plt.xlabel('Time stamp (days)')
plt.ylabel('Power [ in Watts]')
Explanation: Plot graph of Actual power versus Predicted power with a common time axis
End of explanation
count = 0
for name in Lighting_Panel.PanelName.unique():   # iterate over distinct panel names
Data = Lighting_Panel[Lighting_Panel.PanelName == name]
count = count + 1
W_check = lambda d : d.isocalendar()[1]%2 == 1
W_indices = list(map(W_check, Data.Timestamp))
Train_dat = Data[W_indices].iloc[384:,:]
Test_dat = Data[np.invert(W_indices)]
TrainDMX = DesignMatrix(Train_dat.Timestamp)
TestDMX = DesignMatrix(Test_dat.Timestamp)
LB = beta_hat(TrainDMX,Train_dat.AvgPower)
Lighting_actpower = Train_dat.AvgPower
Lighting_predpower = np.dot(TrainDMX,LB)
R_train_panel = Cal_Rsqr(Lighting_actpower,Lighting_predpower)
print ('R square value for prediction on train data for panel ' + name + ' is ' + str(R_train_panel))
Lighting_actpower = Test_dat.AvgPower
Lighting_predpower = np.dot(TestDMX,LB)
R_test_panel = Cal_Rsqr(Lighting_actpower,Lighting_predpower)
print ('R square value for prediction on test data for panel ' + name + ' is ' + str(R_test_panel))
if (count == 7):
plt.plot(Test_dat.Timestamp,Lighting_actpower,Test_dat.Timestamp,Lighting_predpower)
plt.xlabel('Time stamp (days)')
plt.ylabel('Power [ in Watts]')
break;
Explanation: Section 6: Predicting for Lighting consumption for different panels of Scaife building using train and test datasets
The logic for calculating R square value for each of the different panel consumptions in Scaife building is the same as given above.
A loop has been created to access the summed-up average power of each panel, and the functions above have been called in order to calculate R square for each of the panels.
Provision has been given in the end of the loop to plot predicted and actual power of individual panels.
End of explanation
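To compare granularities side by side, the per-panel test R squared values can also be collected into a single table. This is a sketch that reuses the helpers and the W_check split defined above, and it assumes the same alignment trick (dropping the first 384 rows) works for every panel, as in the loop above.
panel_scores = []
for name in Lighting_Panel.PanelName.unique():
    Data = Lighting_Panel[Lighting_Panel.PanelName == name]
    W_ind = list(map(W_check, Data.Timestamp))
    Train_dat = Data[W_ind].iloc[384:, :]
    Test_dat = Data[np.invert(W_ind)]
    LB = beta_hat(DesignMatrix(Train_dat.Timestamp), Train_dat.AvgPower)
    R_test = Cal_Rsqr(Test_dat.AvgPower, np.dot(DesignMatrix(Test_dat.Timestamp), LB))
    panel_scores.append([name, R_test])

pd.DataFrame(panel_scores, columns=['Panel', 'Test R squared'])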
<END_TASK> |
143 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Spark
IPython Notebook Setup
Python Shell
DataFrames
RDDs
Pair RDDs
Running Spark on a Cluster
Viewing the Spark Application UI
Working with Partitions
Caching RDDs
Checkpointing RDDs
Writing and Running a Spark Application
Configuring Spark Applications
Streaming
Streaming with States
Broadcast Variables
Accumulators
IPython Notebook Setup
The dev-setup repo contains scripts to install Spark and to automate the its integration with IPython Notebook through the pydata.sh script.
You can also follow the instructions provided here to configure IPython Notebook Support for PySpark with Python 2.
To run Python 3 with Spark 1.4+, check out the following posts on Stack Overflow or Reddit.
Python Shell
Start the pyspark shell (REPL)
Step1: View the spark context, the main entry point to the Spark API
Step2: DataFrames
From the following reference
Step3: Create a new DataFrame that contains “young users” only
Step4: Alternatively, using Pandas-like syntax
Step5: Increment everybody’s age by 1
Step6: Count the number of young users by gender
Step7: Join young users with another DataFrame called logs
Step8: Count the number of users in the young DataFrame
Step9: Convert Spark DataFrame to Pandas
Step10: Create a Spark DataFrame from Pandas
Step11: Given the Spark Context, create a SQLContext
Step12: Create a DataFrame based on the content of a file
Step13: Display the content of the DataFrame
Step14: Print the schema
Step15: Select a column
Step16: Create a DataFrame with rows matching a given filter
Step17: Aggregate the results and count
Step18: Convert a RDD to a DataFrame (by inferring the schema)
Step19: Register the DataFrame as a table
Step20: Run a SQL Query on a DataFrame registered as a table
Step21: RDDs
Note
Step22: Count the number of lines in the data
Step23: Return all the elements of the dataset as an array--this is usually more useful after a filter or other operation that returns a sufficiently small subset of the data
Step24: Return the first 10 lines in the data
Step25: Create an RDD with lines matching the given filter
Step26: Chain a series of commands
Step27: Create a new RDD mapping each line to an array of words, taking only the first word of each array
Step28: Output each word in first_words
Step29: Save the first words to a text file
Step30: Pair RDDs
Pair RDDs contain elements that are key-value pairs. Keys and values can be any type.
Given a log file with the following space delimited format
Step31: Show the top 5 users by count, sorted in descending order
Step32: Group IP addresses by user id
Step33: Given a user table with the following csv format
Step34: Inner join the user_actions and user_profile RDDs
Step35: Show the joined table
Step36: Running Spark on a Cluster
Start the standalone cluster's Master and Worker daemons
Step37: Stop the standalone cluster's Master and Worker daemons
Step38: Restart the standalone cluster's Master and Worker daemons
Step39: View the Spark standalone cluster UI
Step40: Start the Spark shell and connect to the cluster
Step41: Confirm you are connected to the correct master
Step42: Viewing the Spark Application UI
From the following reference
Step43: Working with Partitions
From the following reference
Step44: However, this fails for several reasons
Step45: Caching RDDs
Caching an RDD saves the data in memory. Caching is a suggestion to Spark as it is memory dependent.
By default, every RDD operation executes the entire lineage. Caching can boost performance for datasets that are likely to be used by saving this expensive recomputation and is ideal for iterative algorithms or machine learning.
cache() stores data in memory
persist() stores data in MEMORY_ONLY, MEMORY_AND_DISK (spill to disk), and DISK_ONLY
Disk memory is stored on the node, not on HDFS.
Replication is possible by using MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. If a cached partition becomes unavailable, Spark recomputes the partition through the lineage.
Serialization is possible with MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This is more space efficient but less time efficient, as it uses Java serialization by default.
Step46: Checkpointing RDDs
Caching maintains RDD lineage, providing resilience. If the lineage is very long, it is possible to get a stack overflow.
Checkpointing saves the data to HDFS, which provide fault tolerant storage across nodes. HDFS is not as fast as local storage for both reading and writing. Checkpointing is good for long lineages and for very large data sets that might not fit on local storage. Checkpointing removes lineage.
Create a checkpoint and perform an action by calling count() to materialize the checkpoint and save it to the checkpoint file
Step47: Writing and Running a Spark Application
Create a Spark application to count the number of text files
Step48: Submit the script to Spark for processing
Step49: Configuring Spark Applications
Run a Spark app and set the configuration options in the command line
Step50: Configure spark.conf
Step51: Run a Spark app and set the configuration options through spark.conf
Step52: Set the config options programmatically
Step53: Set logging levels located in the following file, or place a copy in your pwd
Step54: Streaming
Start the Spark Shell locally with at least two threads (need a minimum of two threads for streaming, one for receiving, one for processing)
Step55: Create a StreamingContext (similar to SparkContext in core Spark) with a batch duration of 1 second
Step56: Get a DStream from a streaming data source (text from a socket)
Step57: DStreams support regular transformations such as map, flatMap, and filter, and pair transformations such as reduceByKey, groupByKey, and joinByKey.
Apply a DStream operation to each batch of RDDs (count up requests by user id, reduce by key to get the count)
Step58: The transform(function) method creates a new DStream by executing the input function on the RDDs.
Step59: foreachRDD(function) performs a function on each RDD in the DStream (map is like a shortcut not requiring you to get the RDD first before doing an operation)
Step60: Save the DStream result part files with the given folder prefix, the actual folder will be /dir/requests-timestamp0/
Step61: Start the execution of all DStreams
Step62: Wait for all background threads to complete before ending the main thread
Step63: Streaming with States
Enable checkpointing to prevent infinite lineages
Step64: Compute a DStream based on the previous states plus the current state
Step65: Compute a DStream based Sliding window, every 30 seconds, count requests by user over the last 5 minutes
Step66: Collect statistics with the StreamingListener API
Step67: Broadcast Variables
Read in list of items to broadcast from a local file
Step68: Broadcast the target list to all workers
Step69: Filter based on the broadcast list
Step70: Accumulators
Create an accumulator
Step71: Count the number of txt files in the RDD
Step72: Count the number of file types encountered | <ASSISTANT_TASK:>
Python Code:
!pyspark
Explanation: This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Spark
IPython Notebook Setup
Python Shell
DataFrames
RDDs
Pair RDDs
Running Spark on a Cluster
Viewing the Spark Application UI
Working with Partitions
Caching RDDs
Checkpointing RDDs
Writing and Running a Spark Application
Configuring Spark Applications
Streaming
Streaming with States
Broadcast Variables
Accumulators
IPython Notebook Setup
The dev-setup repo contains scripts to install Spark and to automate the its integration with IPython Notebook through the pydata.sh script.
You can also follow the instructions provided here to configure IPython Notebook Support for PySpark with Python 2.
To run Python 3 with Spark 1.4+, check out the following posts on Stack Overflow or Reddit.
Python Shell
Start the pyspark shell (REPL):
End of explanation
sc
Explanation: View the spark context, the main entry point to the Spark API:
End of explanation
users = context.load("s3n://path/to/users.json", "json")
Explanation: DataFrames
From the following reference:
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood.
Create a DataFrame from JSON files on S3:
End of explanation
young = users.filter(users.age<21)
Explanation: Create a new DataFrame that contains “young users” only:
End of explanation
young = users[users.age<21]
Explanation: Alternatively, using Pandas-like syntax:
End of explanation
young.select(young.name, young.age+1)
Explanation: Increment everybody’s age by 1:
End of explanation
young.groupBy("gender").count()
Explanation: Count the number of young users by gender:
End of explanation
young.join(logs, logs.userId == users.userId, "left_outer")
Explanation: Join young users with another DataFrame called logs:
End of explanation
young.registerTempTable("young")
context.sql("SELECT count(*) FROM young")
Explanation: Count the number of users in the young DataFrame:
End of explanation
pandas_df = young.toPandas()
Explanation: Convert Spark DataFrame to Pandas:
End of explanation
spark_df = context.createDataFrame(pandas_df)
Explanation: Create a Spark DataFrame from Pandas:
End of explanation
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
Explanation: Given the Spark Context, create a SQLContext:
End of explanation
df = sqlContext.jsonFile("file:/path/file.json")
Explanation: Create a DataFrame based on the content of a file:
End of explanation
df.show()
Explanation: Display the content of the DataFrame:
End of explanation
df.printSchema()
Explanation: Print the schema:
End of explanation
df.select("column_name")
Explanation: Select a column:
End of explanation
df.filter(df.column_name>10)
Explanation: Create a DataFrame with rows matching a given filter:
End of explanation
df.groupBy("column_name").count()
Explanation: Aggregate the results and count:
End of explanation
df = sqlContext.inferSchema(my_data)
Explanation: Convert a RDD to a DataFrame (by inferring the schema):
End of explanation
df.registerTempTable("dataframe_name")
Explanation: Register the DataFrame as a table:
End of explanation
rdd_from_df = sqlContext.sql("SELECT * FROM dataframe_name")
Explanation: Run a SQL Query on a DataFrame registered as a table:
End of explanation
my_data = sc.textFile("file:/path/*")
Explanation: RDDs
Note: RDDs are included for completeness. In Spark 1.3, DataFrames were introduced which are recommended over RDDs. Check out the DataFrames announcement for more info.
Resilient Distributed Datasets (RDDs) are the fundamental unit of data in Spark. RDDs can be created from a file, from data in memory, or from another RDD. RDDs are immutable.
There are two types of RDD operations:
* Actions: Return values; data is not processed in an RDD until an action is performed
* Transformations: Define a new RDD based on the current one
Create an RDD from the contents of a directory:
End of explanation
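A small illustration of the action/transformation distinction above: a transformation only describes a computation, and nothing is read or processed until an action runs. (Sketch; my_data is the RDD created in the previous cell.)
# This is a transformation: it defines a new RDD but reads no data yet.
line_lengths = my_data.map(lambda line: len(line))

# Only an action such as take() or count() triggers execution of the lineage.
line_lengths.take(5)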
my_data.count()
Explanation: Count the number of lines in the data:
End of explanation
my_data.collect()
Explanation: Return all the elements of the dataset as an array--this is usually more useful after a filter or other operation that returns a sufficiently small subset of the data:
End of explanation
my_data.take(10)
Explanation: Return the first 10 lines in the data:
End of explanation
my_data.filter(lambda line: ".txt" in line)
Explanation: Create an RDD with lines matching the given filter:
End of explanation
sc.textFile("file:/path/file.txt") \
.filter(lambda line: ".txt" in line) \
.count()
Explanation: Chain a series of commands:
End of explanation
first_words = my_data.map(lambda line: line.split()[0])
Explanation: Create a new RDD mapping each line to an array of words, taking only the first word of each array:
End of explanation
for word in first_words.take(10):
print word
Explanation: Output each word in first_words:
End of explanation
first_words.saveAsTextFile("file:/path/file")
Explanation: Save the first words to a text file:
End of explanation
DATE_TIME = 0
USER_ID = 1
IP_ADDRESS = 2
ACTION = 3
log_data = sc.textFile("file:/path/*")
user_actions = log_data \
.map(lambda line: line.split()) \
.map(lambda words: (words[USER_ID], 1)) \
.reduceByKey(lambda count1, count2: count1 + count2)
Explanation: Pair RDDs
Pair RDDs contain elements that are key-value pairs. Keys and values can be any type.
Given a log file with the following space delimited format: [date_time, user_id, ip_address, action], map each request to (user_id, 1):
End of explanation
user_actions.map(lambda pair: (pair[0], pair[1])).sortyByKey(False).take(5)
Explanation: Show the top 5 users by count, sorted in descending order:
End of explanation
user_ips = log_data \
.map(lambda line: line.split()) \
.map(lambda words: (words[IP_ADDRESS],words[USER_ID])) \
.groupByKey()
Explanation: Group IP addresses by user id:
End of explanation
user_data = sc.textFile("file:/path/*")
user_profile = user_data \
.map(lambda line: line.split(',')) \
.map(lambda words: (words[0], words[1:]))
Explanation: Given a user table with the following csv format: [user_id, user_info0, user_info1, ...], map each line to (user_id, [user_info...]):
End of explanation
user_actions_with_profile = user_actions.join(user_profile)
Explanation: Inner join the user_actions and user_profile RDDs:
End of explanation
for (user_id, (count, user_info)) in user_actions_with_profile.take(10):
print user_id, count, user_info
Explanation: Show the joined table:
End of explanation
!sudo service spark-master start
!sudo service spark-worker start
Explanation: Running Spark on a Cluster
Start the standalone cluster's Master and Worker daemons:
End of explanation
!sudo service spark-master stop
!sudo service spark-worker stop
Explanation: Stop the standalone cluster's Master and Worker daemons:
End of explanation
!sudo service spark-master stop
!sudo service spark-worker stop
Explanation: Restart the standalone cluster's Master and Worker daemons:
End of explanation
http://localhost:18080//
Explanation: View the Spark standalone cluster UI:
End of explanation
!MASTER=spark://localhost:7077 pyspark
Explanation: Start the Spark shell and connect to the cluster:
End of explanation
sc.master
Explanation: Confirm you are connected to the correct master:
End of explanation
http://localhost:4040/
Explanation: Viewing the Spark Application UI
From the following reference:
Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:
A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information.
Information about the running executors
You can access this interface by simply opening http://<driver-node>:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc).
Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set spark.eventLog.enabled to true before starting the application. This configures Spark to log Spark events that encode the information displayed in the UI to persisted storage.
End of explanation
val dbConnection = ...
lines.map(... dbConnection.createStatement(...) ...)
dbConnection.close() // Wrong!
Explanation: Working with Partitions
From the following reference:
The Spark map() and flatMap() methods only operate on one input at a time, and provide no means to execute code before or after transforming a batch of values. It looks possible to simply put the setup and cleanup code before and after a call to map() in Spark:
End of explanation
def count_txt(partIter):
    txt_count = 0                     # per-partition counter
    for line in partIter:
        if ".txt" in line: txt_count += 1
    yield (txt_count)

my_data = sc.textFile("file:/path/*") \
    .mapPartitions(count_txt)

# Show the partitioning
print "Data partitions: ", my_data.toDebugString()
my_data.collect()
Explanation: However, this fails for several reasons:
It puts the object dbConnection into the map function’s closure, which requires that it be serializable (for example, by implementing java.io.Serializable). An object like a database connection is generally not serializable.
map() is a transformation, rather than an operation, and is lazily evaluated. The connection can’t be closed immediately here.
Even so, it would only close the connection on the driver, not necessarily freeing resources allocated by serialized copies.
In fact, neither map() nor flatMap() is the closest counterpart to a Mapper in Spark — it’s the important mapPartitions() method. This method does not map just one value to one other value, but rather maps an Iterator of values to an Iterator of other values. It’s like a “bulk map” method. This means that the mapPartitions() function can allocate resources locally at its start, and release them when done mapping many values.
End of explanation
from pyspark import StorageLevel

# Cache RDD to memory
my_data.cache()
# Persist RDD to both memory and disk (if memory is not enough), with replication of 2
my_data.persist(StorageLevel.MEMORY_AND_DISK_2)
# Unpersist RDD, removing it from memory and disk
my_data.unpersist()
# Change the persistence level after unpersist
my_data.persist(StorageLevel.MEMORY_AND_DISK)
Explanation: Caching RDDs
Caching an RDD saves the data in memory. Caching is a suggestion to Spark as it is memory dependent.
By default, every RDD operation executes the entire lineage. Caching can boost performance for datasets that are likely to be used by saving this expensive recomputation and is ideal for iterative algorithms or machine learning.
cache() stores data in memory
persist() stores data in MEMORY_ONLY, MEMORY_AND_DISK (spill to disk), and DISK_ONLY
Disk memory is stored on the node, not on HDFS.
Replication is possible by using MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. If a cached partition becomes unavailable, Spark recomputes the partition through the lineage.
Serialization is possible with MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This is more space efficient but less time efficient, as it uses Java serialization by default.
End of explanation
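To confirm what Spark is actually doing, an RDD's current persistence settings can be inspected directly (sketch; my_data is the RDD persisted above):
print my_data.is_cached            # True while a storage level is assigned
print my_data.getStorageLevel()    # e.g. "Disk Memory Deserialized 1x Replicated"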
# Enable checkpointing by setting the checkpoint directory,
# which will contain all checkpoints for the given data:
sc.setCheckpointDir("checkpoints")
my_data = sc.parallelize([1,2,3,4,5])
# Long loop that may cause a stack overflow
for i in range(1000):
    my_data = my_data.map(lambda myInt: myInt + 1)
if i % 10 == 0:
my_data.checkpoint()
my_data.count()
my_data.collect()
# Display the lineage
for rddstring in my_data.toDebugString().split('\n'):
print rddstring.strip()
Explanation: Checkpointing RDDs
Caching maintains RDD lineage, providing resilience. If the lineage is very long, it is possible to get a stack overflow.
Checkpointing saves the data to HDFS, which provide fault tolerant storage across nodes. HDFS is not as fast as local storage for both reading and writing. Checkpointing is good for long lineages and for very large data sets that might not fit on local storage. Checkpointing removes lineage.
Create a checkpoint and perform an action by calling count() to materialize the checkpoint and save it to the checkpoint file:
End of explanation
import sys
from pyspark import SparkContext
def count_text_files():
    sc = SparkContext()
    logfile = sys.argv[1]
    text_files_count = sc.textFile(logfile) \
        .filter(lambda line: '.txt' in line)
    text_files_count.cache()
    print("Number of text files: ", text_files_count.count())

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print >> sys.stderr, "Usage: App Name <file>"
        exit(-1)
    count_text_files()
Explanation: Writing and Running a Spark Application
Create a Spark application to count the number of text files:
End of explanation
!spark-submit --properties-file dir/myspark.conf script.py data/*
Explanation: Submit the script to Spark for processing:
End of explanation
!spark-submit --master spark//localhost:7077 --name 'App Name' script.py data/*
Explanation: Configuring Spark Applications
Run a Spark app and set the configuration options in the command line:
End of explanation
spark.app.name App Name
spark.ui.port 4141
spark.master spark://localhost:7077
Explanation: Configure spark.conf:
End of explanation
!spark-submit --properties-file spark.conf script.py data/*
Explanation: Run a Spark app and set the configuration options through spark.conf:
End of explanation
sconf = SparkConf() \
.setAppName("Word Count") \
.set("spark.ui.port","4141")
sc = SparkContext(conf=sconf)
Explanation: Set the config options programmatically:
End of explanation
$SPARK_HOME/conf/log4j.properties.template
Explanation: Set logging levels located in the following file, or place a copy in your pwd:
End of explanation
!spark-shell --master local[2]
Explanation: Streaming
Start the Spark Shell locally with at least two threads (need a minimum of two threads for streaming, one for receiving, one for processing):
End of explanation
val ssc = new StreamingContext(new SparkConf(), Seconds(1))
val my_stream = ssc.socketTextStream(hostname, port)
Explanation: Create a StreamingContext (similar to SparkContext in core Spark) with a batch duration of 1 second:
End of explanation
val logs = ssc.socketTextStream(hostname, port)
Explanation: Get a DStream from a streaming data source (text from a socket):
End of explanation
val requests = my_stream
.map(line => (line.split(" ")(2), 1))
.reduceByKey((x, y) => x + y)
Explanation: DStreams support regular transformations such as map, flatMap, and filter, and pair transformations such as reduceByKey, groupByKey, and joinByKey.
Apply a DStream operation to each batch of RDDs (count up requests by user id, reduce by key to get the count):
End of explanation
val sorted_requests = requests
.map(pair => pair.swap)
.transform(rdd => rdd.sortByKey(false))
Explanation: The transform(function) method creates a new DStream by executing the input function on the RDDs.
End of explanation
sorted_requests.foreachRDD((rdd, time) => {
println("Top users @ " + time)
rdd.take(5).foreach(
pair => printf("User: %s (%s)\n", pair._2, pair._1))
}
Explanation: foreachRDD(function) performs a function on each RDD in the DStream (map is like a shortcut not requiring you to get the RDD first before doing an operation):
End of explanation
requests.saveAsTextFiles("/dir/requests")
Explanation: Save the DStream result part files with the given folder prefix, the actual folder will be /dir/requests-timestamp0/:
End of explanation
ssc.start()
Explanation: Start the execution of all DStreams:
End of explanation
ssc.awaitTermination()
Explanation: Wait for all background threads to complete before ending the main thread:
End of explanation
ssc.checkpoint("dir")
Explanation: Streaming with States
Enable checkpointing to prevent infinite lineages:
End of explanation
def updateCount = (newCounts: Seq[Int], state: Option[Int]) => {
val newCount = newCounts.foldLeft(0)(_ + _)
val previousCount = state.getOrElse(0)
Some(newCount + previousCount)
}
val totalUserreqs = userreqs.updateStateByKey(updateCount)
Explanation: Compute a DStream based on the previous states plus the current state:
End of explanation
val reqcountsByWindow = logs.map(line => (line.split(' ')(2), 1))
.reduceByKeyAndWindow((x: Int, y: Int) => x + y, Minutes(5), Seconds(30))
Explanation: Compute a DStream based Sliding window, every 30 seconds, count requests by user over the last 5 minutes:
End of explanation
// define listener
class MyListener extends StreamingListener {
override def onReceiverStopped(...) {
streamingContext.stop()
}
}
// attach listener
streamingContext. addStreamingListener(new MyListener())
Explanation: Collect statistics with the StreamingListener API:
End of explanation
broadcast_file = "broadcast.txt"
broadcast_list = list(map(lambda l: l.strip(), open(broadcast_file)))
Explanation: Broadcast Variables
Read in list of items to broadcast from a local file:
End of explanation
broadcast_list_sc = sc.broadcast(broadcast_list)
Explanation: Broadcast the target list to all workers:
End of explanation
log_file = "hdfs://localhost/user/logs/*"
filtered_data = sc.textFile(log_file)\
.filter(lambda line: any(item in line for item in broadcast_list_sc.value))
filtered_data.take(10)
Explanation: Filter based on the broadcast list:
End of explanation
txt_count = sc.accumulator(0)
Explanation: Accumulators
Create an accumulator:
End of explanation
my_data = sc.textFile(filePath)
my_data.foreach(lambda line: txt_count.add(1) if '.txt' in line else None)
Explanation: Count the number of txt files in the RDD:
End of explanation
jpg_count = sc.accumulator(0)
html_count = sc.accumulator(0)
css_count = sc.accumulator(0)
def countFileType(s):
if '.jpg' in s: jpg_count.add(1)
elif '.html' in s: html_count.add(1)
elif '.css' in s: css_count.add(1)
filename="hdfs://logs/*"
logs = sc.textFile(filename)
logs.foreach(lambda line: countFileType(line))
print 'File Type Totals:'
print '.css files: ', css_count.value
print '.html files: ', html_count.value
print '.jpg files: ', jpg_count.value
Explanation: Count the number of file types encountered:
End of explanation
<END_TASK> |
144 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Random Variables
Frequently, when an experiment is performed, we are interested mainly in some function of the outcome as opposed to the actual outcome itself.
For instance,<br>
1) In recent flipping a coin experiment, we may be interested in the total number of heads that occur and not care at all about the actual Head(H)–Tail(T) sequence that results. <br>
2) In throwing dice, we are often interested in the sum of the two dice and are not really concerned about the separate values of each die. That is, we may be interested in knowing that the sum is 7 and may not be concerned over whether the actual outcome was
Step1: As shown earlier in slide,<br>
A probability space $(\Omega, P)$ is an outcome space accompanied by the probabilities of all the outcomes.
<br>If you assume all eight outcomes of three tosses are equally likely, the probabilities are all 1/8
Step2: As you can see above, Product spaces(Probability spaces) get large very quickly.
If we are tossing 10 times, the outcome space would consist of the $2^{10}$ sequences of 10 elements where each element is H or T. <br>
The outcomes are a pain to list by hand, but computers are good at saving us that kind of pain.
Let's take the example of rolling a die,<br>
If we roll a die 5 times, there are almost 8,000 possible outcomes
Step3: A Function on the Outcome Space
Suppose you roll a die five times and add up the number of spots you see. If that seems artificial, be patient for a moment and you'll soon see why it's interesting.
The sum of the rolls is a numerical function on the outcome space $\Omega$ of five rolls. The sum is thus a random variable. Let's call it $S$ . Then, formally,
$S
Step4: Functions of Random Variables,
A random variable is a numerical function on $\Omega$ . Therefore by composition, a numerical function of a random variable is also a random variable.
For example, $S^2$ is a random variable, calculated as follows
Step5: There are 126 values of $\omega$ for which $S(\omega) = 10$. Since all the $\omega$ are equally likely, the chance that $S$ has the value 10 is 126/7776.
We are informal with notation and write ${ S = 10 }$ instead of ${ S \in {10} }$
Step6: The contents of the table – all the possible values of the random variable, along with all their probabilities – are called the probability distribution of $S$ , or just distribution of $S$ for short. The distribution shows how the total probability of 100% is distributed over all the possible values of $S$ .
Let's check this, to make sure that all the $\omega$ 's in the outcome space have been accounted for in the column of probabilities.
Step7: That's 1 in a computing environment, and it is true in general for the distribution of any random variable.
Probabilities in a distribution are non-negative and sum to 1.
Visualising Distribution | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from itertools import product
# from IPython.core.display import HTML
# css = open('media/style-table.css').read() + open('media/style-notebook.css').read()
# HTML('<style>{}</style>'.format(css))
one_toss = np.array(['H', 'T'])
two_tosses = list(product(one_toss, repeat=2))
two_tosses
# For three tosses, just change the number of repetitions:
three_tosses = list(product(one_toss, repeat=3))
three_tosses
Explanation: Random Variables
Frequently, when an experiment is performed, we are interested mainly in some function of the outcome as opposed to the actual outcome itself.
For instance,<br>
1) In recent flipping a coin experiment, we may be interested in the total number of heads that occur and not care at all about the actual Head(H)–Tail(T) sequence that results. <br>
2) In throwing dice, we are often interested in the sum of the two dice and are not really concerned about the separate values of each die. That is, we may be interested in knowing that the sum is 7 and may not be concerned over whether the actual outcome was: (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), or (6, 1). <br>
Also, These quantities of interest, or, more formally, these real-valued functions defined on the sample space, are known as 'Random Variables'.
Let's do an experiment with Python to demonstrate
Why we need Random Variables?
& show its importance
End of explanation
three_toss_probs = (1/8)*np.ones(8)
three_toss_space = pd.DataFrame({
'Omega':three_tosses,
'P(omega)':three_toss_probs
})
three_toss_space
Explanation: As shown earlier in slide,<br>
A probability space $(\Omega, P)$ is an outcome space accompanied by the probabilities of all the outcomes.
<br>If you assume all eight outcomes of three tosses are equally likely, the probabilities are all 1/8:
End of explanation
die = np.arange(1, 7, 1)
five_rolls = list(product(die, repeat=5))
# five_rolls = [list(i) for i in product(die, repeat=5)]
five_roll_probs = (1/6**5)**np.ones(6**5)
five_roll_space = pd.DataFrame({
'Omega':five_rolls,
'P(omega)':five_roll_probs
})
five_roll_space
Explanation: As you can see above, product spaces (probability spaces) get large very quickly.
If we are tossing 10 times, the outcome space would consist of the $2^{10}$ sequences of 10 elements where each element is H or T. <br>
The outcomes are a pain to list by hand, but computers are good at saving us that kind of pain.
Let's take the example of rolling a die,<br>
If we roll a die 5 times, there are almost 8,000 possible outcomes:
End of explanation
five_rolls_sum = pd.DataFrame({
'Omega':five_rolls,
'S(omega)':five_roll_space['Omega'].map(lambda val: sum(val)),
'P(omega)':five_roll_probs
})
five_rolls_sum
Explanation: A Function on the Outcome Space
Suppose you roll a die five times and add up the number of spots you see. If that seems artificial, be patient for a moment and you'll soon see why it's interesting.
The sum of the rolls is a numerical function on the outcome space $\Omega$ of five rolls. The sum is thus a random variable. Let's call it $S$ . Then, formally,
$S: \Omega \rightarrow { 5, 6, \ldots, 30 }$
The range of $S$ is the integers 5 through 30, because each die shows at least one and at most six spots. We can also use the equivalent notation
$\Omega \stackrel{S}{\rightarrow} { 5, 6, \ldots, 30 }$
From a computational perspective, the elements of $\Omega$ are in the column omega of five_roll_space. Let's apply this function and create a larger table.
End of explanation
five_rolls_sum[five_rolls_sum['S(omega)']==10]
Explanation: Functions of Random Variables,
A random variable is a numerical function on $\Omega$ . Therefore by composition, a numerical function of a random variable is also a random variable.
For example, $S^2$ is a random variable, calculated as follows:
$S^2(\omega) = \big{(} S(\omega)\big{)}^2$
Thus for example $S^2(\text{[6 6 6 6 6]}) = 30^2 = 900$.
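As a quick illustrative sketch (assuming the five_rolls_sum table defined above), we can compute this new random variable without modifying the table; the name s_squared is just a label chosen here:
s_squared = five_rolls_sum['S(omega)'] ** 2   # S^2 applied outcome by outcome
s_squared.head()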
Events Determined by $S$
From the table five_rolls_sum it is hard to tell how many rows show a sum of 6, or 10, or any other value. To better understand the properties of $S$, we have to organize the information in five_rolls_sum.
For any subset $A$ of the range of $S$, define the event ${S \in A}$ as
$$
{S \in A } = {\omega: S(\omega) \in A }
$$
That is, ${ S \in A}$ is the collection of all $\omega$ for which $S(\omega)$ is in $A$.
If that definition looks unfriendly, try it out in a special case. Take $A = {5, 30}$. Then ${S \in A}$ if and only if either all the rolls show 1 spot or all the rolls show 6 spots. So
$$
{S \in A} = {\text{[1 1 1 1 1], [6 6 6 6 6]}}
$$
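As a small sketch in code (again assuming the five_rolls_sum table above), the rows that make up this particular event can be pulled out with isin:
# all outcomes omega for which S(omega) is in A = {5, 30}
five_rolls_sum[five_rolls_sum['S(omega)'].isin([5, 30])]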
It is natural to ask about the chance the sum is a particular value, say 10. That's not easy to read off the table, but we can access the corresponding rows:
End of explanation
dist_S = five_rolls_sum.drop('Omega', axis=1).groupby('S(omega)', as_index=False).sum()
dist_S
Explanation: There are 126 values of $\omega$ for which $S(\omega) = 10$. Since all the $\omega$ are equally likely, the chance that $S$ has the value 10 is 126/7776.
We are informal with notation and write ${ S = 10 }$ instead of ${ S \in {10} }$:
$$
P(S = 10) = \frac{126}{7776} = 1.62\%
$$
This is how Random Variables help us quantify the results of experiments for the purpose of analysis.
i.e., random variables provide numerical summaries of the experiment in question. - Stats110, Harvard (the paragraph below is also from Stats110):
This definition is abstract but fundamental; one of the most important skills to develop when studying probability and statistics is the ability to go back and forth between abstract ideas and concrete examples. Relatedly, it is important to work on recognizing the essential pattern or structure of a problem and how it connects to problems you have studied previously. We will often discuss stories that involve tossing coins or drawing balls from urns because they are simple, convenient scenarios to work with, but many other problems are isomorphic: they have the same essential structure, but in a different guise.
Because random variables are real-valued functions, we can now apply mathematical operations to them.
Looking at Distributions
The table below shows all the possible values of $S$ along with all their probabilities. It is called a "Probability Distribution Table" for $S$ .
End of explanation
dist_S.iloc[:, 1].sum()
Explanation: The contents of the table – all the possible values of the random variable, along with all their probabilities – are called the probability distribution of $S$ , or just distribution of $S$ for short. The distribution shows how the total probability of 100% is distributed over all the possible values of $S$ .
Let's check this, to make sure that all the $\omega$ 's in the outcome space have been accounted for in the column of probabilities.
End of explanation
dist_S.iloc[:, 0], dist_S.iloc[:, 1]
s = dist_S.iloc[:, 0]
p_s = dist_S.iloc[:, 1]
dist_S = pd.concat([s, p_s], axis=1)
dist_S
dist_S.plot(x="S(omega)",y="P(omega)", kind="bar")
from prob140 import Plot
!pip install sympy
Explanation: That's 1 in a computing environment, and it is true in general for the distribution of any random variable.
Probabilities in a distribution are non-negative and sum to 1.
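A minimal sanity check of these two properties, assuming the dist_S table built above:
# probabilities are non-negative and sum to 1 (up to floating point error)
assert (dist_S.iloc[:, 1] >= 0).all()
assert np.isclose(dist_S.iloc[:, 1].sum(), 1.0)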
Visualising Distribution
End of explanation
<END_TASK> |
145 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
DBSCAN Lab
In this notebook, we will use DBSCAN to cluster several datasets. We will look at how changing the parameters (ε and min_samples) changes the resulting cluster structure.
Step1: Here is the first dataset:
Step2: Let's cluster it using DBSCAN's default settings and see what happens. We are hoping it will assign each of the three "blobs" to its own cluster. Can it manage that without any tuning?
Step3: Does that look right? Was it able to split the dataset into the three clusters we were hoping for?
As you can see, we need to make some adjustments. First, look at ε, the radius around each point. The default value in sklearn is 0.5.
Step4: From the plot we can see that an ε of 0.5 is too small for this dataset. We need to increase it so that the points within a blob fall inside each other's neighborhoods, but not so much that a single cluster ends up covering two blobs.
Exercise: change the value of ε so that each blob gets its own cluster (without any noise points). The plot shows the points in the dataset and the neighborhood around each point:
Step5: Were you able to do it? As you change the value, you can see the points merging into larger clusters and the number of noise points steadily decreasing. Once ε passes 1.6, we get the clustering we wanted; but once it exceeds 5, the two blobs merge into a single cluster. So in this case a suitable ε lies somewhere between those two values.
Dataset 2
Now let's look at a slightly trickier dataset
Step6: What happens if we run DBSCAN with the default parameter values?
Step7: In some situations this clustering makes sense, but it seems rather arbitrary. Looking at the dataset, we can imagine at least two outcomes we might want:
Scenario 1: split the dataset into three clusters: one blob on the left, one blob on the right, and one in the middle (albeit less dense than the two on the sides).
Scenario 2: split the dataset into two clusters: one blob on the left and one blob on the right, and label all the points in the middle as noise.
What DBSCAN parameter values would produce each of these scenarios? Try a few settings and see whether you can find a more suitable clustering.
Step8: The grid below plots the DBSCAN clustering results for a range of parameter values. ε varies horizontally, and each row corresponds to a different min_samples value.
Python Code:
import pandas as pd
dataset_1 = pd.read_csv('blobs.csv')[:80].values
Explanation: DBSCAN Lab
In this notebook, we will use DBSCAN to cluster several datasets. We will look at how changing the parameters (ε and min_samples) changes the resulting cluster structure.
End of explanation
%matplotlib inline
import dbscan_lab_helper as helper
helper.plot_dataset(dataset_1)
Explanation: Here is the first dataset:
End of explanation
#TODO: Import sklearn's cluster module
from sklearn import cluster
#TODO: create an instance of DBSCAN
dbscan = cluster.DBSCAN()
#TODO: use DBSCAN's fit_predict to return clustering labels for dataset_1
clustering_labels_1 = dbscan.fit_predict(dataset_1)
# Plot clustering
helper.plot_clustered_dataset(dataset_1, clustering_labels_1)
Explanation: Let's cluster it using DBSCAN's default settings and see what happens. We are hoping it will assign each of the three "blobs" to its own cluster. Can it manage that without any tuning?
End of explanation
# Plot clustering with neighborhoods
helper.plot_clustered_dataset(dataset_1, clustering_labels_1, neighborhood=True)
Explanation: Does that look right? Was it able to split the dataset into the three clusters we were hoping for?
As you can see, we need to make some adjustments. First, look at ε, the radius around each point. The default value in sklearn is 0.5.
End of explanation
# TODO: increase the value of epsilon to allow DBSCAN to find three clusters in the dataset
epsilon=2
# Cluster
dbscan = cluster.DBSCAN(eps=epsilon)
clustering_labels_2 = dbscan.fit_predict(dataset_1)
# Plot
helper.plot_clustered_dataset(dataset_1, clustering_labels_2, neighborhood=True, epsilon=epsilon)
Explanation: From the plot we can see that an ε of 0.5 is too small for this dataset. We need to increase it so that the points within a blob fall inside each other's neighborhoods, but not so much that a single cluster ends up covering two blobs.
Exercise: change the value of ε so that each blob gets its own cluster (without any noise points). The plot shows the points in the dataset and the neighborhood around each point:
End of explanation
dataset_2 = pd.read_csv('varied.csv')[:300].values
# Plot
helper.plot_dataset(dataset_2, xlim=(-14, 5), ylim=(-12, 7))
Explanation: Were you able to do it? As you change the value, you can see the points merging into larger clusters and the number of noise points steadily decreasing. Once ε passes 1.6, we get the clustering we wanted; but once it exceeds 5, the two blobs merge into a single cluster. So in this case a suitable ε lies somewhere between those two values.
Dataset 2
Now let's look at a slightly trickier dataset
End of explanation
# Cluster with DBSCAN
# TODO: Create a new isntance of DBSCAN
dbscan = cluster.DBSCAN()
# TODO: use DBSCAN's fit_predict to return clustering labels for dataset_2
clustering_labels_3 = dbscan.fit_predict(dataset_2)
# Plot
helper.plot_clustered_dataset(dataset_2,
clustering_labels_3,
xlim=(-14, 5),
ylim=(-12, 7),
neighborhood=True,
epsilon=0.5)
Explanation: What happens if we run DBSCAN with the default parameter values?
End of explanation
# TODO: Experiment with different values for eps and min_samples to find a suitable clustering for the dataset
eps=1.32
min_samples=50
# Cluster with DBSCAN
dbscan = cluster.DBSCAN(eps=eps, min_samples=min_samples)
clustering_labels_4 = dbscan.fit_predict(dataset_2)
# Plot
helper.plot_clustered_dataset(dataset_2,
clustering_labels_4,
xlim=(-14, 5),
ylim=(-12, 7),
neighborhood=True,
epsilon=0.5)
Explanation: In some situations this clustering makes sense, but it seems rather arbitrary. Looking at the dataset, we can imagine at least two outcomes we might want:
Scenario 1: split the dataset into three clusters: one blob on the left, one blob on the right, and one in the middle (albeit less dense than the two on the sides).
Scenario 2: split the dataset into two clusters: one blob on the left and one blob on the right, and label all the points in the middle as noise.
What DBSCAN parameter values would produce each of these scenarios? Try a few settings and see whether you can find a more suitable clustering.
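One rough way to judge which scenario a parameter setting produces (a sketch, assuming the clustering_labels_4 array from the cell above) is to count cluster sizes and noise points; DBSCAN labels noise as -1:
import numpy as np
labels, counts = np.unique(clustering_labels_4, return_counts=True)
print(dict(zip(labels, counts)))   # cluster label -> number of points
print('noise points: {}'.format((clustering_labels_4 == -1).sum()))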
End of explanation
eps_values = [0.3, 0.5, 1, 1.3, 1.5]
min_samples_values = [2, 5, 10, 20, 80]
helper.plot_dbscan_grid(dataset_2, eps_values, min_samples_values)
Explanation: The grid below plots the DBSCAN clustering results for a range of parameter values. ε varies horizontally, and each row corresponds to a different min_samples value.
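As a rough numeric companion to the plot grid (a sketch reusing the eps_values and min_samples_values lists defined above), we can print the number of clusters and noise points for each parameter pair:
for min_samples in min_samples_values:
    for eps in eps_values:
        labels = cluster.DBSCAN(eps=eps, min_samples=min_samples).fit_predict(dataset_2)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # -1 marks noise, not a cluster
        n_noise = int((labels == -1).sum())
        print('eps={}, min_samples={}: {} clusters, {} noise points'.format(
            eps, min_samples, n_clusters, n_noise))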
End of explanation
<END_TASK> |
146 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1 </span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3 </span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4 </span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5 </span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6 </span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.7"><span class="toc-item-num">2.7 </span>Setup - Initialize LoggingHelper</a></span></li><li><span><a href="#Setup---initialize-ProquestHNPNewspaper" data-toc-modified-id="Setup---initialize-ProquestHNPNewspaper-2.8"><span class="toc-item-num">2.8 </span>Setup - initialize ProquestHNPNewspaper</a></span><ul class="toc-item"><li><span><a href="#load-from-database" data-toc-modified-id="load-from-database-2.8.1"><span class="toc-item-num">2.8.1 </span>load from database</a></span></li><li><span><a href="#set-up-manually" data-toc-modified-id="set-up-manually-2.8.2"><span class="toc-item-num">2.8.2 </span>set up manually</a></span></li></ul></li></ul></li><li><span><a href="#Find-articles-to-be-loaded" data-toc-modified-id="Find-articles-to-be-loaded-3"><span class="toc-item-num">3 </span>Find articles to be loaded</a></span><ul class="toc-item"><li><span><a href="#Uncompress-files" data-toc-modified-id="Uncompress-files-3.1"><span class="toc-item-num">3.1 </span>Uncompress files</a></span></li><li><span><a href="#Work-with-uncompressed-files" data-toc-modified-id="Work-with-uncompressed-files-3.2"><span class="toc-item-num">3.2 </span>Work with uncompressed files</a></span></li><li><span><a href="#parse-and-load-XML-files" data-toc-modified-id="parse-and-load-XML-files-3.3"><span class="toc-item-num">3.3 </span>parse and load XML files</a></span></li><li><span><a href="#build-list-of-all-ObjectTypes" data-toc-modified-id="build-list-of-all-ObjectTypes-3.4"><span class="toc-item-num">3.4 </span>build list of all ObjectTypes</a></span></li><li><span><a href="#map-files-to-types" data-toc-modified-id="map-files-to-types-3.5"><span class="toc-item-num">3.5 </span>map files to types</a></span><ul class="toc-item"><li><span><a href="#explore-all-known-object-types" data-toc-modified-id="explore-all-known-object-types-3.5.1"><span class="toc-item-num">3.5.1 </span>explore all known object types</a></span></li><li><span><a href="#files-in-archive-Newsday_20171006231925_00050---1989" data-toc-modified-id="files-in-archive-Newsday_20171006231925_00050---1989-3.5.2"><span class="toc-item-num">3.5.2 
</span>files in archive Newsday_20171006231925_00050 - 1989</a></span></li></ul></li></ul></li><li><span><a href="#XML-Analysis" data-toc-modified-id="XML-Analysis-4"><span class="toc-item-num">4 </span>XML Analysis</a></span></li><li><span><a href="#TODO" data-toc-modified-id="TODO-5"><span class="toc-item-num">5 </span>TODO</a></span></li></ul></div>
Introduction
Back to Table of Contents
This is a notebook that expands on the OpenCalais code in the file article_coding.py, also in this folder. It includes more sections on selecting publications you want to submit to OpenCalais as an example. It is intended to be copied and re-used.
Setup
Back to Table of Contents
Setup - Debug
Back to Table of Contents
Step1: Setup - Imports
Back to Table of Contents
Step2: Setup - working folder paths
Back to Table of Contents
What data are we looking at?
Step3: Setup - logging
Back to Table of Contents
configure logging for this notebook's kernel (If you do not run this cell, you'll get the django application's logging configuration.
Step4: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you
Step5: Setup - Initialize LoggingHelper
Back to Table of Contents
Create a LoggingHelper instance to use to log debug and also print at the same time.
Preconditions
Step6: Setup - initialize ProquestHNPNewspaper
Back to Table of Contents
Create an initialize an instance of ProquestHNPNewspaper for this paper.
load from database
Back to Table of Contents
Step7: set up manually
Back to Table of Contents
Step8: If desired, add to database.
Step9: Find articles to be loaded
Back to Table of Contents
Specify which folder of XML files should be loaded into system, then process all files within the folder.
The compressed archives from proquest_hnp just contain publication XML files, no containing folder.
To process
Step10: For each *.zip file in the paper's source folder
Step11: Work with uncompressed files
Back to Table of Contents
Change working directories to the uncompressed paper path.
Step12: parse and load XML files
Back to Table of Contents
Load one of the files into memory and see what we can do with it. Beautiful Soup?
Looks like the root element is "Record", then the high-level type of the article is "ObjectType".
ObjectType values
Step13: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001
----> XML file count
Step14: Example output
Step15: explore all known object types
Back to Table of Contents
Look at all known object types to see which contain actual news content.
Step16: files in archive Newsday_20171006231925_00050 - 1989
Back to Table of Contents
Archive details | <ASSISTANT_TASK:>
Python Code:
debug_flag = False
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1 </span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3 </span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4 </span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5 </span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6 </span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.7"><span class="toc-item-num">2.7 </span>Setup - Initialize LoggingHelper</a></span></li><li><span><a href="#Setup---initialize-ProquestHNPNewspaper" data-toc-modified-id="Setup---initialize-ProquestHNPNewspaper-2.8"><span class="toc-item-num">2.8 </span>Setup - initialize ProquestHNPNewspaper</a></span><ul class="toc-item"><li><span><a href="#load-from-database" data-toc-modified-id="load-from-database-2.8.1"><span class="toc-item-num">2.8.1 </span>load from database</a></span></li><li><span><a href="#set-up-manually" data-toc-modified-id="set-up-manually-2.8.2"><span class="toc-item-num">2.8.2 </span>set up manually</a></span></li></ul></li></ul></li><li><span><a href="#Find-articles-to-be-loaded" data-toc-modified-id="Find-articles-to-be-loaded-3"><span class="toc-item-num">3 </span>Find articles to be loaded</a></span><ul class="toc-item"><li><span><a href="#Uncompress-files" data-toc-modified-id="Uncompress-files-3.1"><span class="toc-item-num">3.1 </span>Uncompress files</a></span></li><li><span><a href="#Work-with-uncompressed-files" data-toc-modified-id="Work-with-uncompressed-files-3.2"><span class="toc-item-num">3.2 </span>Work with uncompressed files</a></span></li><li><span><a href="#parse-and-load-XML-files" data-toc-modified-id="parse-and-load-XML-files-3.3"><span class="toc-item-num">3.3 </span>parse and load XML files</a></span></li><li><span><a href="#build-list-of-all-ObjectTypes" data-toc-modified-id="build-list-of-all-ObjectTypes-3.4"><span class="toc-item-num">3.4 </span>build list of all ObjectTypes</a></span></li><li><span><a href="#map-files-to-types" data-toc-modified-id="map-files-to-types-3.5"><span class="toc-item-num">3.5 </span>map files to types</a></span><ul class="toc-item"><li><span><a href="#explore-all-known-object-types" data-toc-modified-id="explore-all-known-object-types-3.5.1"><span class="toc-item-num">3.5.1 </span>explore all known object types</a></span></li><li><span><a href="#files-in-archive-Newsday_20171006231925_00050---1989" data-toc-modified-id="files-in-archive-Newsday_20171006231925_00050---1989-3.5.2"><span class="toc-item-num">3.5.2 
</span>files in archive Newsday_20171006231925_00050 - 1989</a></span></li></ul></li></ul></li><li><span><a href="#XML-Analysis" data-toc-modified-id="XML-Analysis-4"><span class="toc-item-num">4 </span>XML Analysis</a></span></li><li><span><a href="#TODO" data-toc-modified-id="TODO-5"><span class="toc-item-num">5 </span>TODO</a></span></li></ul></div>
Introduction
Back to Table of Contents
This is a notebook that expands on the OpenCalais code in the file article_coding.py, also in this folder. It includes more sections on selecting publications you want to submit to OpenCalais as an example. It is intended to be copied and re-used.
Setup
Back to Table of Contents
Setup - Debug
Back to Table of Contents
End of explanation
import datetime
import glob
import logging
import lxml
import os
import six
import xml
import xmltodict
import zipfile
Explanation: Setup - Imports
Back to Table of Contents
End of explanation
# paper identifier
paper_identifier = "Newsday"
archive_identifier = "Newsday_20171006230211_00001"
# source
source_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data"
source_paper_path = "{}/{}".format( source_paper_folder, paper_identifier )
# uncompressed
uncompressed_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/uncompressed"
uncompressed_paper_path = "{}/{}".format( uncompressed_paper_folder, paper_identifier )
# make sure an identifier is set before you make a path here.
if ( ( archive_identifier is not None ) and ( archive_identifier != "" ) ):
# identifier is set.
source_archive_file = "{}.zip".format( archive_identifier )
source_archive_path = "{}/{}".format( source_paper_path, source_archive_file )
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, archive_identifier )
#-- END check to see if archive_identifier present. --#
%pwd
# current working folder
current_working_folder = "/home/jonathanmorgan/work/django/research/work/phd_work/data/article_loading/proquest_hnp/{}".format( paper_identifier )
current_datetime = datetime.datetime.now()
current_date_string = current_datetime.strftime( "%Y-%m-%d-%H-%M-%S" )
Explanation: Setup - working folder paths
Back to Table of Contents
What data are we looking at?
End of explanation
logging_file_name = "{}/research-data_load-{}-{}.log.txt".format( current_working_folder, paper_identifier, current_date_string )
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
filename = logging_file_name,
filemode = 'w' # set to 'a' if you want to append, rather than overwrite each time.
)
Explanation: Setup - logging
Back to Table of Contents
configure logging for this notebook's kernel (If you do not run this cell, you'll get the django application's logging configuration.
End of explanation
# init django
django_init_folder = "/home/jonathanmorgan/work/django/research/work/phd_work"
django_init_path = "django_init.py"
if( ( django_init_folder is not None ) and ( django_init_folder != "" ) ):
# add folder to front of path.
django_init_path = "{}/{}".format( django_init_folder, django_init_path )
#-- END check to see if django_init folder. --#
%run $django_init_path
# context_text imports
from context_text.article_coding.article_coding import ArticleCoder
from context_text.article_coding.article_coding import ArticleCoding
from context_text.article_coding.open_calais_v2.open_calais_v2_article_coder import OpenCalaisV2ArticleCoder
from context_text.collectors.newsbank.newspapers.GRPB import GRPB
from context_text.collectors.newsbank.newspapers.DTNB import DTNB
from context_text.models import Article
from context_text.models import Article_Subject
from context_text.models import Newspaper
from context_text.shared.context_text_base import ContextTextBase
# context_text_proquest_hnp
from context_text_proquest_hnp.proquest_hnp_newspaper_helper import ProquestHNPNewspaperHelper
Explanation: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you:
have installed your virtualenv as a kernel.
choose the kernel for your virtualenv as the kernel for your notebook (Kernel --> Change kernel).
Since I use a virtualenv, need to get that activated somehow inside this notebook. One option is to run ../dev/wsgi.py in this notebook, to configure the python environment manually as if you had activated the sourcenet virtualenv. To do this, you'd make a code cell that contains:
%run ../dev/wsgi.py
This is sketchy, however, because of the changes it makes to your Python environment within the context of whatever your current kernel is. I'd worry about collisions with the actual Python 3 kernel. Better, one can install their virtualenv as a separate kernel. Steps:
activate your virtualenv:
workon research
in your virtualenv, install the package ipykernel.
pip install ipykernel
use the ipykernel python program to install the current environment as a kernel:
python -m ipykernel install --user --name <env_name> --display-name "<display_name>"
sourcenet example:
python -m ipykernel install --user --name sourcenet --display-name "research (Python 3)"
More details: http://ipython.readthedocs.io/en/stable/install/kernel_install.html
Setup - Initialize Django
Back to Table of Contents
First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings.
End of explanation
# python_utilities
from python_utilities.logging.logging_helper import LoggingHelper
# init
my_logging_helper = LoggingHelper()
my_logging_helper.set_logger_name( "proquest_hnp-article-loading-{}".format( paper_identifier ) )
log_message = None
Explanation: Setup - Initialize LoggingHelper
Back to Table of Contents
Create a LoggingHelper instance to use to log debug and also print at the same time.
Preconditions: Must be run after Django is initialized, since python_utilities is in the django path.
End of explanation
my_paper = ProquestHNPNewspaperHelper()
paper_instance = my_paper.initialize_from_database( paper_identifier )
my_paper.source_all_papers_folder = source_paper_folder
my_paper.destination_all_papers_folder = uncompressed_paper_folder
print( my_paper )
print( paper_instance )
Explanation: Setup - initialize ProquestHNPNewspaper
Back to Table of Contents
Create an initialize an instance of ProquestHNPNewspaper for this paper.
load from database
Back to Table of Contents
End of explanation
my_paper = ProquestHNPNewspaperHelper()
my_paper.paper_identifier = paper_identifier
my_paper.source_all_papers_folder = source_paper_folder
my_paper.source_paper_path = source_paper_path
my_paper.destination_all_papers_folder = uncompressed_paper_folder
my_paper.destination_paper_path = uncompressed_paper_path
my_paper.paper_start_year = 1940
my_paper.paper_end_year = 1989
my_newspaper = Newspaper.objects.get( id = 7 )
my_paper.newspaper = my_newspaper
Explanation: set up manually
Back to Table of Contents
End of explanation
phnp_newspaper_instance = my_paper.create_PHNP_newspaper()
print( phnp_newspaper_instance )
Explanation: If desired, add to database.
End of explanation
# create folder to hold the results of decompressing paper's zip files.
did_uncomp_paper_folder_exist = my_paper.make_dest_paper_folder()
Explanation: Find articles to be loaded
Back to Table of Contents
Specify which folder of XML files should be loaded into system, then process all files within the folder.
The compressed archives from proquest_hnp just contain publication XML files, no containing folder.
To process:
uncompresed paper folder ( <paper_folder> ) - make a folder in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed for the paper whose data you are working with, named the same as the paper's folder in /mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data.
for example, for the Boston Globe, name it "BostonGlobe".
uncompressed archive folder ( <archive_folder> ) - inside a given paper's folder in uncompressed, for each archive file, create a folder named the same as the archive file, but with no ".zip" at the end.
For example, for the file "BG_20171002210239_00001.zip", make a folder named "BG_20171002210239_00001".
path should be "<paper_folder>/<archive_name_no_zip>.
unzip the archive into this folder:
unzip <path_to_zip> -d <archive_folder>
Uncompress files
Back to Table of Contents
See if the uncompressed paper folder exists. If not, set flag and create it.
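For reference, a minimal sketch of the check-and-create logic that the make_dest_paper_folder() call above presumably performs (the helper's exact behaviour may differ; the paths reuse variables defined earlier):
# hypothetical stand-in for my_paper.make_dest_paper_folder()
did_folder_exist = os.path.isdir( uncompressed_paper_path )
if ( did_folder_exist == False ):
    # create the destination folder for this paper's uncompressed archives.
    os.makedirs( uncompressed_paper_path )
#-- END check to see if destination paper folder exists --#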
End of explanation
# decompress the files
my_paper.uncompress_paper_zip_files()
Explanation: For each *.zip file in the paper's source folder:
parse file name from path returned by glob.
parse the part before ".zip" from the file name. This is referred to subsequently as the "archive identifier".
check if folder named the same as the "archive identifier" is present.
If no:
create it.
then, uncompress the archive into it.
If yes:
output a message. Don't want to uncompress if it was already uncompressed once.
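The sketch below walks through these steps with glob, os, and zipfile (all imported earlier); it only illustrates the logic, and the uncompress_paper_zip_files() call above is what actually does the work:
# hypothetical per-archive loop - the helper method encapsulates the real implementation.
for zip_file_path in glob.glob( "{}/*.zip".format( source_paper_path ) ):
    # parse file name and archive identifier from path.
    zip_file_name = os.path.basename( zip_file_path )
    archive_id = zip_file_name[ : -len( ".zip" ) ]
    archive_folder_path = "{}/{}".format( uncompressed_paper_path, archive_id )
    if ( os.path.isdir( archive_folder_path ) == False ):
        # folder not present - create it, then uncompress the archive into it.
        os.makedirs( archive_folder_path )
        with zipfile.ZipFile( zip_file_path ) as zip_archive:
            zip_archive.extractall( archive_folder_path )
        #-- END with zipfile.ZipFile() --#
    else:
        # already uncompressed once - skip.
        print( "Archive {} already uncompressed - skipping.".format( archive_id ) )
    #-- END check to see if archive folder exists --#
#-- END loop over zip files --#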
End of explanation
%cd $uncompressed_paper_path
%ls
Explanation: Work with uncompressed files
Back to Table of Contents
Change working directories to the uncompressed paper path.
End of explanation
# loop over files in the current archive folder path.
object_type_to_count_map = my_paper.process_archive_object_types( uncompressed_archive_path )
Explanation: parse and load XML files
Back to Table of Contents
Load one of the files into memory and see what we can do with it. Beautiful Soup?
Looks like the root element is "Record", then the high-level type of the article is "ObjectType".
ObjectType values:
Advertisement
...
Good options for XML parser:
lxml.etree - https://stackoverflow.com/questions/12290091/reading-xml-file-and-fetching-its-attributes-value-in-python
xmltodict - https://docs.python-guide.org/scenarios/xml/
beautifulsoup using lxml
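A hedged sketch of pulling ObjectType out of a single record with xmltodict (xml_file_path below is a placeholder; the element names are taken from the description above and could vary by archive):
# hypothetical single-file parse; replace xml_file_path with a real record file.
xml_file_path = "{}/example_record.xml".format( uncompressed_archive_path )
with open( xml_file_path ) as xml_file:
    record_dict = xmltodict.parse( xml_file.read() )
#-- END with open() --#
object_type = record_dict.get( "Record", {} ).get( "ObjectType", None )
print( object_type )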
End of explanation
xml_folder_list = glob.glob( "{}/*".format( uncompressed_paper_path ) )
print( "folder_list: {}".format( xml_folder_list ) )
# build map of all object types for a paper to the overall counts of each
paper_object_type_to_count_map = my_paper.process_paper_object_types()
Explanation: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001
----> XML file count: 5752
Counters:
- Processed 5752 files
- No Record: 0
- No ObjectType: 0
- No ObjectType value: 0
ObjectType values and occurrence counts:
- A|d|v|e|r|t|i|s|e|m|e|n|t: 1902
- Article|Feature: 1792
- N|e|w|s: 53
- Commentary|Editorial: 36
- G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 488
- S|t|o|c|k| |Q|u|o|t|e: 185
- Advertisement|Classified Advertisement: 413
- E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 31
- Correspondence|Letter to the Editor: 119
- Front Matter|Table of Contents: 193
- O|b|i|t|u|a|r|y: 72
- F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 107
- I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 84
- Marriage Announcement|News: 6
- I|l|l|u|s|t|r|a|t|i|o|n: 91
- R|e|v|i|e|w: 133
- C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 30
- News|Legal Notice: 17
build list of all ObjectTypes
Back to Table of Contents
Loop over all folders in the paper path. For each folder, grab all files in the folder. For each file, parse XML, then get the ObjectType value and, if it isn't already in the map of object types to counts, add it. Increment its count.
From command line, in the uncompressed BostonGlobe folder:
find . -type f -iname "*.xml" | wc -l
resulted in 11,374,500 articles. That is quite a few.
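A compact sketch of this counting loop (the process_paper_object_types() call above is the real implementation; this version just illustrates the idea with collections.Counter):
from collections import Counter
object_type_counter = Counter()
for archive_folder_path in glob.glob( "{}/*".format( uncompressed_paper_path ) ):
    for xml_file_path in glob.glob( "{}/*.xml".format( archive_folder_path ) ):
        with open( xml_file_path ) as xml_file:
            record_dict = xmltodict.parse( xml_file.read() )
        #-- END with open() --#
        object_type_counter[ record_dict.get( "Record", {} ).get( "ObjectType", None ) ] += 1
    #-- END loop over XML files --#
#-- END loop over archive folders --#
print( object_type_counter.most_common() )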
End of explanation
news_object_type_list = []
news_object_type_list.append( 'Article|Feature' )
news_object_type_list.append( 'Feature|Article' )
news_object_type_list.append( 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' )
Explanation: Example output:
XML file count: 5752
Counters:
- Processed 5752 files
- No Record: 0
- No ObjectType: 0
- No ObjectType value: 0
ObjectType values and occurrence counts:
- A|d|v|e|r|t|i|s|e|m|e|n|t: 2114224
- Feature|Article: 5271887
- I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 249942
- O|b|i|t|u|a|r|y: 625143
- G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 1083164
- S|t|o|c|k| |Q|u|o|t|e: 202776
- N|e|w|s: 140274
- I|l|l|u|s|t|r|a|t|i|o|n: 106925
- F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 386421
- E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 78993
- Editorial|Commentary: 156342
- C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 68356
- Classified Advertisement|Advertisement: 291533
- R|e|v|i|e|w: 86889
- Table of Contents|Front Matter: 69798
- Letter to the Editor|Correspondence: 202071
- News|Legal Notice: 24053
- News|Marriage Announcement: 41314
- B|i|r|t|h| |N|o|t|i|c|e: 926
- News|Military/War News: 3
- U|n|d|e|f|i|n|e|d: 5
- Article|Feature: 137526
- Front Matter|Table of Contents: 11195
- Commentary|Editorial: 3386
- Marriage Announcement|News: 683
- Correspondence|Letter to the Editor: 7479
- Legal Notice|News: 1029
- Advertisement|Classified Advertisement: 12163
map files to types
Back to Table of Contents
Choose a directory, then loop over the files in the directory to build a map of types to lists of file names.
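A minimal sketch of the mapping described here (the map_archive_folder_files_to_types() helper used below is the real implementation; this is only an illustration using collections.defaultdict):
from collections import defaultdict
type_to_file_list_map = defaultdict( list )
for xml_file_path in glob.glob( "{}/*.xml".format( uncompressed_archive_path ) ):
    with open( xml_file_path ) as xml_file:
        record_dict = xmltodict.parse( xml_file.read() )
    #-- END with open() --#
    object_type = record_dict.get( "Record", {} ).get( "ObjectType", None )
    type_to_file_list_map[ object_type ].append( xml_file_path )
#-- END loop over XML files --#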
End of explanation
# get list of all object types
master_object_type_list = my_paper.get_all_object_types()
print( "Object Types: {}".format( master_object_type_list ) )
# directory to work in.
uncompressed_archive_folder = "Newsday_20171006231925_00050"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
#types_to_output = news_object_type_list
types_to_output = [ "Advertisement|Classified Advertisement" ]
types_to_output = [ "A|d|v|e|r|t|i|s|e|m|e|n|t" ]
types_to_output = [ 'Advertisement|Classified Advertisement' ]
types_to_output = [ 'Article|Feature' ]
types_to_output = [ 'B|i|r|t|h| |N|o|t|i|c|e' ]
types_to_output = [ 'Classified Advertisement|Advertisement' ]
types_to_output = [ 'Commentary|Editorial' ]
types_to_output = [ 'Correspondence|Letter to the Editor' ]
types_to_output = [ 'C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t' ]
types_to_output = [ 'E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c' ]
types_to_output = [ 'Editorial|Commentary' ]
types_to_output = [ 'Feature|Article' ]
types_to_output = [ 'Front Matter|Table of Contents' ]
types_to_output = [ 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' ]
types_to_output = [ 'G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n' ]
types_to_output = [ 'I|l|l|u|s|t|r|a|t|i|o|n' ]
types_to_output = [ 'I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h' ]
types_to_output = [ 'Legal Notice|News' ]
types_to_output = [ 'Letter to the Editor|Correspondence' ]
types_to_output = [ 'Marriage Announcement|News' ]
types_to_output = [ 'N|e|w|s' ]
types_to_output = [ 'News|Legal Notice' ]
types_to_output = [ 'News|Marriage Announcement' ]
types_to_output = [ 'News|Military/War News' ]
types_to_output = [ 'O|b|i|t|u|a|r|y' ]
types_to_output = [ 'R|e|v|i|e|w' ]
types_to_output = [ 'S|t|o|c|k| |Q|u|o|t|e' ]
types_to_output = [ 'Table of Contents|Front Matter' ]
types_to_output = [ 'Table Of Contents|Front Matter' ]
types_to_output = [ 'U|n|d|e|f|i|n|e|d' ]
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
Explanation: explore all known object types
Back to Table of Contents
Look at all known object types to see which contain actual news content.
End of explanation
# directory to work in.
uncompressed_archive_folder = "Newsday_20171006231925_00050"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
types_to_output = news_object_type_list
# declare variables
xml_file_path_list = None
xml_file_path_count = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_count = len( xml_file_path_list )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
Explanation: files in archive Newsday_20171006231925_00050 - 1989
Back to Table of Contents
Archive details:
ID: 496
Newspaper: 2 - Newsday - Newsday
archive_identifier: Newsday_20171006231925_00050
min_date: 1941-05-06
max_date: 1989-12-30
path: /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/Newsday/Newsday_20171006231925_00050
End of explanation
<END_TASK> |
147 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will
Step1: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
Step2: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
Step3: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step4: Split data into training, test, and validation sets
Step5: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays
Step6: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT
Step7: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
Step8: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step9: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note
Step10: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0
Step11: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
Step12: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0
Step13: The subtraction operator (-) in Numpy is vectorized as follows
Step14: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step15: Aside
Step16: To test the code above, run the following cell, which should output a value -0.0934339605842
Step17: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specifiy the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
Step18: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint
Step19: To test the code above, run the following cell, which should output a value 0.0237082324496
Step20: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step21: QUIZ QUESTIONS
Take the query house to be third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
What is the predicted value of the query house based on 1-nearest neighbor regression?
Step22: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint
Step23: QUIZ QUESTION
Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
Step24: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters
Step25: QUIZ QUESTION
Again taking the query house to be third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
Step26: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters
Step27: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
What is the predicted value of this house?
Step28: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following
Step29: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value
Step30: QUIZ QUESTION
What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set. | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append(r'C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
Explanation: Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will:
* Find the k-nearest neighbors of a given query input
* Predict the output for the query input using the k-nearest neighbors
* Choose the best value of k using a validation set
Fire up GraphLab Create
End of explanation
sales = graphlab.SFrame('kc_house_data_small.gl/')
Explanation: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
normalized_features = feature_matrix/norms
return (normalized_features, norms)
Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
End of explanation
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
Explanation: Split data into training, test, and validation sets
End of explanation
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
Explanation: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:
End of explanation
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.
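A quick check of this (a sketch, not part of the assignment): each column of the normalized training matrix should now have unit 2-norm, while the test and validation columns generally will not, since they were scaled by the training norms.
# every training column should have norm 1 after normalization
print np.allclose(np.linalg.norm(features_train, axis=0), 1.0)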
End of explanation
print features_test[0]
Explanation: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
End of explanation
print features_train[9]
Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
End of explanation
euclidean_distance = np.sqrt(np.sum((features_train[9] - features_test[0])**2))
print euclidean_distance
Explanation: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.
Slide 16
Euclidean distance:
$distance(x_j, x_q) = \sqrt{a_1(x_j[1]-x_q[1])^2 + \ldots + a_d(x_j[d]-x_q[d])^2}$
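As a sanity check on the manual computation above (not a replacement for it, since the exercise asks for np.sqrt and np.sum), the 2-norm of the difference vector should give the same value:
# cross-check: np.linalg.norm of the difference equals the Euclidean distance computed above
print np.linalg.norm(features_train[9] - features_test[0])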
End of explanation
dist_dict = {}
for i in range(0,10):
dist_dict[i] = np.sqrt(np.sum((features_train[i] - features_test[0])**2))
print (i, np.sqrt(np.sum((features_train[i] - features_test[0])**2)))
Explanation: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. Through restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
print min(dist_dict.items(), key=lambda x: x[1])
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
End of explanation
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
Explanation: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
End of explanation
print features_train[0:3] - features_test[0]
Explanation: The subtraction operator (-) in Numpy is vectorized as follows:
End of explanation
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
End of explanation
diff = features_train - features_test[0]
Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.
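A compact way to write such a test (a sketch using the arrays from the cells above) is np.allclose:
# the vectorized differences should match the looped differences element-wise
assert np.allclose(features_train[0:3] - features_test[0],
                   [features_train[i] - features_test[0] for i in xrange(3)])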
Perform 1-nearest neighbor regression
Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.
End of explanation
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842:
End of explanation
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specifiy the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
End of explanation
distances = np.sqrt(np.sum(diff**2, axis=1))
Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint: Do not forget to take the square root of the sum of squares.
End of explanation
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496:
End of explanation
def compute_distances(train_matrix, query_vector):
diff = train_matrix - query_vector
distances = np.sqrt(np.sum(diff**2, axis=1))
return distances
Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
End of explanation
third_house_distance = compute_distances(features_train, features_test[2])
print third_house_distance.argsort()[:1], min(third_house_distance)
print third_house_distance[382]
print np.argsort(third_house_distance, axis = 0)[:4]
print output_train[382]
Explanation: QUIZ QUESTIONS
Take the query house to be third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
What is the predicted value of the query house based on 1-nearest neighbor regression?
End of explanation
def compute_k_nearest_neighbors(k, features_matrix, feature_vector):
distances = compute_distances(features_matrix, feature_vector)
return np.argsort(distances, axis = 0)[:k]
Explanation: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint: Look at the documentation for np.argsort.
End of explanation
print compute_k_nearest_neighbors(4, features_train, features_test[2])
Explanation: QUIZ QUESTION
Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
End of explanation
def compute_distances_k_avg(k, features_matrix, output_values, feature_vector):
    k_neighbors = compute_k_nearest_neighbors(k, features_matrix, feature_vector)
    avg_value = np.mean(output_values[k_neighbors])
return avg_value
Explanation: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature vector of the query house, whose price we are predicting.
The function should return a predicted value of the query house.
Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.
End of explanation
print compute_distances_k_avg(4, features_train, output_train, features_test[2])
Explanation: QUIZ QUESTION
Again taking the query house to be third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
End of explanation
print features_test[0:10].shape[0]
def compute_distances_k_all(k, features_matrix, output_values, feature_vector):
num_of_rows = feature_vector.shape[0]
predicted_values = []
for i in xrange(num_of_rows):
        avg_value = compute_distances_k_avg(k, features_matrix, output_values, feature_vector[i])
predicted_values.append(avg_value)
return predicted_values
Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature matrix for the query set.
The function should return a set of predicted values, one for each house in the query set.
Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.
End of explanation
predicted_values = compute_distances_k_all(10, features_train, output_train, features_test[0:10])
print predicted_values
print predicted_values.index(min(predicted_values))
print min(predicted_values)
Explanation: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
What is the predicted value of this house?
End of explanation
rss_all = []
for k in range(1,16):
predict_value = compute_distances_k_all(k, features_train, output_train, features_valid)
residual = (output_valid - predict_value)
rss = sum(residual**2)
rss_all.append(rss)
print rss_all
print rss_all.index(min(rss_all))
Explanation: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:
For k in [1, 2, ..., 15]:
Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.
Computes the RSS for these predictions on the VALIDATION set
Stores the RSS computed above in rss_all
Report which k produced the lowest RSS on VALIDATION set.
(Depending on your computing environment, this computation may take 10-15 minutes.)
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
Explanation: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
End of explanation
predict_value = compute_distances_k_all(14, features_train, output_train, features_test)
residual = (output_test - predict_value)
rss = sum(residual**2)
print rss
Explanation: QUIZ QUESTION
What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set.
End of explanation
<END_TASK> |
148 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Release of hammer-cli gem
Requirements
push access to https
Step1: Update the following notebook settings
Step2: Ensure the repo is up to date
Step3: Cherry picks for minor release
Step4: Run tests localy
Step5: Update release related stuff
Step6: Manual step
Step7: Commit changes
Step8: Update translations
Step9: Tag new version
Step10: Prepare stable branch for major release
Step11: Build the gem
Step12: Bump the develop version for major release
Step13: PUSH the changes upstream If everything is correct
Step14: Now the new release is in upstream repo
Some manual steps follow to improve the UX
New relase on GitHub
Copy the following changelog lines to the description in form on link below
The release title is the new version. | <ASSISTANT_TASK:>
Python Code:
%cd ..
Explanation: Release of hammer-cli gem
Requirements
push access to https://github.com/theforeman/hammer-cli
push access to rubygems.org for hammer-cli
sudo yum install transifex-client python-slugify asciidoc
ensure neither the git push or gem push don't require interractive auth. If you can't use api key or ssh key to auth skip these steps and run them form the shell manually
to push translations you need an account on Transifex (https://docs.transifex.com/client/client-configuration#~/-transifexrc)
Release process
Follow the steps with <Shift>+<Enter> or <Ctrl>+<Enter>,<Down>
If anything fails, fix it and re-run the step if applicable
Release settings
End of explanation
NEW_VERSION = '3.0.0'
LAST_VERSION = '2.5.1'
DEVELOP_VERSION = '2.6.0-develop'
NEXT_FUTURE_VERSION = '3.0.0'
MAJOR_RELEASE = True
STABLE_BRANCH = '2.5-stable'
GIT_REMOTE_UPSTREAM = 'origin'
WORK_BRANCH = 'master' if MAJOR_RELEASE else STABLE_BRANCH
CHERRY_PICKS = []
Explanation: Update the following notebook settings
End of explanation
! git checkout {WORK_BRANCH}
! git fetch {GIT_REMOTE_UPSTREAM}
! git rebase {GIT_REMOTE_UPSTREAM}/{WORK_BRANCH}
Explanation: Ensure the repo is up to date
End of explanation
if not MAJOR_RELEASE:
for cp in CHERRY_PICKS:
! git cherry-pick -x {cp}
Explanation: Cherry picks for minor release
End of explanation
! bundle update
! bundle exec rake test
Explanation: Run tests localy
End of explanation
! sed -i 's/Gem::Version.new .*/Gem::Version.new "{NEW_VERSION}"/' lib/hammer_cli/version.rb
# Parse git changelog
from IPython.display import Markdown as md
from subprocess import check_output
from shlex import split
import re
def format_log_entry(entry):
issues = re.findall(r'[^(]#([0-9]+)', entry)
entry = re.sub(r'([fF]ixes|[rR]efs)[^-]*-\s*(.*)', r'\2', entry)
entry = '* ' + entry.capitalize()
entry = re.sub(r'\(#([0-9]+)\)', r'([PR #\1](https://github.com/theforeman/hammer-cli/pull/\1))', entry)
for i in issues:
referenced_issues.append(i)
entry = entry + ', [#%s](http://projects.theforeman.org/issues/%s)' % (i, i)
return entry
def skip(entry):
if re.match(r'Merge pull', entry) or \
re.match(r'^i18n', entry) or \
re.match(r'^Bump to version', entry):
return True
else:
return False
referenced_issues = []
git_log_cmd = 'git log --pretty=format:"%%s" %s..HEAD' % LAST_VERSION
log = check_output(split(git_log_cmd)).decode('utf8').split('\n')
change_log = [format_log_entry(e) for e in log if not skip(e)]
md('\n'.join(change_log))
# Write release notes
from datetime import datetime
import fileinput
import sys
fh = fileinput.input('doc/release_notes.md', inplace=True)
for line in fh:
print(line.rstrip())
if re.match(r'========', line):
print('### %s (%s)' % (NEW_VERSION, datetime.today().strftime('%Y-%m-%d')))
for entry in change_log:
print(entry)
print('')
fh.close()
Explanation: Update release related stuff
End of explanation
! git add -u
! git status
! git diff --cached
Explanation: Manual step: Update deps in the gemspec if necessary
Check what is going to be committed
End of explanation
! git commit -m "Bump to {NEW_VERSION}"
Explanation: Commit changes
End of explanation
if MAJOR_RELEASE:
! make -C locale/ tx-update
Explanation: Update translations
End of explanation
! git tag {NEW_VERSION}
Explanation: Tag new version
End of explanation
if MAJOR_RELEASE:
! git checkout -b {STABLE_BRANCH}
! git push {GIT_REMOTE_UPSTREAM} {STABLE_BRANCH}
! git checkout {WORK_BRANCH}
Explanation: Prepare stable branch for major release
End of explanation
! rake build
! gem push pkg/hammer_cli-{NEW_VERSION}.gem
Explanation: Build the gem
End of explanation
if MAJOR_RELEASE:
! sed -i 's/Gem::Version.new .*/Gem::Version.new "{DEVELOP_VERSION}"/' lib/hammer_cli/version.rb
if MAJOR_RELEASE:
! git add -u
! git status
if MAJOR_RELEASE:
! git diff --cached
if MAJOR_RELEASE:
! git commit -m "Bump to {DEVELOP_VERSION}"
Explanation: Bump the develop version for major release
End of explanation
! git push {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}
! git push --tags {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}
Explanation: PUSH the changes upstream If everything is correct
End of explanation
print('\n')
print('\n'.join(change_log))
print('\n\nhttps://github.com/theforeman/hammer-cli/releases/new?tag=%s' % NEW_VERSION)
from IPython.display import Markdown as md
md('### Create new hammer-cli release in Redmine \n' + \
'<a href="https://projects.theforeman.org/projects/hammer-cli/versions/new" target="_blank">https://projects.theforeman.org/projects/hammer-cli/versions/new</a>\n\n' + \
'Set name to hammer-cli-%s' % (NEXT_FUTURE_VERSION if MAJOR_RELEASE else NEW_VERSION))
if not MAJOR_RELEASE:
print('Set fixed in versions to %s in following issues:' % NEW_VERSION)
for i in referenced_issues:
print('- https://projects.theforeman.org/issues/%s' % i)
Explanation: Now the new release is in upstream repo
Some manual steps follow to improve the UX
New relase on GitHub
Copy the following changelog lines to the description in form on link below
The release title is the new version.
End of explanation
<END_TASK> |
149 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Arrays for Numerical work?
Step2: You can see, adding two lists just results in a longer list, catenation of the two. | <ASSISTANT_TASK:>
Python Code:
a = [1,2,3]
b = [4,5,6]
c = a+b
print(c)
Explanation: Arrays for Numerical work?
End of explanation
a.append(b)
print(a)
def sum(data):
    """sum the elements of an array"""
asum = 0.0
for i in data:
asum = asum + i
return asum
# the length of the array is defined here, and re-used below
# to test performance, we can make this number very large
# 1000, 1000000 etc.
n = 10
%%time
a = list(range(n))
%%time
print(sum(a))
import numpy as np
%%time
a=np.arange(n)
%%time
print(sum(a))
#%%time
%time print(a.sum())
%time print(a.sum())
a=np.arange(10)
b=np.arange(10)
c = a + b
d = 3*a*a + b + 2.0
print(c)
print(d)
c.shape
c2=c.reshape(5,2)
c3=c.reshape(2,5)
print(c)
print(c2)
print(c3)
type(c)
c[0]=999
print(c2)
d2=c.reshape(5,2)[1:3,:]
print(d2)
d2[1,1]=888
print(c)
print(c2)
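# Side note (illustrative): reshape returns a view of c, so use .copy() when an independent
# array is wanted instead of shared memory.
e2 = c.reshape(5,2).copy()
e2[0,0] = -1
print(c[0])   # still 999 -- modifying the copy does not touch the original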
Explanation: You can see, adding two lists just results in a longer list, catenation of the two.
End of explanation
<END_TASK> |
150 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
View in Colaboratory
<img height="60" width="120" src="https
Step1: visit http
Step2: What happen when we change the tree depth?
Step3: Try by yourself
Step4: Random Forest | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sklearn.preprocessing import Imputer
from sklearn import tree
from sklearn import metrics
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
#train_df = pd.read_csv("titanic.csv")
#test_df = pd.read_csv("titanic_test.csv")
from google.colab import files
import io
uploaded = files.upload()
train_df = pd.read_csv(io.StringIO(uploaded['titanic.csv'].decode('utf-8')))
uploaded = files.upload()
test_df = pd.read_csv(io.StringIO(uploaded['titanic_test.csv'].decode('utf-8')))
train_df.head()
train_df["Sex"] = train_df["Sex"].apply(lambda sex: 0 if sex == 'male' else 1)
y = targets = labels = train_df["Survived"].values
columns = ["Fare", "Pclass", "Sex", "Age", "SibSp"]
features = train_df[list(columns)].values
features
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
X = imp.fit_transform(features)
X
my_tree_one = tree.DecisionTreeClassifier(criterion="entropy", max_depth=3)
my_tree_one = my_tree_one.fit(X, y)
#The feature_importances_ attribute makes it simple to interpret the significance of the predictors you include
print(my_tree_one.feature_importances_)
print(my_tree_one.score(X, y))
with open("titanic.dot", 'w') as f:
f = tree.export_graphviz(my_tree_one, out_file=f, feature_names=columns)
files.download('titanic.dot')
Explanation: View in Colaboratory
<img height="60" width="120" src="https://shwetkm.github.io/upxlogo.png"></img>
UpX Academy - Machine Learning Track
Decision Trees & Random Forest
Titanic Dataset
End of explanation
test_df.head()
test_df["Sex"] = test_df["Sex"].apply(lambda sex: 0 if sex == 'male' else 1)
#features_test = train_df[list(columns)].values
features_test = test_df[list(columns)].values
imp_test = Imputer(missing_values='NaN', strategy='mean', axis=0)
X_test = imp_test.fit_transform(features_test)
X_test
pred = my_tree_one.predict(X_test)
pred
#Print Confusion matrix
pred = my_tree_one.predict(X)
df_confusion = metrics.confusion_matrix(y, pred)
df_confusion
def plot_confusion_matrix(df_confusion, title='Confusion matrix', cmap=plt.cm.gray_r):
plt.matshow(df_confusion, cmap=cmap) # imshow
plt.title('Confusion Matrix')
plt.colorbar()
plt.ylabel('Actual')
plt.xlabel('Predicted')
plot_confusion_matrix(df_confusion)
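# Optional illustration: a precision/recall summary for the same predictions,
# using only objects already defined above (y and pred).
print(metrics.classification_report(y, pred))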
Explanation: visit http://webgraphviz.com/ to visualize the tree
End of explanation
#Setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = 10, min_samples_split = 5, random_state = 1)
my_tree_two = my_tree_two.fit(X, y)
#Print the score of the new decision tree
print(my_tree_two.score(X, y))
pred = my_tree_two.predict(X)
df_confusion = metrics.confusion_matrix(y, pred)
df_confusion
def plot_confusion_matrix(df_confusion, title='Confusion matrix', cmap=plt.cm.gray_r):
plt.matshow(df_confusion, cmap=cmap) # imshow
plt.title('Confusion Matrix')
plt.colorbar()
plt.ylabel('Actual')
plt.xlabel('Predicted')
plot_confusion_matrix(df_confusion)
Explanation: What happen when we change the tree depth?
End of explanation
# Add new feature and then train the model
train_df["family_size"] = train_df.SibSp + train_df.Parch + 1
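# Hypothetical sketch for the "try by yourself" exercise: retrain the tree with the new
# family_size feature, reusing the same preprocessing as the earlier cells (not a tuned model).
columns_fs = ["Fare", "Pclass", "Sex", "Age", "SibSp", "family_size"]
X_fs = Imputer(missing_values='NaN', strategy='mean', axis=0).fit_transform(train_df[columns_fs].values)
my_tree_fs = tree.DecisionTreeClassifier(max_depth=10, min_samples_split=5, random_state=1)
my_tree_fs = my_tree_fs.fit(X_fs, y)
print(my_tree_fs.score(X_fs, y))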
Explanation: Try by yourself
End of explanation
from sklearn.ensemble import RandomForestClassifier
# Building and fitting my_forest
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2, n_estimators = 100, random_state = 1)
my_forest = forest.fit(X, y)
# Print the score of the fitted random forest
print(my_forest.score(X, y))
pred = my_forest.predict(X)
df_confusion = metrics.confusion_matrix(y, pred)
df_confusion
fpr, tpr, _ = metrics.roc_curve(y, pred)
roc_auc = metrics.auc(fpr, tpr)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b',
label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# ROC curve on Predicted probabilities
pred_proba = my_forest.predict_proba(X)
fpr, tpr, _ = metrics.roc_curve(y, pred_proba[:,1])
roc_auc = metrics.auc(fpr, tpr)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b',
label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
Explanation: Random Forest
End of explanation
<END_TASK> |
151 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Pandas Crash Course
<img style="float
Step1: Pandas Series and DataFrame objects
There are two main data structures in pandas
Step2: Dataframes use the IPython display method to look pretty, but will show just fine when printed also. (There's a way to make all of the dataframes print pretty via the IPython.display.display method, but this isn't necessary to view the values)
Step3: Indices can be named
Step4: Data Input Output
Step5: CSV Files
Step6: hdf5 files
Step7: Data types
Show the datatypes of each column
Step8: We can create dataframes of multiple datatypes
Step9: We can also set the 'cow color' column to a category
Step10: Indexing and Setting Data
Pandas does a lot of different operations, here are the meat and potatoes. The following describes the indexing of data, but setting the data is as simple as a reassignment.
Step11: Head and Tail
Print the beginning and ending entries of a pandas data structure
Step12: We can also separate the metadata (labels, etc) from the data, yielding a numpy-like output.
Step13: Indexing Data
Pandas provides the means to index data via named columns, or as numpy like indices. Indexing is [row, column], just as it was in numpy.
Data is visible via column
Step14: Note that tab completion is enabled for column names
Step15: <div>
<img style="float
Step16: Label based indexing (.loc)
Slice based on the labels.
Step17: Integer based indexing (.iloc)
Slice based on the index number.
Step18: Fast single element label indexing (.at) - fast .loc
Intended for fast, single indexes.
Step19: Fast single element label indexing (.iat) - fast .iloc
Intended for fast, single indexes.
Step20: Logical indexing
A condition is used to select the values within a slice or the entire Pandas object. Using a conditional statement, a true/false DataFrame is produced
Step21: That matrix can then be used to index the DataFrame
Step22: Logical indexing via isin
It's also possible to filter via the index value
Step23: Sorting by column
Step24: Dealing with Missing Data
By convention, pandas uses the NaN value to represent missing data. There are a few functions surrounding the handling of NaN values
Step25: Print a logical DataFrame where NaN is located
Step26: Drop all rows with NaN
Step27: Replace NaN entries
Step28: Concatenating and Merging Data
Bringing together DataFrames or Series objects
Step29: Append
Adding an additional group after the first group
Step30: SQL-like merging
Pandas can do structured query language (SQL) like merges of data
Step31: Grouping Operations
Often, there is a need to summarize the data or change the output of the data to make it easier to work with, especially for categorical data types.
Step32: Pivot Table
Without changing the data in any way, summarize the output in a different format. Specify the indices, columns, and values
Step33: Stacking
Column labels can be brought into the rows.
Step34: Groupby
Groupby groups values, creating a Python object to which functions can be applied
Step35: Operations on Pandas Data Objects
Whether it's the entire data frame or a series within a single dataframe, there are a variety of methods that can be applied. Here's a list of a few helpful ones
Step36: Rotation
Note that the values rotated out leave NaN behind
Step37: Add, subtract, multiply, divide
Step38: Histogram
Step39: Describe
Excluding NaN values, print some descriptive statistics about the collection of values.
Step40: Transpose
Exchange the rows and columns (flip about the diagonal)
Step41: Applying Any Function to Pandas Data Objects
Pandas objects have methods that allow function to be applied with greater control, namely the .apply function
Step42: Lambda functions may also be used
Step43: String functions
Step44: Plotting
Pandas exposes the matplotlib library for use.
Step45: Matplotlib styles are available too | <ASSISTANT_TASK:>
Python Code:
# Python imports
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: Pandas Crash Course
<img style="float: right; border:3px solid black" src="images/10_Panda_DailyMail_7_Nov_2013.jpg" border="5" width=30%>
Pandas is a Python package that aims to make working with data as easy and intuitive as possible. It fills the role of a foundational real world data manipulation library and interfaces with many other Python packages.
By the end of this file you should have seen simple examples of:
Pandas Series and DataFrame objects
Data IO
Data types
Indexing and setting data
Dealing with missing data
Concatinating and merging data
Grouping Operations
Operations on Pandas data objects
Applying any function to Pandas data objects
Plotting
Further Reading:
http://pandas.pydata.org/pandas-docs/stable/10min.html
https://pandas.pydata.org/pandas-docs/stable/comparison_with_sql.html#compare-with-sql-join
Image Credit: David Jenkins at Bifengxia Panda Reserve in Chengdu
End of explanation
series1 = pd.Series([1,2,3,4])
print(series1)
Explanation: Pandas Series and DataFrame objects
There are two main data structures in pandas:
- Series (1 dimensional data)
- Dataframes (2 dimensional data)
- There are other, lesser used data structures used for higher dimensional data, but are less frequently used
- Panel (3 dimensional data) - panel will be removed from future versions of Pandas and replaced with xarray
- Xarray (>2 dimensions)
Here, the 1- and 2-dimensional data sets are the focus of this lesson.
Pandas DataFrames are analogus to R's data.frame, but aim to provide additional functionality.
Both dataframes and series data structures have indicies, which are shown on the left:
End of explanation
df1 = pd.DataFrame([[1,2,3,4],[10,20,30,40]])
print(df1)
df1
Explanation: Dataframes use the IPython display method to look pretty, but will show just fine when printed also. (There's a way to make all of the dataframes print pretty via the IPython.display.display method, but this isn't necessary to view the values):
End of explanation
# Rename the columns
df1.columns = ['A','B','C','D']
df1.index = ['zero','one']
df1
# Create the dataframe with the columns
df1 = pd.DataFrame([[1,2,3,4],[10,20,30,40]], columns=['A','B','C',"D"], index=['zero','one'])
df1
Explanation: Indices can be named:
End of explanation
df1 = pd.DataFrame(np.random.randn(5,4), columns = ['A','B','C','D'], index=['zero','one','two','three','four'])
print(df1)
Explanation: Data Input Output
End of explanation
df1.to_csv('datafiles/pandas_df1.csv')
!ls datafiles
df2 = pd.read_csv('datafiles/pandas_df1.csv', index_col=0)
print(df2)
Explanation: CSV Files
End of explanation
df1.to_hdf('datafiles/pandas_df1.h5', 'df')
!ls datafiles
df2 = pd.read_hdf('datafiles/pandas_df1.h5', 'df')
print(df2)
Explanation: hdf5 files
End of explanation
df2.dtypes
Explanation: Data types
Show the datatypes of each column:
End of explanation
col1 = range(6)
col2 = np.random.rand(6)
col3 = ['zero','one','two','three','four','five']
col4 = ['blue', 'cow','blue', 'cow','blue', 'cow']
df_types = pd.DataFrame( {'integers': col1, 'floats': col2, 'words': col3, 'cow color': col4} )
print(df_types)
df_types.dtypes
Explanation: We can create dataframes of multiple datatypes:
End of explanation
df_types['cow color'] = df_types['cow color'].astype("category")
df_types.dtypes
Explanation: We can also set the 'cow color' column to a category:
End of explanation
time_stamps = pd.date_range(start='2000-01-01', end='2000-01-20', freq='D') # Define index of time stamps
df1 = pd.DataFrame(np.random.randn(20,4), columns = ['A','B','C','D'], index=time_stamps)
print(df1)
Explanation: Indexing and Setting Data
Pandas does a lot of different operations, here are the meat and potatoes. The following describes the indexing of data, but setting the data is as simple as a reassignment.
End of explanation
df1.head(3) # Show the first n rows, default is 5
df1.tail() # Show the last n rows
Explanation: Head and Tail
Print the beginning and ending entries of a pandas data structure
End of explanation
df1.columns
df1.values
Explanation: We can also separate the metadata (labels, etc) from the data, yielding a numpy-like output.
End of explanation
df1['A'].head() # df1.A.head() is equivalent
Explanation: Indexing Data
Pandas provides the means to index data via named columns, or as numpy like indices. Indexing is [row, column], just as it was in numpy.
Data is visible via column:
End of explanation
df1.A
Explanation: Note that tab completion is enabled for column names:
End of explanation
df1[:2]
Explanation: <div>
<img style="float: left;" src="images/10-01_column-tab.png" width=30%>
</div>
We can specify row ranges:
End of explanation
df1.loc[:'2000-01-5',"A"] # Note that this includes the upper index
Explanation: Label based indexing (.loc)
Slice based on the labels.
End of explanation
df1.iloc[:3,0] # Note that this does not include the upper index like numpy
Explanation: Integer based indexing (.iloc)
Slice based on the index number.
End of explanation
index_timestamp = pd.Timestamp('2000-01-03') # Create a timestamp object to index
df1.at[index_timestamp,"A"] # Index using timestamp (vs string)
Explanation: Fast single element label indexing (.at) - fast .loc
Intended for fast, single indexes.
End of explanation
df1.iat[3,0]
Explanation: Fast single element label indexing (.iat) - fast .iloc
Intended for fast, single indexes.
End of explanation
df1.head()>0.5
Explanation: Logical indexing
A condition is used to select the values within a slice or the entire Pandas object. Using a conditional statement, a true/false DataFrame is produced:
End of explanation
df1[df1>0.5].head() # Note that the values that were 'False' are 'NaN'
Explanation: That matrix can then be used to index the DataFrame:
End of explanation
df_types
bool_series = df_types['cow color'].isin(['blue'])
print(bool_series) # Show the logical indexing
df_types[bool_series] # Index where the values are true
Explanation: Logical indexing via isin
It's also possible to filter via the index value:
End of explanation
df_types.sort_values(by="floats")
Explanation: Sorting by column
End of explanation
df_nan = pd.DataFrame(np.random.rand(6,2), columns = ['A','B'])
df_nan
df_nan['B'] = df_nan[df_nan['B']>0.5] # Prints NaN Where ['B'] <= 0.5
print(df_nan)
Explanation: Dealing with Missing Data
By convention, pandas uses the NaN value to represent missing data. There are a few functions surrounding the handling of NaN values:
End of explanation
df_nan.isnull()
Explanation: Print a logical DataFrame where NaN is located:
End of explanation
df_nan.dropna(how = 'any')
Explanation: Drop all rows with NaN:
End of explanation
df_nan.fillna(value = -1)
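# Other fill strategies exist as well (illustrative): forward-fill propagates the last valid value.
df_nan.fillna(method='ffill')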
Explanation: Replace NaN entries:
End of explanation
df1 = pd.DataFrame(np.zeros([3,3], dtype=np.int))
df1
df2 = pd.concat([df1, df1], axis=0)
df2 = df2.reset_index(drop=True) # Renumber indexing
df2
Explanation: Concatenating and Merging Data
Bringing together DataFrames or Series objects:
Concatenate
End of explanation
newdf = pd.DataFrame({0: [1], 1:[1], 2:[1]})
print(newdf)
df3 = df2.append(newdf, ignore_index=True)
df3
Explanation: Append
Adding an additional group after the first group:
End of explanation
left = pd.DataFrame({'numbers': ['K0', 'K1', 'K2', 'K3'],
'English': ['one', 'two', 'three', 'four'],
'Spanish': ['uno', 'dos', 'tres', 'quatro'],
'German': ['erste', 'zweite','dritte','vierte']})
left
right = pd.DataFrame({'numbers': ['K0', 'K1', 'K2', 'K3'],
'French': ['un', 'deux', 'trois', 'quatre'],
'Afrikaans': ['een', 'twee', 'drie', 'vier']})
right
result = pd.merge(left, right, on='numbers')
result
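# The `how` argument selects the SQL-style join type (illustrative; the default is an inner join):
pd.merge(left, right, on='numbers', how='outer')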
Explanation: SQL-like merging
Pandas can do structured query language (SQL) like merges of data:
End of explanation
dfg = pd.DataFrame({'A': ['clogs','sandals','jellies']*2,
'B': ['socks','footies']*3,
'C': [1,1,1,3,2,2],
'D': np.random.rand(6)})
dfg
Explanation: Grouping Operations
Often, there is a need to summarize the data or change the output of the data to make it easier to work with, especially for categorical data types.
End of explanation
dfg.pivot_table(index=['A','B'], columns=['C'], values='D')
Explanation: Pivot Table
Without changing the data in any way, summarize the output in a different format. Specify the indices, columns, and values:
End of explanation
dfg.stack()
Explanation: Stacking
Column labels can be brought into the rows.
End of explanation
dfg.groupby(['B']).count()
dfg.groupby(['A']).mean()
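# Several aggregations can also be applied at once (illustrative):
dfg.groupby(['A'])['D'].agg(['mean', 'count'])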
Explanation: Groupby
Groupby groups values, creating a Python object to which functions can be applied:
End of explanation
dfg['D'].mean()
Explanation: Operations on Pandas Data Objects
Whether it's the entire data frame or a series within a single dataframe, there are a variety of methods that can be applied. Here's a list of a few helpful ones:
Simple statistics (mean, stdev, etc).
End of explanation
dfg['D']
dfg_Ds = dfg['D'].shift(2)
dfg_Ds
Explanation: Rotation
Note that the values rotated out leave NaN behind:
End of explanation
dfg['D'].div(dfg_Ds )
Explanation: Add, subtract, multiply, divide:
Operations are element-wise:
End of explanation
dfg
dfg['C'].value_counts()
Explanation: Histogram
End of explanation
df_types.describe()
Explanation: Describe
Excluding NaN values, print some descriptive statistics about the collection of values.
End of explanation
df_types.T
Explanation: Transpose
Exchange the rows and columns (flip about the diagonal):
End of explanation
def f(x): # Define function
return x + 1
dfg['C'].apply(f)
Explanation: Applying Any Function to Pandas Data Objects
Pandas objects have methods that allow function to be applied with greater control, namely the .apply function:
End of explanation
dfg['C'].apply(lambda x: x + 1)
Explanation: Lambda functions may also be used
End of explanation
dfg['A'].str.title() # Make the first letter uppercase
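# A couple more vectorized string helpers (illustrative):
dfg['A'].str.contains('sand')
dfg['A'].str.len()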
Explanation: String functions:
Pandas has access to string methods:
End of explanation
n = 100
X = np.linspace(0, 5, n)
Y1,Y2 = np.log((X)**2+2), np.sin(X)+2
dfp = pd.DataFrame({'X' : X, 'Y1': Y1, 'Y2': Y2})
dfp.head()
dfp.plot(x = 'X')
plt.show()
Explanation: Plotting
Pandas exposes the matplotlib library for use.
End of explanation
style_name = 'classic'
plt.style.use(style_name)
dfp.plot(x = 'X')
plt.title('Log($x^2$) and Sine', fontsize=16)
plt.xlabel('X Label', fontsize=16)
plt.ylabel('Y Label', fontsize=16)
plt.show()
mpl.rcdefaults() # Reset matplotlib rc defaults
Explanation: Matplotlib styles are available too:
End of explanation
<END_TASK> |
152 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https
Step1: Loading models
HanLP's workflow is to load a model first; model identifiers are stored in the hanlp.pretrained package, grouped by NLP task.
Step2: Call hanlp.load to load it; the model is downloaded to a local cache automatically:
Step3: Part-of-speech tagging
The input to the POS tagging task is one or more tokenized sentences:
Step4: Note that the two occurrences of "希望" (hope) above receive different tags: one is a noun and the other a verb.
Custom dictionaries
The custom dictionary is a member variable of the POS tagging task; the CTB standard is used as the example:
Step5: Customize the tag of a single word:
Step6: Customize tags based on context:
Python Code:
!pip install hanlp -U
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/pos_stl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fpos_stl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
On Windows, Linux and macOS alike, installing HanLP takes a single command:
End of explanation
import hanlp
hanlp.pretrained.pos.ALL # the language is given by the last field of the identifier or by the corresponding corpus
Explanation: Loading models
HanLP's workflow is to load a model first; model identifiers are stored in the hanlp.pretrained package, grouped by NLP task.
End of explanation
pos = hanlp.load(hanlp.pretrained.pos.CTB9_POS_ELECTRA_SMALL)
Explanation: Call hanlp.load to load it; the model is downloaded to a local cache automatically:
End of explanation
pos(["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"])
Explanation: Part-of-speech tagging
The input to the POS tagging task is one or more tokenized sentences:
End of explanation
print(pos.dict_tags)
Explanation: Note that the two occurrences of "希望" (hope) above receive different tags: one is a noun and the other a verb.
Custom dictionaries
The custom dictionary is a member variable of the POS tagging task; the CTB standard is used as the example:
End of explanation
pos.dict_tags = {'HanLP': 'state-of-the-art-tool'}
pos(["HanLP", "为", "生产", "环境", "带来", "次", "世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"])
Explanation: Customize the tag of a single word:
End of explanation
pos.dict_tags = {('的', '希望'): ('补语成分', '名词'), '希望': '动词'}
pos(["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"])
Explanation: Customize tags based on context:
End of explanation
<END_TASK> |
153 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial Outline
Introduction
Preprerequisites
How does it work?
Authentication
Authentication keys
MongoDB Collection
Starting a Stream
Stream Listener
Connect to a streaming API
Data Access and Analysis
Load results to a DataFrame
Visualization
Introduction
Twitter provides two types of API to access their data
Step1: Authentication keys
Go to https
Step2: MongoDB Collection
Connect to MongoDB and create/get a collection.
Step6: Starting a Stream
We need a listener which should extend tweepy.StreamListener class. There is a number of methods that you can extend to instruct the listener class to perform functionality. Some of the important methods are
Step8: Connect to a streaming API
There are two methods to connect to a stream
Step9: Data Access and Analysis
Now that we have stored all these tweets in a MongoDB collection, let's take a look at one of these tweets
Step10: Load results to a DataFrame
Step11: Checking the highest used words
Step12: Visualization | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import tweepy
import matplotlib.pyplot as plt
import pymongo
import ipywidgets as wgt
from IPython.display import display
from sklearn.feature_extraction.text import CountVectorizer
import re
from datetime import datetime
%matplotlib inline
Explanation: Tutorial Outline
Introduction
Prerequisites
How does it work?
Authentication
Authentication keys
MongoDB Collection
Starting a Stream
Stream Listener
Connect to a streaming API
Data Access and Analysis
Load results to a DataFrame
Visualization
Introduction
Twitter provides two types of API to access their data:
RESTful API: Used to get data about existing data objects like statuses "tweets", user, ... etc
Streaming API: Used to get live statuses "tweets" as they are sent
The reason why you would like to use streaming API:
Capture large amount of data because RESTful API has limited access to older data
Real-time analysis like monitoring social discussion about a live event
In house archive like archiving social discussion about your brand(s)
AI response system for a twitter account like automated reply and filing questions or providing answers
Prerequisites
Python 2 or 3
Jupyter with IPyWidgets
Pandas
Numpy
Matplotlib
MongoDB installation
Pymongo
Scikit-learn
Tweepy
Twitter account
How does it work?
Twitter streaming API can provide data through a streaming HTTP response. This is very similar to downloading a file where you read a number of bytes and store it to disk and repeat until the end of file. The only difference is this stream is endless. The only things that could stop this stream are:
If you closed your connection to the streaming response
If your connection speed is not capable of receiving data and the servers buffer is filling up
This means that this process will be using the thread that it was launched from until it is stopped. In production, you should always start this in a different thread or process to make sure your software doesn't freeze until you stop the stream.
Authentication
You will need four keys from the Twitter developer site to start using the streaming API. First, let's import some important libraries for dealing with the Twitter API, data analysis, data storage ... etc
End of explanation
api_key = "yP0yoCitoUNgD63ebMerGyJaE" # <---- Add your API Key
api_secret = "kLO5YUtlth3cd4lOHLy8nlLHW5npVQgUfO4FhsyCn6wCMIz5E6" # <---- Add your API Secret
access_token = "259862037-iMXNjfL8JBApm4LVcdfwc3FcMm7Xta4TKg5cd44K" # <---- Add your access token
access_token_secret = "UIgh08dtmavzlvlWWukIXwN5HDIQD0wNwyn5sPzhrynBf" # <---- Add your access token secret
auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
Explanation: Authentication keys
Go to https://apps.twitter.com/
Create an App (if you don't have one yet)
Grant read-only access to your account
Copy the four keys and paste them here:
End of explanation
col = pymongo.MongoClient()["tweets"]["StreamingTutorial"]
col.count()
Explanation: MongoDB Collection
Connect to MongoDB and create/get a collection.
End of explanation
class MyStreamListener(tweepy.StreamListener):
counter = 0
def __init__(self, max_tweets=1000, *args, **kwargs):
self.max_tweets = max_tweets
self.counter = 0
super().__init__(*args, **kwargs)
def on_connect(self):
self.counter = 0
self.start_time = datetime.now()
def on_status(self, status):
# Increment counter
self.counter += 1
# Store tweet to MongoDB
col.insert_one(status._json)
if self.counter % 1 == 0:
value = int(100.00 * self.counter / self.max_tweets)
mining_time = datetime.now() - self.start_time
progress_bar.value = value
            html_value = '<span class="label label-primary">Tweets/Sec: %.1f</span>' % (self.counter / max([1,mining_time.seconds]))
            html_value += '<span class="label label-success">Progress: %.1f%%</span>' % (self.counter / self.max_tweets * 100.0)
            html_value += '<span class="label label-info">ETA: %.1f Sec</span>' % ((self.max_tweets - self.counter) / (self.counter / max([1,mining_time.seconds])))
wgt_status.value = html_value
#print("%s/%s" % (self.counter, self.max_tweets))
if self.counter >= self.max_tweets:
myStream.disconnect()
print("Finished")
print("Total Mining Time: %s" % (mining_time))
print("Tweets/Sec: %.1f" % (self.max_tweets / mining_time.seconds))
progress_bar.value = 0
myStreamListener = MyStreamListener(max_tweets=100)
myStream = tweepy.Stream(auth = api.auth, listener=myStreamListener)
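# Optional sketch (not used below): errors can also be handled by defining on_error on the
# listener; in tweepy, returning False from on_error disconnects the stream.
class SafeListener(tweepy.StreamListener):
    def on_error(self, status_code):
        print("Stream error: %s" % status_code)
        return False  # disconnect on any error (e.g. 420 rate limiting)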
Explanation: Starting a Stream
We need a listener which should extend the tweepy.StreamListener class. There are a number of methods that you can extend to instruct the listener class to perform functionality. Some of the important methods are:
on_status(self, status): This will pass a status "tweet" object when a tweet is received
on_data(self, raw_data): Called when any data is received; the raw data will be passed
on_error(self, status_code): Called when you get a response with code other than 200 (ok)
Stream Listener
End of explanation
keywords = ["Jupyter",
"Python",
"Data Mining",
"Machine Learning",
"Data Science",
"Big Data",
"DataMining",
"MachineLearning",
"DataScience",
"BigData",
"IoT",
"#R",
]
# Visualize a progress bar to track progress
progress_bar = wgt.IntProgress(value=0)
display(progress_bar)
wgt_status = wgt.HTML(value='<span class="label label-primary">Tweets/Sec: 0.0</span>')
display(wgt_status)
# Start a filter with an error counter of 20
for error_counter in range(20):
try:
myStream.filter(track=keywords)
print("Tweets collected: %s" % myStream.listener.counter)
print("Total tweets in collection: %s" % col.count())
break
except:
print("ERROR# %s" % (error_counter + 1))
Explanation: Connect to a streaming API
There are two methods to connect to a stream:
filter(follow=None, track=None, async=False, locations=None, stall_warnings=False, languages=None, encoding='utf8', filter_level=None)
firehose(count=None, async=False)
Firehose captures everything. You should make sure that you have connection speed that can handle the stream and you have the storage capacity that can store these tweets at the same rate. We cannot really use firehose for this tutorial but we'll be using filter.
You have to specify one of two things to filter:
follow: A list of user ID to follow. This will stream all their tweets, retweets, and others retweeting their tweets. This doesn't include mentions and manual retweets where the user doesn't press the retweet button.
track: A string or a list of strings to be used for filtering. If you use multiple words separated by spaces, they are treated as an AND condition. If you use multiple words in a string separated by commas, or pass a list of words, they are treated as an OR condition.
Note: track is case insensitive.
What to track?
I want to collect all tweets that contains any of these words:
Jupyter
Python
Data Mining
Machine Learning
Data Science
Big Data
IoT
R
This could be done with a string or a list. It is easier to do it with a list to make your code clearer to read.
End of explanation
col.find_one()
Explanation: Data Access and Analysis
Now that we have stored all these tweets in a MongoDB collection, let's take a look at one of these tweets
End of explanation
dataset = [{"created_at": item["created_at"],
"text": item["text"],
"user": "@%s" % item["user"]["screen_name"],
"source": item["source"],
} for item in col.find()]
dataset = pd.DataFrame(dataset)
dataset
Explanation: Load results to a DataFrame
End of explanation
cv = CountVectorizer()
count_matrix = cv.fit_transform(dataset.text)
word_count = pd.DataFrame(cv.get_feature_names(), columns=["word"])
word_count["count"] = count_matrix.sum(axis=0).tolist()[0]
word_count = word_count.sort_values("count", ascending=False).reset_index(drop=True)
word_count[:50]
Explanation: Checking the most frequently used words
End of explanation
def get_source_name(x):
value = re.findall(pattern="<[^>]+>([^<]+)</a>", string=x)
if len(value) > 0:
return value[0]
else:
return ""
dataset["source_name"] = dataset.source.apply(get_source_name)
source_counts = dataset["source_name"].value_counts().sort_values()[-10:]
bottom = [index for index, item in enumerate(source_counts.index)]
plt.barh(bottom, width=source_counts, color="orange", linewidth=0)
y_labels = ["%s %.1f%%" % (item, 100.0*source_counts[item]/len(dataset)) for index,item in enumerate(source_counts.index)]
plt.yticks(np.array(bottom)+0.4, y_labels)
source_counts
Explanation: Visualization
End of explanation
<END_TASK> |
154 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Ejemplo cuencas
En el siguiente ejemplo se presentan las funcionalidades básicas de la herramienta wmf.Stream y wmf.Basin
dentro de los temas tocados se presenta
Step1: Este es como se leen los mapas de direcciones y dem para el trazado de cuencas y corrientes
Step2: Trazado de corrientes
Importantes para determinar por donde se acumula el flujo y por lo tanto por donde se debe trazar la cuenca, estos elementos operan más como una guia que como un resultado final, pueden ser usados directamente para el trazado de las cuencas acudiendo a su propiedad structure, en la cual en las dos primeras entradas se alojan la coordenada X y la coordenada Y
Trazado de una corriente
Step3: El perfil de una corriente puede ser utilizado como punto de referencia para la búsqueda de puntos de trazado
Step4: Trazado de cuenca
Trazado de cuencas mediante el objeto Basin
La cuenca se traza a partir de un par de coordenadas, de un DEM y de un DIR, se le pueden agregar parametros como el nombre o el umbral para producir corrientes, pero este es opcional, este tipo de cuencas no se dejan simular, para ello se debe usar la herramienta SimuBasin
Step5: La ultima cuenca tiene un conteo de celdas igual a 1, lo cual significa que no se ha trazado nada y que por esta celda no pasa ninguna otra celda, por lo tanto esto no es una cuenca, y no debe ser usado para ningún tipo de cálculos, en la siguiente línea este elemento es eliminado
Step6: Balance sobre cuencas
EL objeto Basin Trae por defecto funciones para el cálculño de propiedades geomorfológicas y para el cálculo de caudales mediante el método de balances de largo plazo. a continuación se presenta la funcionalidad del mismo.
Step7: En la Figura se presenta el caudal medio estimado para cada elemento de la cuenca, incluidas celdas en donde no se considera la presencia de red hídrica.
Cuando se ha calculado el caudal medi tambien se ha calculado la evaporación sobre la cuenca, esta se puede ver en la variable cuenca.CellETR
Step8: La figura anterior ha sido guardada en el disco mediante el comando ruta = 'Caldas_ETR.png', en este caso ha sido sobre el directorio de trabajo actual, si este se cambia, se cambia el directorio donde se guarda.
El módulo permite estimar caudales máximos y mínimos mediante rtegionalización de caudales extremos mediante la ecuación
Step9: Cada entrada en Qmax y Qmin corresponde al periodo de retorno Tr [2.33, 5, 10, 25, 50, 100], estos pueden ser cambiados al interior de la función al cambiar la propiedad Tr en el momento en que esta es invocada.
Step10: Guardado en shp
Step11: Geomorfologia
Aca se explica un poco las funciones que hay de geomorfología | <ASSISTANT_TASK:>
Python Code:
# Watershed Modelling Framework (WMF) package for working with basins.
from wmf import wmf
import numpy as np  # needed below for the np.where search along the stream profile
Explanation: Watershed example
The following example presents the basic functionality of the wmf.Stream and wmf.Basin tools.
The topics covered are:
Stream tracing.
Stream profiles.
Basin delineation.
Water balances for discharge estimation.
Geomorphological analysis of basins.
End of explanation
# Read the DEM and DIR (flow-direction) rasters
DEM = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dem_corr.tif',isDEMorDIR=True, dxp=30.0)
DIR = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dirAMVA.tif',isDEMorDIR=True, dxp= 30.0)
wmf.cu.nodata=-9999.0; wmf.cu.dxp=30.0
DIR[DIR<=0]=wmf.cu.nodata.astype(int)
DIR=wmf.cu.dir_reclass(DIR,wmf.cu.ncols,wmf.cu.nrows)
Explanation: This is how the flow-direction (DIR) and DEM maps are read for delineating basins and streams
End of explanation
st = wmf.Stream(-75.618,6.00,DEM=DEM,DIR=DIR,name ='Rio Medellin')
st.structure
st.Plot_Profile()
Explanation: Stream tracing
Important for determining where flow accumulates and therefore where the basin should be delineated. These elements work more as a guide than as a final result; they can be used directly for basin delineation through their structure property, whose first two entries hold the X and Y coordinates
Tracing a stream
End of explanation
# With this search we locate the coordinates that satisfy the property of lying at a
# distance from the outlet of between 10000 and 10100 metres.
np.where((st.structure[3]>10000) & (st.structure[3]<10100))
Explanation: The profile of a stream can be used as a reference point for finding outlet points for delineation
End of explanation
# The coordinates at entry 289 are:
print st.structure[0,289]
print st.structure[1,289]
# The basin can be delineated using the coordinates implicitly (as in this example), or in an
# explicit way as done in the second line of code.
cuenca = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo',stream=st)
# In this second line we delineate a basin with coordinates that are not exact and may not lie on
# the stream; this is corrected by passing the stream to the tracer via the stream keyword, which
# takes the previously obtained stream object as input.
cuenca2 = wmf.Basin(-75.6422,6.082,DEM,DIR,name='ejemplo',stream=st)
# Error case: here the stream argument is not passed, so the basin is delineated at the raw
# coordinates, which will most likely not give a valid basin.
cuenca3 = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo')
# Print the number of cells of each delineated basin, to check that there is indeed a
# difference between them due to the difference in coordinates.
print cuenca.ncells
print cuenca2.ncells
print cuenca3.ncells
Explanation: Basin delineation
Delineating basins with the Basin object
The basin is delineated from a pair of coordinates, a DEM and a DIR; parameters such as a name or the stream-forming threshold can be added, but these are optional. This kind of basin cannot be simulated; for that, the SimuBasin tool must be used
End of explanation
del(cuenca3)
Explanation: The last basin has a cell count equal to 1, which means nothing was delineated and no other cell drains through this cell; therefore it is not a basin and must not be used for any calculation. In the next line this element is deleted:
End of explanation
# Water balance over a basin assuming an annual precipitation of 2100 mm/year over the whole basin
cuenca.GetQ_Balance(2100)
# The long-term balance variable is computed for every cell of the basin and stored in cuenca.CellQmed
cuenca.Plot_basin(cuenca.CellQmed)
Explanation: Water balance over basins
The Basin object ships with functions for computing geomorphological properties and for estimating discharge via the long-term water balance method. Its functionality is shown below.
End of explanation
# Plot of the evaporation over the Caldas basin
cuenca.Plot_basin(cuenca.CellETR, extra_lat= 0.001, extra_long= 0.001, lines_spaces= 0.02,
ruta = 'Caldas_ETR.png')
Explanation: The figure shows the mean discharge estimated for every element of the basin, including cells where no stream network is assumed.
Once the mean discharge has been computed, the evaporation over the basin has also been computed; it can be inspected in the variable cuenca.CellETR
End of explanation
# Estimate maximum discharges; Gumbel is used by default, log-normal is also available
Qmax = cuenca.GetQ_Max(cuenca.CellQmed)
Qmax2 = cuenca.GetQ_Max(cuenca.CellQmed, Tr= [3, 15])
# Estimate minimum discharges; Gumbel is used by default, log-normal is also available
Qmin = cuenca.GetQ_Min(cuenca.CellQmed)
Qmin[Qmin<0]=0
Explanation: The previous figure was saved to disk via the ruta = 'Caldas_ETR.png' argument, in this case in the current working directory; if that is changed, the directory where the figure is saved changes too.
The module can estimate maximum and minimum discharges via regionalization of extreme flows using the equations:
$Q_{max}(T_r) = \widehat{Q}_{max} + K_{dist}(T_r) \sigma_{max}$
$Q_{min}(T_r) = \widehat{Q}_{min} - K_{dist}(T_r) \sigma_{min}$
End of explanation
# Plot of the maximum discharge for a return period of 2.33 years
cuenca.Plot_basin(Qmax[0])
# Plot of the maximum discharge for a return period of 100 years
cuenca.Plot_basin(Qmax[5])
Explanation: Each entry in Qmax and Qmin corresponds to a return period Tr in [2.33, 5, 10, 25, 50, 100]; these can be changed inside the function by changing the Tr property when it is invoked.
End of explanation
cuenca.Save_Basin2Map('Cuenca.kml',DriverFormat='kml')
cuenca.Save_Net2Map('Red.kml',DriverFormat='kml',qmed=cuenca.CellQmed)
Explanation: Saving to shp:
Both the basin and the stream network can be saved to shp so they can be viewed in any GIS viewer; they can also be saved to other file types such as kml.
End of explanation
# Compute channel-by-channel geomorphology
cuenca.GetGeo_Cell_Basics()
# Generic geomorphology report, stored in cuenca.GeoParameters and cuenca.Tc
cuenca.GetGeo_Parameters()
cuenca.GeoParameters
# Concentration times
cuenca.Tc
cuenca.Plot_Tc()
cuenca.GetGeo_IsoChrones(1.34)
cuenca.Plot_basin(cuenca.CellTravelTime)
cuenca.Plot_Travell_Hist()
cuenca.GetGeo_Ppal_Hipsometric()
cuenca.PlotPpalStream()
cuenca.Plot_Hipsometric()
Explanation: Geomorphology
A brief overview of the geomorphology functions that are available
End of explanation
<END_TASK> |
155 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
HTTPX
<img src="https
Step1: Using httpbin.org | <ASSISTANT_TASK:>
Python Code:
from datetime import datetime
Explanation: HTTPX
<img src="https://raw.githubusercontent.com/encode/httpx/master/docs/img/butterfly.png" alt="HTTPX Logo" style="width: 400px;"/>
https://www.python-httpx.org/
A next-generation HTTP client for Python.
HTTPX is a fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2.
Alternatives: aiohttp, asks
End of explanation
import requests
def requests_get(index=None):
response = requests.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
requests_get()
before = datetime.now()
for index in range(0, 5):
requests_get(index)
after = datetime.now()
print(f"total time: {after - before}")
import httpx
def httpx_get(index=None):
response = httpx.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
httpx_get()
before = datetime.now()
for index in range(0, 5):
httpx_get(index)
after = datetime.now()
print(f"total time: {after - before}")
async with httpx.AsyncClient() as client:
response = await client.get('https://httpbin.org/delay/1')
print(response)
async def async_httpx_get(index=None):
async with httpx.AsyncClient() as client:
response = await client.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
await async_httpx_get()
before = datetime.now()
for index in range(0, 5):
await async_httpx_get(index)
after = datetime.now()
print(f"total time: {after - before}")
many_gets = tuple(async_httpx_get(index) for index in range(0,5))
import asyncio
before = datetime.now()
await asyncio.gather(*many_gets)
after = datetime.now()
print(f"total time: {after - before}")
semaphore = asyncio.Semaphore(3)
async def async_semaphore_httpx_get(index=None):
async with semaphore:
async with httpx.AsyncClient() as client:
response = await client.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
semaphore_many_gets = tuple(
async_semaphore_httpx_get(index) for index in range(0,10))
before = datetime.now()
await asyncio.gather(*semaphore_many_gets)
after = datetime.now()
print(f"total time: {after - before}")
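# Illustrative extra: httpx also takes an explicit timeout; a 3 s delay with a 1 s limit
# is expected to raise a timeout error.
try:
    httpx.get("https://httpbin.org/delay/3", timeout=1.0)
except httpx.TimeoutException as exc:
    print(f"request timed out: {exc!r}")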
Explanation: Using httpbin.org:
https://httpbin.org/delay/1
End of explanation
<END_TASK> |
156 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<table>
<tr align=left><td><img align=left src="../images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman, Based on ipython notebook by Kyle Mandli from his course [Introduction to numerical methods](https
Step1: Root Finding
GOAL
Step2: which simply replaces any use of f(x) with x - cos(x), for example
Step3: or let's plot this for $x\in [-10,10]$
Step4: Which, in this range, has a single root somewhere in the bracket $0<x<5$. The question is how to find it?
The "right answer"
The correct approach is to use (and understand) a good algorithm for bracketed root finding of functions of a single variable of which scipy.optimize provides several. Here we will use the brentq algorithm which is a workhorse for rootfinding as it is guaranteed to find at least one root given a proper bracket (where $f$ changes sign). The method is a generalized secant method so doesn't require derivatives of $f$ and has super-linear convergence (a simple bisection scheme also guarantees a root but has only linear convergence).
Step5: Successive Substitution (Fixed point iteration)
A more naive approach is to rewrite $f(x) = 0$ as a fixed point iteration
$$ x = g(x) $$
where $g(x)$ is another function such that when $x$ satisfies this equation, it is a root of $f$. For example here we could choose $g(x) = \cos(x)$.
We can turn this equation into an iterative method by setting $x_0 =0.$ and forming a sequence of numbers
$$ x_n = g(x_{n-1})$$
and hope it converges. Algorithmically we could do something like
Step6: and plot out the residual.
Step7: which oscillates towards the value of the root. If we look at the absolute value of the error, we see that it converges linearly i.e.
$$ |e_{n+1}| = K|e_{n}| $$
Where $K$ is a value that analysis shows should be $K=|g'(x^*)|$ where $x^*$ is the root. For our problem $g'(x) = -\sin(x)$ and $K=0.673612029183$. Because $K<1$ the fixed point iteration is a "contraction" and the error eventually $\rightarrow 0$ as $n\rightarrow\infty$. We demonstrated that this works for this problem graphically
Step8: and numerically by comparing the ratio of $|f(x_{n+1})|/|f(x_n)|$
Step9: Newton's Method
A potentially more efficient method for Non-linear problems is Newton's method which can be considered another fixed-point iteration but promises much better convergence (near the fixed point, for simple roots).
The basic idea is that given some initial guess $x_n$ such that $f(x_n) \neq 0$, there is some correction $\delta_n$ such that $f(x_n + \delta_n) = 0$. Expanding $f$ in a Taylor series around $x_n$ we get the linear approximation
$$ f(x_n + \delta_n) \approx f(x_n) + f'(x_n)\delta_n + O(\delta_n^2) = 0$$
neglecting terms of order $\delta_n^2$ we can solve for the correction that would be exact if the problem were linear, i.e.
$$ \delta_n = -f(x_n)/f'(x_n) $$
then the next iterate is given by
$$ x_{n+1} = x_{n} + \delta_n $$
and iterate until the residual $|f(x)| < \mathrm{tol}$ for some tolerance.
Algorithmically...
Step10: Analysis shows that near a simple root, Newton's method converges quadratically i.e.
$$|e_{n+1}| = C |e_n|^2$$
thus doubling the number of significant digits per iteration. This analysis is only valid near the root and in general, Newton's method can be highly unstable (for example if it finds a region where $f'(x)$ is close to zero), and in general requires some additional controls to maintain a bracket.
Comparing the two methods for this problem, however, shows that Newton's method converges quadratically, while the fixed point iteration converges linearly | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
Explanation: <table>
<tr align=left><td><img align=left src="../images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman, Based on ipython notebook by Kyle Mandli from his course [Introduction to numerical methods](https://github.com/mandli/intro-numerical-methods)</td>
</table>
End of explanation
f = lambda x: x - numpy.cos(x)
Explanation: Root Finding
GOAL: Find where $f(x) = 0$.
Example: A simple non-linear problem
let $f(x) = x - \cos(x)$, find values of $x$ where $f(x)=0$.
Because $f$ is non-linear it is possible that it has no roots, a finite number of roots or even an infinite number of roots. Thus the first thing one should do is try to visualize $f$ over a range of $x$ to see, qualitatively, whether there are any zero crossings and to identify brackets where $f$ changes sign.
Here we will introduce our function using an "inlined-function" or "lambda function" in python
End of explanation
print 'f(0.)=',f(0.)
Explanation: which simply replaces any use of f(x) with x - cos(x), for example
End of explanation
x = numpy.linspace(-10.,10., 100)
plt.figure()
plt.plot(x,f(x),'b')
plt.hold(True)
plt.plot(x,numpy.zeros(x.shape),'r--')
plt.xlabel("x")
plt.ylabel("f(x)")
plt.title("A plot")
plt.show()
Explanation: or let's plot this for $x\in [-10,10]$
End of explanation
from scipy.optimize import brentq
# give a bracket [a,b] such that f(a)*f(b) <= 0
a = 0.
b = 5.
x0 = brentq(f,a,b)
print
print "root x0 = {0}, in bracket [{1},{2}]".format(x0,a,b)
print "residual f(x0) = {0}".format(f(x0))
plt.figure()
plt.plot(x,f(x),'b')
plt.hold(True)
plt.plot(x,numpy.zeros(x.shape),'r--')
plt.plot(x0,f(x0),'go')
plt.xlabel("x")
plt.ylabel("f(x)")
plt.title("A root at $x_0={0}$".format(x0))
plt.show()
Explanation: Which, in this range, has a single root somewhere in the bracket $0<x<5$. The question is how to find it?
The "right answer"
The correct approach is to use (and understand) a good algorithm for bracketed root finding of functions of a single variable of which scipy.optimize provides several. Here we will use the brentq algorithm which is a workhorse for rootfinding as it is guaranteed to find at least one root given a proper bracket (where $f$ changes sign). The method is a generalized secant method so doesn't require derivatives of $f$ and has super-linear convergence (a simple bisection scheme also guarantees a root but has only linear convergence).
End of explanation
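For comparison, the bracketed bisection scheme mentioned above takes only a few lines; the following sketch is an illustration and not part of the original notebook:
def bisect(func, a, b, tol=1.e-12, maxit=200):
    # assumes func(a) and func(b) have opposite signs, i.e. [a,b] brackets a root
    fa, fb = func(a), func(b)
    for _ in range(maxit):
        c = 0.5*(a + b)
        fc = func(c)
        if fc == 0. or 0.5*(b - a) < tol:
            break
        if fa*fc < 0.:
            b, fb = c, fc   # root lies in [a,c]
        else:
            a, fa = c, fc   # root lies in [c,b]
    return c

# each step halves the bracket, hence only linear convergence
x0_bisect = bisect(f, 0., 5.)
print(x0_bisect)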
g = lambda x: numpy.cos(x)
xn = numpy.zeros(21)
for i in xrange(len(xn)-1):
print "step {0}: x = {1}, residual f(x) = {2}".format(i,xn[i], f(xn[i]))
xn[i+1] = g(xn[i])
Explanation: Successive Substitution (Fixed point iteration)
A more naive approach is to rewrite $f(x) = 0$ as a fixed point iteration
$$ x = g(x) $$
where $g(x)$ is another function such that when $x$ satisfies this equation, it is a root of $f$. For example here we could choose $g(x) = \cos(x)$.
We can turn this equation into an iterative method by setting $x_0 =0.$ and forming a sequence of numbers
$$ x_n = g(x_{n-1})$$
and hope it converges. Algorithmically we could do something like
End of explanation
plt.figure()
plt.plot(range(len(xn)),f(xn),'b-o')
plt.xlabel('Iterations')
plt.ylabel('Residual $f(x)$')
plt.title('Convergence of fixed point iteration)')
plt.show()
Explanation: and plot out the residual.
End of explanation
plt.figure()
plt.semilogy(range(len(xn)),numpy.abs(f(xn)),'b-o')
plt.xlabel('Iterations')
plt.ylabel('Residual $|f(x)|$')
plt.title('Convergence of fixed point iteration')
plt.show()
Explanation: which oscillates towards the value of the root. If we look at the absolute value of the error, we see that it converges linearly i.e.
$$ |e_{n+1}| = K|e_{n}| $$
Where $K$ is a value that analysis shows should be $K=|g'(x^*)|$ where $x^*$ is the root. For our problem $g'(x) = -\sin(x)$ and $K=0.673612029183$. Because $K<1$ the fixed point iteration is a "contraction" and the error eventually $\rightarrow 0$ as $n\rightarrow\infty$. We demonstrated that this works for this problem graphically
End of explanation
print
for i in range(len(xn)-1):
print 'Step = {0}, K={1}'.format((i+1),numpy.abs(f(xn[i+1]))/numpy.abs(f(xn[i])))
gprime = lambda x: -numpy.sin(x)
print
print "|g'(x0)| = {0}".format(numpy.abs(gprime(x0)))
Explanation: and numerically by comparing the ratio of $|f(x_{n+1})|/|f(x_n)|$
End of explanation
fprime = lambda x: 1. + numpy.sin(x)
xnn = numpy.zeros(10)
print "\nNewton's Method\n"
i = 0
tol = 1.e-16
while numpy.abs(f(xnn[i])) > tol:
print "step {0}: x = {1}, residual f(x) = {2}".format(i,xnn[i], f(xnn[i]))
xnn[i+1] = xnn[i] - f(xnn[i])/fprime(xnn[i])
i += 1
imax = i
xnn = xnn[:imax]
Explanation: Newton's Method
A potentially more efficient method for Non-linear problems is Newton's method which can be considered another fixed-point iteration but promises much better convergence (near the fixed point, for simple roots).
The basic idea is that given some initial guess $x_n$ such that $f(x_n) \neq 0$, there is some correction $\delta_n$ such that $f(x_n + \delta_n) = 0$. Expanding $f$ in a Taylor series around $x_n$ we get the linear approximation
$$ f(x_n + \delta_n) \approx f(x_n) + f'(x_n)\delta_n + O(\delta_n^2) = 0$$
neglecting terms of order $\delta_n^2$ we can solve for the correction that would be exact if the problem were linear, i.e.
$$ \delta_n = -f(x_n)/f'(x_n) $$
then the next iterate is given by
$$ x_{n+1} = x_{n} + \delta_n $$
and iterate until the residual $|f(x)| < \mathrm{tol}$ for some tolerance.
Algorithmically...
End of explanation
plt.figure()
plt.semilogy(range(len(xn)),numpy.abs(f(xn)),'b-o',label='fixed point')
plt.hold(True)
plt.semilogy(range(len(xnn)),numpy.abs(f(xnn)),'r-o',label='newton')
plt.xlabel('Iterations')
plt.ylabel('Residual $|f(x)|$')
plt.legend(loc='best')
plt.title('Comparison of Fixed point iteration to Newtons Method')
plt.show()
Explanation: Analysis shows that near a simple root, Newton's method converges quadratically i.e.
$$|e_{n+1}| = C |e_n|^2$$
thus doubling the number of significant digits per iteration. This analysis is only valid near the root and in general, Newton's method can be highly unstable (for example if it finds a region where $f'(x)$ is close to zero), and in general requires some additional controls to maintain a bracket.
Comparing the two methods for this problem, however, shows that Newton's method converges quadratically, while the fixed point iteration converges linearly
End of explanation
<END_TASK> |
157 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<center>
<img src="../../img/ods_stickers.jpg">
Open Machine Learning Course. Session #3
<center>Author of the material
Step1: Here we have the question text, then a tab character, and then the question tags
Step2: Note that I no longer want to load data of this size into RAM; for as long as possible I will rely on the efficient UNIX utilities – head, tail, wc, cat, cut and the like.
3. Data preprocessing
Let's select from our data all questions tagged javascript, java, python, ruby, php, c++, c#, go, scala or swift and prepare a training set in the Vowpal Wabbit format. We will solve a 10-class classification problem over the listed tags.
As we can see, each question may have several tags, but we will simplify the task: for each question we keep exactly one of the listed tags, or ignore the question if it has none of them.
In general, though, VW does support multilabel classification (the --multilabel_oaa argument).
<br>
<br>
Implement the data-preparation code as a separate file preprocess.py. It must select the lines that contain the listed tags and rewrite them into a separate file in the Vowpal Wabbit format. Details
Step3: You should end up with exactly this number of lines – 4389054. As you can see, the 10 GB were processed in roughly a minute and a half on my machine.
Step4: Split the sample into training, validation and test parts of equal size – 1463018 lines in each file. Do not shuffle: the first 1463018 lines go to the training part stackoverflow_train.vw, the last 1463018 to the test part stackoverflow_test.vw, and the remaining ones to the validation part stackoverflow_valid.vw.
Also save the label vectors for the validation and test sets into the separate files stackoverflow_valid_labels.txt and stackoverflow_test_labels.txt.
The head, tail, split, cat and cut utilities will help here.
Step5: 4. Training and validating models
Train Vowpal Wabbit on the stackoverflow_train.vw sample 9 times, iterating over the parameters passes (1,3,5) and ngram (1,2,3).
Set the remaining parameters as follows
Step6: <font color='red'> Question 1.</font> Which combination of parameters gives the highest accuracy on the validation set stackoverflow_valid.vw?
- Bigrams and 3 passes over the data
- Trigrams and 5 passes over the data
- Bigrams and 1 pass over the data <--
- Unigrams and 1 pass over the data
Check the best model (by validation accuracy) on the test set.
Step7: <font color='red'> Question 2.</font> How do the accuracies of the best model (by validation accuracy) compare on the validation and test sets? (Here % means a percentage point, i.e. a drop from 50% to 40% is 10%, not 20%.)
- About 2% lower on the test set
- About 3% lower on the test set
- The results are almost identical – they differ by less than 0.5% <--
Train VW with the parameters selected on the validation set, now on the union of the training and validation sets. Compute the accuracy on the test set. | <ASSISTANT_TASK:>
Python Code:
!head -1 hw8_data/stackoverflow.10kk.tsv
!head -1 hw8_data/stackoverflow_10mln.tsv
Explanation: <center>
<img src="../../img/ods_stickers.jpg">
Open Machine Learning Course. Session #3
<center>Author of the material: Yury Kashnitsky, research programmer at Mail.Ru Group
<center> Homework assignment #8
<center> Vowpal Wabbit for classifying question tags on Stackoverflow
Plan
1. Introduction
2. Data description
3. Data preprocessing
4. Training and validating models
5. Conclusion
1. Introduction
In this assignment you will do roughly what I do every week at Mail.Ru Group: train models on a sample of several gigabytes. The assignment can also be done on Windows with Python, but I recommend working under a *NIX system (for example, via Docker) and making active use of bash.
A bit of snobbery (sorry, but it is true): if you want to work at the world's best ML companies, you will need experience with bash under UNIX anyway.
Web form for the answers.
To complete the assignment you need Vowpal Wabbit installed (already present in the course Docker container, see the instructions in the Wiki of our course repository) and roughly 70 GB of disk space. I tested the solution not on some supercomputer but on a MacBook Pro 2015 (8 cores, 16 GB of RAM), and the heaviest model trained in about 12 minutes, so the assignment is perfectly doable on modest hardware. And if you ever plan to rent Amazon servers, you can try doing so now.
Helpful materials:
- the interactive CodeAcademy tutorial on UNIX command-line utilities (roughly an hour to an hour and a half)
- an article on how to rent a machine on Amazon (again: not required for this assignment, but good experience if you are doing it for the first time)
2. Data description
There are 10 GB of questions from StackOverflow – download and unpack the archive.
The data format is simple:<br>
<center>question text (words separated by spaces) TAB question tags (separated by spaces)
Here TAB is the tabulation character.
An example of the first record in the sample:
End of explanation
%%time
!wc -l stackoverflow_10mln.tsv
%%time
!wc -l hw8_data/stackoverflow.10kk.tsv
Explanation: Here we have the question text, then a tab character, and then the question tags: css, css3 and css-selectors. In total the sample contains 10 million such questions.
End of explanation
import os
from tqdm import tqdm
from time import time
import numpy as np
from sklearn.metrics import accuracy_score
Explanation: Note that I no longer want to load data of this size into RAM; for as long as possible I will rely on the efficient UNIX utilities – head, tail, wc, cat, cut and the like.
3. Data preprocessing
Let's select from our data all questions tagged javascript, java, python, ruby, php, c++, c#, go, scala or swift and prepare a training set in the Vowpal Wabbit format. We will solve a 10-class classification problem over the listed tags.
As we can see, each question may have several tags, but we will simplify the task: for each question we keep exactly one of the listed tags, or ignore the question if it has none of them.
In general, though, VW does support multilabel classification (the --multilabel_oaa argument).
<br>
<br>
Implement the data-preparation code as a separate file preprocess.py. It must select the lines that contain the listed tags and rewrite them into a separate file in the Vowpal Wabbit format. Details:
- the script must work with command-line arguments: the paths to the input and output files
- lines are processed one at a time (tqdm can be used to count iterations)
- if a line contains no tab characters, or more than one, consider the line corrupted and skip it
- otherwise count how many tags from the list javascript, java, python, ruby, php, c++, c#, go, scala and swift the line has. If exactly one, write the line to the output file in the VW format: label | text, where label is a number from 1 to 10 (1 – javascript, ... 10 – swift). Skip the lines where the number of tags of interest is more or less than one
- colons and vertical bars must be removed from the question text if present – these are special characters in VW
End of explanation
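A minimal sketch of what such a preprocess.py could look like (an illustration only, not the author's original script; the label order 1 – javascript, ..., 10 – swift follows the list above):
import sys
from tqdm import tqdm

TAGS = {'javascript': 1, 'java': 2, 'python': 3, 'ruby': 4, 'php': 5,
        'c++': 6, 'c#': 7, 'go': 8, 'scala': 9, 'swift': 10}

def main(inp_file, out_file):
    with open(inp_file, encoding='utf-8') as inp, open(out_file, 'w', encoding='utf-8') as out:
        for line in tqdm(inp):
            parts = line.strip().split('\t')
            if len(parts) != 2:                 # no TAB or more than one TAB -> corrupted line
                continue
            text, tags = parts
            labels = [TAGS[tag] for tag in tags.split() if tag in TAGS]
            if len(labels) != 1:                # keep questions with exactly one tag of interest
                continue
            text = text.replace(':', '').replace('|', '')   # ':' and '|' are special in VW
            out.write('{} | {}\n'.format(labels[0], text))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])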
!python preprocess.py hw8_data/stackoverflow.10kk.tsv hw8_data/stackoverflow.vw
!wc -l hw8_data/stack.vw
!python preprocess.py stackoverflow_10mln.tsv stackoverflow.vw
Explanation: You should end up with exactly this number of lines – 4389054. As you can see, the 10 GB were processed in roughly a minute and a half on my machine.
End of explanation
#!head -1463018 hw8_data/stackoverflow.vw > hw8_data/stackoverflow_train.vw
#!tail -1463018 hw8_data/stackoverflow.vw > hw8_data/stackoverflow_test.vw
#!tail -n+1463018 hw8_data/stackoverflow.vw | head -n+1463018 > hw8_data/stackoverflow_valid.vw
#!split -l 1463018 hw8_data/stackoverflow.vw hw8_data/stack
!mv hw8_data/stackaa hw8_data/stack_train.vw
!mv hw8_data/stackab hw8_data/stack_valid.vw
!mv hw8_data/stackac hw8_data/stack_test.vw
!cut -d '|' -f 1 hw8_data/stack_valid.vw > hw8_data/stack_valid_labels.txt
!cut -d '|' -f 1 hw8_data/stack_test.vw > hw8_data/stack_test_labels.txt
Explanation: Split the sample into training, validation and test parts of equal size – 1463018 lines in each file. Do not shuffle: the first 1463018 lines go to the training part stackoverflow_train.vw, the last 1463018 to the test part stackoverflow_test.vw, and the remaining ones to the validation part stackoverflow_valid.vw.
Also save the label vectors for the validation and test sets into the separate files stackoverflow_valid_labels.txt and stackoverflow_test_labels.txt.
The head, tail, split, cat and cut utilities will help here.
End of explanation
%%time
for p in [1,3,5]:
for n in [1,2,3]:
!vw --oaa 10 \
-d hw8_data/stack_train.vw \
--loss_function squared \
--passes {p} \
--ngram {n} \
-f hw8_data/stack_model_{p}_{n}.vw \
--bit_precision 28 \
--random_seed 17 \
--quiet \
-c
print ('stack_model_{}_{}.vw is ready'.format(p,n))
%%time
for p in [1,3,5]:
for n in [1,2,3]:
!vw -i hw8_data/stack_model_{p}_{n}.vw \
-t -d hw8_data/stack_valid.vw \
-p hw8_data/stack_valid_pred_{p}_{n}.txt \
--quiet
print ('stack_valid_pred_{}_{}.txt is ready'.format(p,n))
%%time
with open('hw8_data/stack_valid_labels.txt') as valid_labels_file :
valid_labels = [float(label) for label in valid_labels_file.readlines()]
scores=[]
best_valid_score=0
for p in [1,3,5]:
for n in [1,2,3]:
with open('hw8_data/stack_valid_pred_'+str(p)+'_'+str(n)+'.txt') as pred_file:
valid_pred = [float(label) for label in pred_file.readlines()]
#if (n,p) in [(2,3),(3,5),(2,1),(1,1)]:
acc_score=accuracy_score(valid_labels, valid_pred)
scores.append(((n,p),acc_score))
if acc_score>best_valid_score:
best_valid_score=acc_score
print(n,p,round(acc_score,4))
scores.sort(key=lambda tup: tup[1],reverse=True)
print(scores)
best_valid_score
Explanation: 4. Training and validating models
Train Vowpal Wabbit on the stackoverflow_train.vw sample 9 times, iterating over the parameters passes (1,3,5) and ngram (1,2,3).
Set the remaining parameters as follows: bit_precision=28 and seed=17. Also tell VW that this is a 10-class problem.
Measure accuracy on the stackoverflow_valid.vw sample. Pick the best model and check its quality on the stackoverflow_test.vw sample.
End of explanation
!vw -i hw8_data/stack_model_1_2.vw \
-t -d hw8_data/stack_test.vw \
-p hw8_data/stack_test_pred_1_2.txt \
--quiet
%%time
with open('hw8_data/stack_test_labels.txt') as test_labels_file :
test_labels = [float(label) for label in test_labels_file.readlines()]
with open('hw8_data/stack_test_pred_1_2.txt') as pred_file:
test_pred = [float(label) for label in pred_file.readlines()]
test_acc_score=accuracy_score(test_labels, test_pred)
print(round(test_acc_score,4))
100*round(test_acc_score,4)-100*round(best_valid_score,4)
Explanation: <font color='red'> Question 1.</font> Which combination of parameters gives the highest accuracy on the validation set stackoverflow_valid.vw?
- Bigrams and 3 passes over the data
- Trigrams and 5 passes over the data
- Bigrams and 1 pass over the data <--
- Unigrams and 1 pass over the data
Check the best model (by validation accuracy) on the test set.
End of explanation
!cat hw8_data/stack_train.vw hw8_data/stack_valid.vw > hw8_data/stack_merged.vw
%%time
!vw --oaa 10 \
-d hw8_data/stack_merged.vw \
--loss_function squared \
--passes 1 \
--ngram 2 \
-f hw8_data/stack_model_merged.vw \
--bit_precision 28 \
--random_seed 17 \
--quiet \
-c
%%time
!vw -i hw8_data/stack_model_merged.vw \
-t -d hw8_data/stack_test.vw \
-p hw8_data/stack_test_pred_merged.txt \
--quiet
%%time
with open('hw8_data/stack_test_labels.txt') as test_labels_file :
test_labels = [float(label) for label in test_labels_file.readlines()]
with open('hw8_data/stack_test_pred_merged.txt') as pred_file:
test_pred = [float(label) for label in pred_file.readlines()]
merged_acc_score=accuracy_score(test_labels, test_pred)
print(round(merged_acc_score,4))
100*round(merged_acc_score,4)-100*round(test_acc_score,4)
Explanation: <font color='red'> Question 2.</font> How do the accuracies of the best model (by validation accuracy) compare on the validation and test sets? (Here % means a percentage point, i.e. a drop from 50% to 40% is 10%, not 20%.)
- About 2% lower on the test set
- About 3% lower on the test set
- The results are almost identical – they differ by less than 0.5% <--
Train VW with the parameters selected on the validation set, now on the union of the training and validation sets. Compute the accuracy on the test set.
End of explanation
<END_TASK> |
158 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 3)
Determining Important Nodes
There are a number of ways to measure the importance of nodes in a network. Possibly the easiest is the degree, i.e. the number of neighbors. In a social network, for example, a person who knows many others could be an important person. However, is this notion really meaningful? Probably not, since it does not consider the importance of the neighbors. Also, there is an interesting effect in social networks with respect to neighborhood sizes. Let us investigate this effect a little bit
Step1: Thus, ... % of the users in this network have fewer friends than their friends have on average. While this result cannot be generalized exactly like this to other networks, the qualitative effect is often seen in social (and other) networks. Thus, let us now consider measures that broaden the rather narrow scope of the degree.
$k$-core Decomposition
Thus, the next concept we consider is the $k$-core decomposition. To answer the following Q&A session, go back to the lecture slides.
Q&A Session #5
What is the definition of an $i$-core? (Note that $k$ is often used for the largest core only!)
Answer
Step2: Centrality Measures
The $k$-core decomposition is rather, as the name suggests, a decomposition of the vertices into discrete subsets. Nodes with the same coreness (i.e. in the same shell) have equal importance. Rankings where many vertices are equally important are often not very meaningful. That is why the $k$-core decomposition should not be interpreted as a fine-grained ranking mechanism.
Q&A Session #6
Take the Facebook graph MIT8 and find the most central nodes. Take the relevance of their neighbors into account. Consider that MIT8 models a social network, not a web graph. Which algorithm would you choose? (Hint | <ASSISTANT_TASK:>
Python Code:
from networkit import *
%matplotlib inline
cd ~/workspace/NetworKit
G = readGraph("input/MIT8.edgelist", Format.EdgeListTabZero)
# Code for 3-3) and 3-4)
%matplotlib inline
import matplotlib.pyplot as plt
# def avgFriendDegree(v):
count = 0 # count the number of persons whose friends have on average more friends
Explanation: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 3)
Determining Important Nodes
There are a number of ways to measure the importance of nodes in a network. Possibly the easiest is the degree, i.e. the number of neighbors. In a social network, for example, a person who knows many others could be an important person. However, is this notion really meaningful? Probably not, since it does not consider the importance of the neighbors. Also, there is an interesting effect in social networks with respect to neighborhood sizes. Let us investigate this effect a little bit:
Q&A Session #4
Do you think your number of online friends is above/below/on average? (You do not have to answer this question openly.)
Answer (may be secret):
What do you expect: How many people (in percent) in a social network have fewer friends than their friends on average?
Answer (choose one):
a) 0 - 25%
b) 26 - 50%
c) 51 - 75%
d) 76 - 100%
Use the Facebook graph. Compute for each vertex the average degree of its neighbors.
Answer:
Count the number of persons whose friends have on average more friends. What is their percentage in this network?
Answer:
End of explanation
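A possible sketch for the last two questions (assuming the NetworKit graph API of this tutorial's era, with G.nodes(), G.neighbors() and G.degree(); newer releases use iterNodes()/iterNeighbors() instead):
def avg_neighbor_degree(g, u):
    neighbors = g.neighbors(u)
    return sum(g.degree(v) for v in neighbors) / float(len(neighbors)) if neighbors else 0.0

count = 0
for u in G.nodes():
    if G.degree(u) > 0 and G.degree(u) < avg_neighbor_degree(G, u):
        count += 1
print(100.0 * count / G.numberOfNodes())   # percentage of users with fewer friends than their friends' average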
# Code for 5-3)
mit8 = readGraph("input/MIT8.edgelist", Format.EdgeListTabZero)
airf1 = readGraph("input/airfoil1.graph", Format.METIS)
gen = generators.ErdosRenyiGenerator(1000, 0.01)
er1000 = gen.generate()
# for g in {mit8, airf1, er1000}:
Explanation: Thus, ... % of the users in this network have fewer friends than their friends have on average. While this result cannot be generalized exactly like this to other networks, the qualitative effect is often seen in social (and other) networks. Thus, let us now consider measures that broaden the rather narrow scope of the degree.
$k$-core Decomposition
Thus, the next concept we consider is the $k$-core decomposition. To answer the following Q&A session, go back to the lecture slides.
Q&A Session #5
What is the definition of an $i$-core? (Note that $k$ is often used for the largest core only!)
Answer:
Why do you think it can be considered a more robust measure for importance compared to the degree?
Answer:
Compute the $k$-core decomposition of the three networks used before. Then print the non-empty $i$-shells by using the method coreNumbers(). What results (similarities/differences) do you expect? Are these expectations met by the results?
Answer:
What disadvantage do you see when using $k$-core decomposition to rate nodes according to their importance?
Answer:
End of explanation
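One possible sketch for question 3 (the module and class name are assumptions — recent NetworKit exposes CoreDecomposition under centrality, older releases under properties — while coreNumbers() is the method named above):
for g in (mit8, airf1, er1000):
    decomp = centrality.CoreDecomposition(g)
    decomp.run()
    cores = decomp.coreNumbers()            # core number of every vertex
    shells = {}
    for u, c in enumerate(cores):
        shells.setdefault(c, []).append(u)
    for c in sorted(shells):
        print(c, len(shells[c]))            # non-empty i-shells and their sizes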
# Code for 6-1) and 6-2)
Explanation: Centrality Measures
The $k$-core decomposition is rather, as the name suggests, a decomposition of the vertices into discrete subsets. Nodes with the same coreness (i.e. in the same shell) have equal importance. Rankings where many vertices are equally important are often not very meaningful. That is why the $k$-core decomposition should not be interpreted as a fine-grained ranking mechanism.
Q&A Session #6
Take the Facebook graph MIT8 and find the most central nodes. Take the relevance of their neighbors into account. Consider that MIT8 models a social network, not a web graph. Which algorithm would you choose? (Hint: Look at the lecture slides!)
Answer:
What are the 15 most important nodes according to the method in 1)?
Answer:
What other centrality measures do you recall?
Answer:
After you answered the questions, proceed with Tutorial #4.
End of explanation
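A sketch of one reasonable answer (assuming networkit.centrality.EigenvectorCentrality with the usual run()/ranking() interface; PageRank would be the analogous choice for a web graph):
ec = centrality.EigenvectorCentrality(mit8)
ec.run()
print(ec.ranking()[:15])   # the 15 most central nodes as (node, score) pairs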
<END_TASK> |
159 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Experiment
Step1: Load and check data
Step2: ## Analysis
Experiment Details
Step3: Plot accuracy over epochs | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
import matplotlib.pyplot as plt
from matplotlib import rcParams
from scipy.ndimage.filters import gaussian_filter1d
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(style="whitegrid")
sns.set_palette("colorblind")
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET), in GSC. Applied only to linear layers
Motivation.
Check if results are consistently above baseline.
Conclusion
End of explanation
exps = ['comparison_pruning_2' , 'comparison_iterative_pruning_2', 'comparison_set_2']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
df.shape
df.columns
df['model'].unique()
# calculate density for each model
df.loc[df['model'] == 'PruningModel', 'density'] = df.loc[df['model'] == 'PruningModel', 'target_final_density']
df.loc[df['model'] == 'IterativePruningModel', 'density'] = df.loc[df['model'] == 'IterativePruningModel', 'target_final_density']
df.loc[df['model'] == 'SET', 'density'] = df.loc[df['model'] == 'SET', 'on_perc']
Explanation: Load and check data
End of explanation
# Did any trials fail?
num_epochs = 200
df[df["epochs"]<num_epochs]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
agg(['density', 'model'])
Explanation: ## Analysis
Experiment Details
End of explanation
# translate model names
rcParams['figure.figsize'] = 16, 8
sns.scatterplot(data=df, x='density', y='val_acc_max', hue='model')
sns.lineplot(data=df, x='density', y='val_acc_max', hue='model', legend=False);
Explanation: Plot accuracy over epochs
End of explanation
<END_TASK> |
160 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2018 Google LLC.
Step1: Cat vs. Dog Image Classification
Exercise 2
Step2: These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote
Step3: Next, let's apply the datagen transformations to a cat image from the training set to produce five random variants. Rerun the cell a few times to see fresh batches of random variants.
Step4: Add Data Augmentation to the Preprocessing Step
Now let's add our data-augmentation transformations from Exploring Data Augmentation to our data preprocessing configuration
Step5: If we train a new network using this data augmentation configuration, our network will never see the same input twice. However the inputs that it sees are still heavily intercorrelated, so this might not be quite enough to completely get rid of overfitting.
Adding Dropout
Another popular strategy for fighting overfitting is to use dropout.
TIP
Step6: Retrain the Model
With data augmentation and dropout in place, let's retrain our convnet model. This time, let's train on all 2,000 images available, for 30 epochs, and validate on all 1,000 validation images. (This may take a few minutes to run.) See if you can write the code yourself
Step7: Solution
Click below for the solution.
Step8: Note that with data augmentation in place, the 2,000 training images are randomly transformed each time a new training epoch runs, which means that the model will never see the same image twice during training.
Evaluate the Results
Let's evaluate the results of model training with data augmentation and dropout
Step9: Much better! We are no longer overfitting, and we have gained ~3 validation accuracy percentage points (see the green line in the top chart). In fact, judging by our training profile, we could keep fitting our model for 30+ more epochs and we could probably get to ~80%!
Clean Up
Before running the next exercise, run the following cell to terminate the kernel and free memory resources | <ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
End of explanation
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
Explanation: Cat vs. Dog Image Classification
Exercise 2: Reducing Overfitting
Estimated completion time: 30 minutes
In this notebook we will build on the model we created in Exercise 1 to classify cats vs. dogs, and improve accuracy by employing a couple strategies to reduce overfitting: data augmentation and dropout.
We will follow these steps:
Explore how data augmentation works by making random transformations to training images.
Add data augmentation to our data preprocessing.
Add dropout to the convnet.
Retrain the model and evaluate loss and accuracy.
Let's get started!
Exploring Data Augmentation
Let's get familiar with the concept of data augmentation, an essential way to fight overfitting for computer vision models.
In order to make the most of our few training examples, we will "augment" them via a number of random transformations, so that at training time, our model will never see the exact same picture twice. This helps prevent overfitting and helps the model generalize better.
This can be done by configuring a number of random transformations to be performed on the images read by our ImageDataGenerator instance. Let's get started with an example:
End of explanation
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip -O \
/tmp/cats_and_dogs_filtered.zip
import os
import zipfile
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
train_cat_fnames = os.listdir(train_cats_dir)
train_dog_fnames = os.listdir(train_dogs_dir)
Explanation: These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:
rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures.
width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
shear_range is for randomly applying shearing transformations.
zoom_range is for randomly zooming inside pictures.
horizontal_flip is for randomly flipping half of the images horizontally. This is relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures).
fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let's take a look at our augmented images. First let's set up our example files, as in Exercise 1.
NOTE: The 2,000 images used in this exercise are excerpted from the "Dogs vs. Cats" dataset available on Kaggle, which contains 25,000 images. Here, we use a subset of the full dataset to decrease training time for educational purposes.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tensorflow.keras.preprocessing.image import array_to_img, img_to_array, load_img
img_path = os.path.join(train_cats_dir, train_cat_fnames[2])
img = load_img(img_path, target_size=(150, 150)) # this is a PIL image
x = img_to_array(img) # Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3)
# The .flow() command below generates batches of randomly transformed images
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(array_to_img(batch[0]))
i += 1
if i % 5 == 0:
break
Explanation: Next, let's apply the datagen transformations to a cat image from the training set to produce five random variants. Rerun the cell a few times to see fresh batches of random variants.
End of explanation
# Adding rescale, rotation_range, width_shift_range, height_shift_range,
# shear_range, zoom_range, and horizontal flip to our ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
val_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using val_datagen generator
validation_generator = val_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
Explanation: Add Data Augmentation to the Preprocessing Step
Now let's add our data-augmentation transformations from Exploring Data Augmentation to our data preprocessing configuration:
End of explanation
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.optimizers import RMSprop
# Our input feature map is 150x150x3: 150x150 for the image pixels, and 3 for
# the three color channels: R, G, and B
img_input = layers.Input(shape=(150, 150, 3))
# First convolution extracts 16 filters that are 3x3
# Convolution is followed by max-pooling layer with a 2x2 window
x = layers.Conv2D(16, 3, activation='relu')(img_input)
x = layers.MaxPooling2D(2)(x)
# Second convolution extracts 32 filters that are 3x3
# Convolution is followed by max-pooling layer with a 2x2 window
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
# Third convolution extracts 64 filters that are 3x3
# Convolution is followed by max-pooling layer with a 2x2 window
x = layers.Convolution2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
# Flatten feature map to a 1-dim tensor
x = layers.Flatten()(x)
# Create a fully connected layer with ReLU activation and 512 hidden units
x = layers.Dense(512, activation='relu')(x)
# Add a dropout rate of 0.5
x = layers.Dropout(0.5)(x)
# Create output layer with a single node and sigmoid activation
output = layers.Dense(1, activation='sigmoid')(x)
# Configure and compile the model
model = Model(img_input, output)
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
Explanation: If we train a new network using this data augmentation configuration, our network will never see the same input twice. However the inputs that it sees are still heavily intercorrelated, so this might not be quite enough to completely get rid of overfitting.
Adding Dropout
Another popular strategy for fighting overfitting is to use dropout.
TIP: To learn more about dropout, see Training Neural Networks in Machine Learning Crash Course.
Let's reconfigure our convnet architecture from Exercise 1 to add some dropout, right before the final classification layer:
End of explanation
# WRITE CODE TO TRAIN THE MODEL ON ALL 2000 IMAGES FOR 30 EPOCHS, AND VALIDATE
# ON ALL 1,000 VALIDATION IMAGES
Explanation: Retrain the Model
With data augmentation and dropout in place, let's retrain our convnet model. This time, let's train on all 2,000 images available, for 30 epochs, and validate on all 1,000 validation images. (This may take a few minutes to run.) See if you can write the code yourself:
End of explanation
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
Explanation: Solution
Click below for the solution.
End of explanation
# Retrieve a list of accuracy results on training and validation data
# sets for each training epoch
acc = history.history['acc']
val_acc = history.history['val_acc']
# Retrieve a list of list results on training and validation data
# sets for each training epoch
loss = history.history['loss']
val_loss = history.history['val_loss']
# Get number of epochs
epochs = range(len(acc))
# Plot training and validation accuracy per epoch
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.figure()
# Plot training and validation loss per epoch
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
Explanation: Note that with data augmentation in place, the 2,000 training images are randomly transformed each time a new training epoch runs, which means that the model will never see the same image twice during training.
Evaluate the Results
Let's evaluate the results of model training with data augmentation and dropout:
End of explanation
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
Explanation: Much better! We are no longer overfitting, and we have gained ~3 validation accuracy percentage points (see the green line in the top chart). In fact, judging by our training profile, we could keep fitting our model for 30+ more epochs and we could probably get to ~80%!
Clean Up
Before running the next exercise, run the following cell to terminate the kernel and free memory resources:
End of explanation
<END_TASK> |
161 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example 1
Step1: Load time series data
Step2: There are a few supported file formats. AT2 files can be loaded as follows
Step3: Create site profile
This is about the simplest profile that we can create. Linear-elastic soil and rock.
Step4: Create the site response calculator
Step5: Specify the output
Step6: Perform the calculation
Compute the response of the site, and store the state within the calculation object. Nothing is provided.
Step7: Calculate all of the outputs from the calculation object.
Step8: Plot the outputs
Create a few plots of the output. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pysra
%matplotlib inline
# Increased figure sizes
plt.rcParams["figure.dpi"] = 120
Explanation: Example 1 : Time series SRA
Time series analysis to compute surface response spectrum and site
amplification functions.
End of explanation
fname = "data/NIS090.AT2"
with open(fname) as fp:
next(fp)
description = next(fp).strip()
next(fp)
parts = next(fp).split()
time_step = float(parts[1])
accels = [float(p) for l in fp for p in l.split()]
ts = pysra.motion.TimeSeriesMotion(fname, description, time_step, accels)
ts.accels
Explanation: Load time series data
End of explanation
ts = pysra.motion.TimeSeriesMotion.load_at2_file(fname)
ts.accels
fig, ax = plt.subplots()
ax.plot(ts.times, ts.accels)
ax.set(xlabel="Time (sec)", ylabel="Accel (g)")
fig.tight_layout();
Explanation: There are a few supported file formats. AT2 files can be loaded as follows:
End of explanation
profile = pysra.site.Profile(
[
pysra.site.Layer(pysra.site.SoilType("Soil", 18.0, None, 0.05), 30, 400),
pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200),
]
)
profile.plot('initial_shear_vel')
Explanation: Create site profile
This is about the simplest profile that we can create. Linear-elastic soil and rock.
End of explanation
calc = pysra.propagation.LinearElasticCalculator()
Explanation: Create the site response calculator
End of explanation
freqs = np.logspace(-1, 2, num=500)
outputs = pysra.output.OutputCollection(
[
pysra.output.ResponseSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.ResponseSpectrumRatioOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.FourierAmplitudeSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Bandwidth for Konno-Omachi smoothing window
ko_bandwidth=30,
)
]
)
Explanation: Specify the output
End of explanation
calc(ts, profile, profile.location("outcrop", index=-1))
Explanation: Perform the calculation
Compute the response of the site, and store the state within the calculation object. Nothing is provided.
End of explanation
outputs(calc)
Explanation: Calculate all of the outputs from the calculation object.
End of explanation
for o in outputs:
o.plot(style="indiv")
Explanation: Plot the outputs
Create a few plots of the output.
End of explanation
<END_TASK> |
162 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Function Approximation with a Multilayer Perceptron
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: Function definitions. Here we consider a hard-coded two-layer perceptron with one hidden layer, using the hyperbolic tangent as activation function, and a linear output layer. The output of the perceptron can hence be written as
\begin{equation}
\hat{f}(x,\boldsymbol{\theta}) = \sum_{i=1}^3v_i\sigma(w_ix+b_i)
\end{equation}
where $\sigma(x) = \tanh(x)$.
Instead of specifying all the parameters individually, we group them in a single vector $\boldsymbol{\theta}$ with
\begin{equation}
\boldsymbol{\theta}=\begin{pmatrix}
v_1 & w_1 & b_1 & v_2 & w_2 & b_2 & v_3 & w_3 & b_3\end{pmatrix}
\end{equation}
Step2: The cost function is the mean-squared error, i.e.,
\begin{equation}
J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)^2
\end{equation}
The gradient of the cost function can be computed by hand as
\begin{equation}
\nabla_{\boldsymbol{\theta}}J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)\begin{pmatrix}
\sigma(\theta_2x_i^{[\text{train}]}+\theta_3) \
\theta_1\sigma^\prime(\theta_2x_i^{[\text{train}]}+\theta_3)x_i^{[\text{train}]} \
\theta_1\sigma^\prime(\theta_2x_i^{[\text{train}]}+\theta_3) \
\sigma(\theta_5x_i^{[\text{train}]}+\theta_6) \
\theta_4\sigma^\prime(\theta_5x_i^{[\text{train}]}+\theta_6)x_i^{[\text{train}]} \
\theta_4\sigma^\prime(\theta_5x_i^{[\text{train}]}+\theta_6) \
\sigma(\theta_8x_i^{[\text{train}]}+\theta_9) \
\theta_7\sigma^\prime(\theta_8x_i^{[\text{train}]}+\theta_9)x_i^{[\text{train}]} \
\theta_7\sigma^\prime(\theta_8x_i^{[\text{train}]}+\theta_9) \end{pmatrix}
\end{equation}
where $\sigma^\prime(x) = \frac{\partial}{\partial x}\tanh(x) = 1 - \tanh^2(x)$
Step3: Here, we compare two different optimization functions. The first function is classical steepest gradient descent with a fixed step size. We evaluate the cost function in each step of the loop.
Step4: The second optimizer is the Adam algorithm [1]. The Adam algorithm uses adaptive estimates of lower-order moments to improve the convergence of gradient descent. Due to its good performance, the Adam algorithm is very widely used in machine learning and in particular deep learning nowadays.
The Adam algorithm uses the following hyperparameters
Step5: Carry out the optimization using 50000 iterations with gradient descent and Adam. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib  # needed below for matplotlib.checkdep_usetex
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
function_select = 5
def myfun(x):
functions = {
1: np.power(x,2), # quadratic function
2: np.sin(x), # sinus
3: np.sign(x), # signum
4: np.exp(x), # exponential function
5: np.abs(x)
}
return functions.get(function_select)
# Generate training data.
N = 32
x_train = np.linspace(-2, 2, num=N).reshape(-1,1)
# Generate the evaluation data.
# (can exceed the range of the training data to evaluate the prediction capabilities)
x_eval = np.linspace(-2, 2, num=N).reshape(-1,1)
Explanation: Function Approximation with a Multilayer Perceptron
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* training of neural networks by hand
* approximation of a function using a multilayer perceptron consisting of 3 hidden units with sigmoid activation function and a linear output unit
* stochastic gradient descent versus Adam
End of explanation
def sigma(x):
return np.tanh(x)
# First order derivative of sigma (here tanh)
def sigma_prime(x):
return 1-(np.tanh(x))**2
def MLP(x,theta):
# hard-coded MLP, all parameters are inside theta
y = theta[0]*sigma(theta[1]*x + theta[2]) + theta[3]*sigma(theta[4]*x+theta[5]) + theta[6]*sigma(theta[7]*x+theta[8])
return y
Explanation: Function definitions. Here we consider a hard-coded two-layer perceptron with one hidden layer, using the hyperbolic tangent as activation function, and a linear output layer. The output of the perceptron can hence be written as
\begin{equation}
\hat{f}(x,\boldsymbol{\theta}) = \sum_{i=1}^3v_i\sigma(w_ix+b_i)
\end{equation}
where $\sigma(x) = \tanh(x)$.
Instead of specifying all the parameters individually, we group them in a single vector $\boldsymbol{\theta}$ with
\begin{equation}
\boldsymbol{\theta}=\begin{pmatrix}
v_1 & w_1 & b_1 & v_2 & w_2 & b_2 & v_3 & w_3 & b_3\end{pmatrix}
\end{equation}
End of explanation
def cost_function(x, y, theta):
# cost function is the mean-squared error between the MLP predictions on x and the labels y
difference = np.array([MLP(e, theta) for e in x]) - y
return np.dot(difference.T, difference)/len(x)
# gradient of the cost function
def cost_function_gradient(x, y, theta):
gradient = np.zeros(9)
for k in range(len(x)):
ig = np.zeros(9)
ig[0] = sigma(theta[1]*x[k]+theta[2])
ig[1] = theta[0]*sigma_prime(theta[1]*x[k]+theta[2])*x[k]
ig[2] = theta[0]*sigma_prime(theta[1]*x[k]+theta[2])
ig[3] = sigma(theta[4]*x[k]+theta[5])
ig[4] = theta[3]*sigma_prime(theta[4]*x[k]+theta[5])*x[k]
ig[5] = theta[3]*sigma_prime(theta[4]*x[k]+theta[5])
ig[6] = sigma(theta[7]*x[k]+theta[8])
ig[7] = theta[6]*sigma_prime(theta[7]*x[k]+theta[8])*x[k]
ig[8] = theta[6]*sigma_prime(theta[7]*x[k]+theta[8])
gradient += 2*(MLP(x[k],theta) - y[k])*ig
return gradient / len(x)
Explanation: The cost function is the mean-squared error, i.e.,
\begin{equation}
J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)^2
\end{equation}
The gradient of the cost function can be computed by hand as
\begin{equation}
\nabla_{\boldsymbol{\theta}}J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)\begin{pmatrix}
\sigma(\theta_2x_i^{[\text{train}]}+\theta_3) \
\theta_1\sigma^\prime(\theta_2x_i^{[\text{train}]}+\theta_3)x_i^{[\text{train}]} \
\theta_1\sigma^\prime(\theta_2x_i^{[\text{train}]}+\theta_3) \
\sigma(\theta_5x_i^{[\text{train}]}+\theta_6) \
\theta_4\sigma^\prime(\theta_5x_i^{[\text{train}]}+\theta_6)x_i^{[\text{train}]} \
\theta_4\sigma^\prime(\theta_5x_i^{[\text{train}]}+\theta_6) \
\sigma(\theta_8x_i^{[\text{train}]}+\theta_9) \
\theta_7\sigma^\prime(\theta_8x_i^{[\text{train}]}+\theta_9)x_i^{[\text{train}]} \
\theta_7\sigma^\prime(\theta_8x_i^{[\text{train}]}+\theta_9) \end{pmatrix}
\end{equation}
where $\sigma^\prime(x) = \frac{\partial}{\partial x}\tanh(x) = 1 - \tanh^2(x)$
End of explanation
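A quick finite-difference check of the hand-derived gradient (a small illustrative sketch, not part of the original notebook):
theta_test = np.random.randn(9)
y_check = myfun(x_train)
h = 1e-6
num_grad = np.zeros(9)
for i in range(9):
    theta_p, theta_m = theta_test.copy(), theta_test.copy()
    theta_p[i] += h
    theta_m[i] -= h
    num_grad[i] = (cost_function(x_train, y_check, theta_p) - cost_function(x_train, y_check, theta_m)) / (2*h)
# the maximum deviation should be on the order of the finite-difference error
print(np.max(np.abs(num_grad - cost_function_gradient(x_train, y_check, theta_test))))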
def approx_1d_function_gd(x_train, theta_initial, epochs):
# Generate labels for training data (here: the image of the selected function applied to X_train)
y_train = myfun(x_train)
# initialize weights to random data
theta = theta_initial
epsilon = 0.001
cost_evolution = np.zeros(epochs)
# carry out gradient descent
for k in range(epochs):
gradient = cost_function_gradient(x_train, y_train, theta)
theta = theta - epsilon * gradient
cost_evolution[k] = cost_function(x_train, y_train, theta)
return theta, cost_evolution
Explanation: Here, we compare two different optimization functions. The first function is classical steepest gradient descent with a fixed step size. We evaluate the cost function in each step of the loop.
End of explanation
def approx_1d_function_adam(x_train, theta_initial, epochs):
y_train = myfun(x_train)
theta = theta_initial
beta1 = 0.9
beta2 = 0.999
alpha = 0.001
epsilon = 1e-8
cost_evolution = np.zeros(epochs)
m = np.zeros(theta.shape)
t = 0
v = np.zeros(theta.shape)
for k in range(epochs):
t += 1
g = cost_function_gradient(x_train, y_train, theta)
m = beta1*m + (1-beta1)*g
v = beta2*v + (1-beta2)*(g**2)
mhat = m/(1-beta1**t)
vhat = v/(1-beta2**t)
theta = theta - alpha*mhat/(np.sqrt(vhat)+epsilon)
cost_evolution[k] = cost_function(x_train, y_train, theta)
return theta, cost_evolution
Explanation: The second optimizer is the Adam algorithm [1]. The Adam algorithm uses adaptive estimates of lower-order moments to improve the convergence of gradient descent. Due to its good performance, the Adam algorithm is very widely used in machine learning and in particular deep learning nowadays.
The Adam algorithm uses the following hyperparameters:
* The learning rate $\alpha$, which we set to $\alpha = 0.001$
* The decay rates $\beta_1$ and $\beta_2$, which we set to $\beta_1 = 0.9$ and $\beta_2 = 0.999$
* A small constant $\epsilon = 10^{-8}$
Starting with an initial random $\boldsymbol{\theta}_0$ and estimates of the first and second moment $\boldsymbol{m}_0=\boldsymbol{0}$ and $\boldsymbol{v}_0=\boldsymbol{0}$, the algorithm works as follows:
Set $t=0$.<br>
while not converged, do<br>
$t\leftarrow t+1$<br>
$\boldsymbol{g}_t \leftarrow \nabla_{\boldsymbol{\theta}}J(\boldsymbol{\theta},\mathbb{X}^{[\textrm{train}]},\mathbb{Y}^{[\textrm{train}]})$<br>
$\boldsymbol{m}_t \leftarrow \beta_1\cdot\boldsymbol{m}_{t-1}+(1-\beta_1)\cdot \boldsymbol{g}_{t}$<br>
$\boldsymbol{v}_t \leftarrow \beta_2\cdot \boldsymbol{v}_{t-1}+(1-\beta_2)\cdot(\boldsymbol{g}_t\odot\boldsymbol{g}_t)$<br>
$\boldsymbol{p}_t\leftarrow \boldsymbol{m}_t/(1-\beta_1^t)$<br>
$\boldsymbol{q}_t \leftarrow \boldsymbol{v}_t/(1-\beta_2^t)$<br>
$\boldsymbol{\theta}_t \leftarrow \boldsymbol{\theta}_{t-1} - \alpha\cdot \boldsymbol{p}\oslash(\sqrt{\boldsymbol{q}}+\epsilon)$<br>
where $\oslash$ denotes element-wise division of two vectors. Furthermore, $\sqrt{\boldsymbol{q}}$ means element-wise application of the square-root to the elements of the vector $\boldsymbol{q}$.
Here, we implement and run the Adam algorithm.
[1] D. P. Kingma and J. L. Ba, "Adam: A Method for Stochastic Optimization," published at ICLR 2015, available at https://arxiv.org/pdf/1412.6980.pdf
End of explanation
epochs = 50000
np.random.seed(1)
theta_initial = np.random.randn(9)
theta_adam, cost_evolution_adam = approx_1d_function_adam(x_train, theta_initial, epochs)
theta_gd, cost_evolution_gd = approx_1d_function_gd(x_train, theta_initial, epochs)
# compute evaluation
predictions = MLP(x_eval, theta_adam)
fig = plt.figure(1, figsize=(18,6))
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.rc('text.latex', preamble=r'\usepackage{amsmath}\usepackage{amssymb}\usepackage{bm}')
ax = fig.add_subplot(1, 2, 1)
plt.plot(x_eval, myfun(x_eval), '-', color='royalblue', linewidth=1.0)
plt.plot(x_eval, predictions, '-', label='output', color='darkorange', linewidth=2.0)
plt.plot(x_train, myfun(x_train), '.', color='royalblue',markersize=14)
plt.xlim((min(x_train),max(x_train)))
plt.grid(which='both');
plt.rcParams.update({'font.size': 14})
plt.xlabel('$x$');
plt.ylabel('$y$')
plt.title('3 neurons in hidden layer with %d iterations of gradient descent' % epochs)
plt.legend(['Function $f(x)$', r'MLP output $\hat{f}(x,\bm{\theta})$', 'Training set'])
ax = fig.add_subplot(1, 2, 2)
plt.plot(x_eval, [theta_adam[0]*sigma(theta_adam[1]*x + theta_adam[2]) for x in x_eval], '--', label='output', color='xkcd:azure', linewidth=2.0)
plt.plot(x_eval, [theta_adam[3]*sigma(theta_adam[4]*x + theta_adam[5]) for x in x_eval], '--', label='output', color='xkcd:coral', linewidth=2.0)
plt.plot(x_eval, [theta_adam[6]*sigma(theta_adam[7]*x + theta_adam[8]) for x in x_eval], '--', label='output', color='xkcd:green', linewidth=2.0)
plt.grid(which='both');
plt.xlim((min(x_train),max(x_train)))
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Weighted output of the 3 neurons')
plt.legend(['$v_1\sigma(w_1x+b_1)$', '$v_2\sigma(w_2x+b_2)$', '$v_3\sigma(w_3x+b_3)$'])
#plt.savefig('3neuron_MLP_abs.pdf',bbox_inches='tight')
plt.show()
fig = plt.figure(2, figsize=(8,6))
plt.plot(range(epochs), cost_evolution_gd)
plt.plot(range(epochs), cost_evolution_adam)
plt.xlim((0,20000))
plt.ylim((0,1))
plt.grid(which='both')
plt.legend(['Steepest Gradient Descent', 'Adam'])
plt.xlabel('Iterations')
plt.ylabel(r'Cost function $J(\bm{\theta},\mathbb{X}^{[\textrm{train}]},\mathbb{Y}^{[\textrm{train}]})$')
#plt.savefig('3neuron_MLP_abs_cost.pdf',bbox_inches='tight')
Explanation: Carry out the optimization using 50000 iterations with gradient descent and Adam.
End of explanation
<END_TASK> |
163 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | <ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
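As a quick check of those shapes (assuming the data loaded as above), you can print them directly:
print(trainX.shape, trainY.shape)  # expected: (55000, 784) (55000, 10)
print(testX.shape, testY.shape)    # expected: (10000, 784) (10000, 10)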
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
with tf.device("/gpu:0"):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# This model assumes that your network is named "net"
# Create input layer, sized to the shape of the 28x28 image
net = tflearn.input_data([None, trainX.shape[1]])
# Create intermediate layers
# First layer of 150 seems to make sense in context of 784 pixels per image ~1:5 ratio
net = tflearn.fully_connected(net, 150, activation='ReLU')
# Second hidden layer of 150 is again an approximate ~1:5 ratio
net = tflearn.fully_connected(net, 30, activation='ReLU')
# Create output layer
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
<END_TASK> |
164 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data BootCamp Project
Step1: Who has Grit?
Hockey has always been a eloquent yet brutal sport, in this analysis I'm finding the player that embodies the brutality.
Step2: AHHH
The graph above is the simplest display of my data. Out of roughly 900 NHL players, only a few are recognizeable from the mass.
Step3: This is the way my quantitative data looks. Most of the column headers are self explanatory, but i'll go into further detail later.
Step4: Above is the quantitative means of the data i've acquired. Grit is a weighted compilation of penalty minutes, hits, blocked shots, and fights (making it somewhat subjective).
Step5: Since i'm primarily interested in players with the most Grit, i'm going to limit my searches to a higher percentile.
Step6: Of the original 900, these are the 10 players with the most Grit.
Step7: Above is how my qualitative data is structured. I've seperated the datasets for ease of manipulation.
Step8: Above are the hardiest players in the NHL, but how do they perform?
Step9: The two graphs above represent hits given and hits received respectively.
Step10: The above graph reflects the amount of games played during the season. The x-axis is simply the index value attributed to the player. | <ASSISTANT_TASK:>
Python Code:
import pandas as pd #PandasPandas
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
print('PandaPandaPanda ', pd.__version__)
df=pd.read_csv('NHLQUANT.csv')
Explanation: Data BootCamp Project
End of explanation
plt.plot(df.index,df['Grit'])
Explanation: Who has Grit?
Hockey has always been an eloquent yet brutal sport; in this analysis I'm finding the player that embodies the brutality.
End of explanation
df.head(10)
Explanation: AHHH
The graph above is the simplest display of my data. Out of roughly 900 NHL players, only a few are recognizable from the mass.
End of explanation
df.mean()
Explanation: This is the way my quantitative data looks. Most of the column headers are self-explanatory, but I'll go into further detail later.
End of explanation
# Coerce the columns to numeric types where possible (pd.to_numeric works on a Series, so apply it column by column)
df = df.apply(pd.to_numeric, errors='ignore')
y = df["Age"]
z = df["Grit"]
plt.plot(y, z)
df['Grit'] > 130
df.loc[df['Grit'] > 130]
Explanation: Above are the quantitative means of the data I've acquired. Grit is a weighted compilation of penalty minutes, hits, blocked shots, and fights (making it somewhat subjective).
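The exact weights behind Grit aren't given here, but as a rough sketch of how such a weighted score could be built in pandas (the column names and weights below are illustrative guesses, not the actual formula):
weights = {'PIM': 1.0, 'Hits': 0.5, 'Blocks': 0.5, 'Fights': 2.0}
grit_example = sum(df[col] * w for col, w in weights.items() if col in df.columns)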
End of explanation
df.loc[df['Grit'] > 300]
df.loc[df['Grit'] > 400]
Best = df.loc[df['Grit'] > 400]
Best.sort_values("Age").plot.barh('Age', "Grit")
Explanation: Since I'm primarily interested in players with the most Grit, I'm going to limit my searches to a higher percentile.
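One way to pick that cutoff without hard-coding a number (a small sketch) is to take a quantile of the Grit column:
grit_cutoff = df['Grit'].quantile(0.99)        # top 1% of players by Grit
top_grit = df.loc[df['Grit'] > grit_cutoff]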
End of explanation
QL=pd.read_csv("NHLQUAL.csv")
QL.head(5)
Explanation: Of the original 900, these are the 10 players with the most Grit.
End of explanation
print(QL.at[61, "First Name"]+QL.at[61, 'Last Name'],QL.at[94, 'First Name']+QL.at[94, 'Last Name'],
QL.at[712, "First Name"]+QL.at[712, "Last Name"],QL.at[209, 'First Name']+QL.at[209, 'Last Name'],QL.at[306, "First Name"]+QL.at[306, 'Last Name'],QL.at[497, 'First Name']+QL.at[497, 'Last Name'],QL.at[524, 'First Name']+QL.at[524, 'Last Name'],QL.at[565, 'First Name']+QL.at[565, 'Last Name'],QL.at[641, 'First Name']+QL.at[641, 'Last Name'],QL.at[877, 'First Name']+QL.at[877, 'Last Name'])
Explanation: Above is how my qualitative data is structured. I've separated the datasets for ease of manipulation.
End of explanation
Best.sort("Age").plot.barh('Age',"HitF")
Best.sort("Age").plot.barh('Age',"HitA")
Explanation: Above are the hardiest players in the NHL, but how do they perform?
End of explanation
Best['GP'].plot()  # games played, plotted against each player's index value
plt.ylim([60,85])
Explanation: The two graphs above represent hits given and hits received respectively.
End of explanation
fig, ax=plt.subplots(nrows=2, ncols=1, sharex=True, sharey=True)
Best['G'].plot(ax=ax[0],color='green')
Best['A'].plot(ax=ax[1],color='red')
Explanation: The above graph reflects the amount of games played during the season. The x-axis is simply the index value attributed to the player.
End of explanation
<END_TASK> |
165 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features
Step1: First we need to define materials that will be used in the problem
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: We now must create a geometry that is assigned a root universe and export it to XML.
Step10: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step11: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
Step12: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 20-energy-group and 1-energy-group.
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
Step14: Now, we can run OpenMC to generate the cross sections.
Step15: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
Step16: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step17: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally artithmetic to compute the delayed neutron precursor concentrations using the Beta and DelayedNuFissionXS objects. The delayed neutron precursor concentrations are modeled using the following equations
Step18: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
Step19: Cross Section Visualizations
In addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. Below is the delayed neutron fraction tallied in each mesh cell for each delayed group. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features:
Calculation of multi-energy-group and multi-delayed-group cross sections for a fuel assembly
Automated creation, manipulation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin delayed neutron fractions (beta) for each delayed group.
Generation of surface currents on the interfaces and surfaces of a Mesh.
Generate Input Files
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: First we need to define materials that will be used in the problem: fuel, water, and cladding.
End of explanation
# Create a materials collection and export to XML
materials = openmc.Materials((fuel, water, zircaloy))
materials.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Create universes array with the fuel pin and guide tube universes
universes = np.tile(fuel_pin_universe, (17,17))
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell', fill=assembly)
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and export to XML
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings = openmc.Settings()
settings.batches = batches
settings.inactive = inactive
settings.particles = particles
settings.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
End of explanation
# Plot our geometry
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.color_by = 'material'
openmc.plot_inline(plot)
Explanation: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
End of explanation
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
Explanation: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 20-energy-group and 1-energy-group.
End of explanation
# Instantiate a tally mesh
mesh = openmc.RegularMesh(mesh_id=1)
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.num_delayed_groups = 6
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshSurfaceFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Now, we can run OpenMC to generate the cross sections.
End of explanation
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extract the current tally separately
current_tally = sp.get_tally(name='current tally')
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
# Set the time constants for the delayed precursors (in seconds^-1)
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = math.log(2.0) / precursor_halflife
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
Explanation: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally artithmetic to compute the delayed neutron precursor concentrations using the Beta and DelayedNuFissionXS objects. The delayed neutron precursor concentrations are modeled using the following equations:
$$\frac{\partial}{\partial t} C_{k,d} (t) = \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t) \nu_d \sigma_{f,x}(\mathbf{r},E',t)\Phi(\mathbf{r},E',t) - \lambda_{d} C_{k,d} (t) $$
$$C_{k,d} (t=0) = \frac{1}{\lambda_{d}} \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t=0) \nu_d \sigma_{f,x}(\mathbf{r},E',t=0)\Phi(\mathbf{r},E',t=0) $$
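As a rough scalar illustration of the steady-state relation above (all numbers are made up except the 55.6 s half-life reused later in this notebook):
import math
lam = math.log(2.0) / 55.6       # decay constant of the first precursor group [1/s]
beta_d = 2.5e-4                  # illustrative delayed fraction for this group
delayed_fission_rate = 1.0e12    # illustrative nu_d * Sigma_f * Phi integrated over the cell
print(beta_d * delayed_fission_rate / lam)   # ~2e10, the equilibrium precursor population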
End of explanation
current_tally.get_pandas_dataframe().head(10)
Explanation: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
End of explanation
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the betas
plt.figure(figsize=(18,9))
fig = plt.subplot(231)
plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 1')
fig = plt.subplot(232)
plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 2')
fig = plt.subplot(233)
plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 3')
fig = plt.subplot(234)
plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 4')
fig = plt.subplot(235)
plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 5')
fig = plt.subplot(236)
plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 6')
Explanation: Cross Section Visualizations
In addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. Below is the delayed neutron fraction tallied in each mesh cell for each delayed group.
End of explanation
<END_TASK> |
166 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: データ増強
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: データセットをダウンロードする
このチュートリアルでは、tf_flowers データセットを使用します。便宜上、TensorFlow Dataset を使用してデータセットをダウンロードします。他のデータインポート方法に関する詳細は、画像読み込みのチュートリアルをご覧ください。
Step3: 花のデータセットには 5 つのクラスがあります。
Step4: データセットから画像を取得し、それを使用してデータ増強を実演してみましょう。
Step5: Keras 前処理レイヤーを使用する
リサイズとリスケール
Keras 前処理レイヤーを使用して、画像を一定の形状にサイズ変更し(tf.keras.layers.Resizing)、ピクセル値を再スケールする(tf.keras.layers.Rescaling)ことができます。
Step6: 注意
Step7: ピクセルが [0, 1] の範囲にあることを確認します。
Step8: データ増強
tf.keras.layers.RandomFlip や tf.keras.layers.RandomRotation などの Keras 前処理レイヤーをデータ拡張に使用することができます。
前処理レイヤーをいくつか作成し、同じ画像に繰り返して適用してみましょう。
Step9: データ拡張には、tf.keras.layers.RandomContrast、tf.keras.layers.RandomCrop、tf.keras.layers.RandomZoom など、様々な前処理レイヤーを使用できます。
Keras 前処理レイヤーを使用するための 2 つのオプション
これらの前処理レイヤーを使用できる方法には 2 つありませうが、これらには重要なトレードオフが伴います。
オプション 1
Step10: この場合、2 つの重要なポイントがあります。
データ増強はデバイス上で他のレイヤーと同期して実行されるため、GPU アクセラレーションの恩恵を受けることができます。
model.saveを使用してモデルをエクスポートすると、前処理レイヤーはモデルの残りの部分と一緒に保存されます。後でこのモデルをデプロイする場合、画像は自動的に(レイヤーの設定に従い)標準化されます。これにより、サーバーサイドでロジックを再実装する手間が省けます。
注意
Step11: このアプローチでは、Dataset.map を使用して、拡張画像のバッチを生成するデータセットを作成します。この場合は、
データ拡張は CPU 上で非同期に行われ、ノンブロッキングです。以下に示すように、Dataset.prefetch を使用して GPU 上でのモデルのトレーニングをデータの前処理にオーバーラップさせることができます。
この場合、Model.save を呼び出しても、前処理レイヤーはモデルと一緒にエクスポートされません。保存前にモデルに前処理レイヤーをアタッチするか、サーバー側で前処理レイヤーを再実装する必要があります。トレーニングの後、エクスポートする前に前処理レイヤーをアタッチすることができます。
1 番目のオプションの例については、画像分類チュートリアルをご覧ください。次に、2 番目のオプションを見てみましょう。
前処理レイヤーをデータセットに適用する
前に作成した前処理レイヤーを使用して、トレーニング、検証、テスト用のデータセットを構成します。また、パフォーマンス向上のために、並列読み取りとバッファ付きプリフェッチを使用してデータセットを構成し、I/O がブロックされることなくディスクからバッチを生成できるようにします。(データセットのパフォーマンスに関する詳細は、tf.data API によるパフォーマンス向上ガイドをご覧ください。)
注意
Step12: モデルをトレーニングする
完全を期すために、準備したデータセットを使用してモデルをトレーニングします。
Sequential モデルは、それぞれに最大プールレイヤー(tf.keras.layers.MaxPooling2D)を持つ3つの畳み込みブロック(tf.keras.layers.Conv2D)で構成されます。ReLU 活性化関数('relu')により活性化されたユニットが 128 個ある完全に接続されたレイヤー(tf.keras.layers.Dense)があります。このモデルの精度は調整されていません(このチュートリアルの目的は、標準的なアプローチを示すことであるため)。
Step13: tf.keras.optimizers.Adam オプティマイザとtf.keras.losses.SparseCategoricalCrossentropy 損失関数を選択します。各トレーニングエポックのトレーニングと検証の精度を表示するには、Model.compile に metrics 引数を渡します。
Step14: 数エポック、トレーニングします。
Step15: カスタムデータ増強
また、カスタムデータ拡張レイヤーを作成することもできます。
このセクションでは、これを行うための 2 つの方法を説明します。
まず、tf.keras.layers.Lambda レイヤーを作成します。簡潔なコードを書くには良い方法です。
次に、subclassing を介して新しいレイヤーを記述します。こうすることで、さらに制御できるようになります。
どちらのレイヤーも、確率に従って、画像の色をランダムに反転します。
Step16: 次に、サブクラス化してカスタムレイヤーを実装します。
Step17: どちらのレイヤーも、上記 1 と 2 のオプションで説明した使用が可能です。
tf.image を使用する
上記の Keras 前処理ユーティリティは便利ではありますが、より細かい制御には、tf.data や tf.image を使用して独自のデータ拡張パイプラインやレイヤーを書くことができます。(また、<a>TensorFlow Addons 画像
Step18: 作業に必要な画像を取得します。
Step19: 以下の関数を使用して元の画像と拡張画像を並べて視覚化し、比較してみましょう。
Step20: データ増強
画像をフリップする
tf.image.flip_left_right を使って、画像を縦方向または横方向に反転します。
Step21: 画像をグレースケールにする
tf.image.rgb_to_grayscale を使って、画像をグレースケールにできます。
Step22: 画像の彩度を処理する
tf.image.adjust_saturation を使用し、彩度係数を指定して画像の彩度を操作します。
Step23: 画像の明るさを変更する
tf.image.adjust_brightness を使用し、明度係数を指定して画像の明度を変更します。
Step24: 画像を中央でトリミングする
tf.image.central_crop を使用して、画像の中央から希望する部分までをトリミングします。
Step25: 画像を回転させる
tf.image.rot90 を使用して、画像を 90 度回転させます。
Step26: ランダム変換
警告
Step27: 画像のコントラストをランダムに変更する
tf.image.stateless_random_contrast を使用し、コントラスト範囲と seed を指定して、image のコントラストをランダムに変更します。コントラスト範囲は、[lower, upper] の間隔でランダムに選択され、指定された seed に関連付けられます。
Step28: ランダムに画像をトリミングする
tf.image.stateless_random_crop を使用し、ターゲットの size と seed を指定して image をランダムにトリミングします。image から切り取られる部分は、ランダムに選択されたオフセットにあり、指定された seed に関連付けられています。
Step29: データ増強をデータセットに適用する
前に説明したように、Dataset.map を使用してデータセットにデータ拡張を適用します。
Step30: 次に、画像のサイズ変更と再スケーリングのためのユーティリティ関数を定義します。この関数は、データセット内の画像のサイズとスケールを統一するために使用されます。
Step31: また、画像にランダム変換を適用できる augment 関数も定義します。この関数は、次のステップのデータセットで使用されます。
Step32: オプション 1
Step33: augment 関数をトレーニングデータセットにマッピングします。
Step34: オプション 2
Step35: ラッパー関数 f をトレーニングデータセットにマッピングし、resize_and_rescale 関数を検証セットとテストセットにマッピングします。 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
Explanation: Data augmentation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
Overview
This tutorial demonstrates data augmentation: a technique to increase the diversity of your training set by applying random (but realistic) transformations, such as image rotation.
You will learn how to apply data augmentation in two ways:
Use the Keras preprocessing layers, such as tf.keras.layers.Resizing, tf.keras.layers.Rescaling, tf.keras.layers.RandomFlip, and tf.keras.layers.RandomRotation.
Use the tf.image methods, such as tf.image.flip_left_right, tf.image.rgb_to_grayscale, tf.image.adjust_brightness, tf.image.central_crop, and tf.image.stateless_random*.
Setup
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Download a dataset
This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. For other ways of importing data, check out the image loading tutorial.
End of explanation
num_classes = metadata.features['label'].num_classes
print(num_classes)
Explanation: The flowers dataset has five classes.
End of explanation
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
End of explanation
IMG_SIZE = 180
resize_and_rescale = tf.keras.Sequential([
layers.Resizing(IMG_SIZE, IMG_SIZE),
layers.Rescaling(1./255)
])
Explanation: Use Keras preprocessing layers
Resizing and rescaling
You can use the Keras preprocessing layers to resize your images to a consistent shape (with tf.keras.layers.Resizing) and to rescale pixel values (with tf.keras.layers.Rescaling).
End of explanation
result = resize_and_rescale(image)
_ = plt.imshow(result)
Explanation: Note: The rescaling layer above standardizes pixel values to the [0, 1] range. If instead you wanted [-1, 1], you would write tf.keras.layers.Rescaling(1./127.5, offset=-1).
You can visualize the result of applying these layers to an image like this.
End of explanation
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
Explanation: Verify that the pixels are in the [0, 1] range.
End of explanation
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.2),
])
# Add the image to a batch.
image = tf.cast(tf.expand_dims(image, 0), tf.float32)
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = data_augmentation(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0])
plt.axis("off")
Explanation: Data augmentation
You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation.
Let's create a few preprocessing layers and apply them repeatedly to the same image.
End of explanation
model = tf.keras.Sequential([
# Add the preprocessing layers you created earlier.
resize_and_rescale,
data_augmentation,
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
# Rest of your model.
])
Explanation: There is a variety of preprocessing layers you can use for data augmentation, including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, tf.keras.layers.RandomZoom, and others.
Two options to use the Keras preprocessing layers
There are two ways you can use these preprocessing layers, with important trade-offs.
Option 1: Make the preprocessing layers part of your model
End of explanation
aug_ds = train_ds.map(
lambda x, y: (resize_and_rescale(x, training=True), y))
Explanation: There are two important points to be aware of in this case.
Data augmentation runs on-device, synchronously with the rest of your layers, and benefits from GPU acceleration.
When you export your model using model.save, the preprocessing layers are saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers), which saves you from having to reimplement that logic server-side.
Note: Data augmentation is inactive at test time, so the input images are only augmented during calls to Model.fit (not Model.evaluate or Model.predict).
Option 2: Apply the preprocessing layers to your dataset
End of explanation
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE
def prepare(ds, shuffle=False, augment=False):
# Resize and rescale all datasets.
ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1000)
# Batch all datasets.
ds = ds.batch(batch_size)
# Use data augmentation only on the training set.
if augment:
ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
num_parallel_calls=AUTOTUNE)
# Use buffered prefetching on all datasets.
return ds.prefetch(buffer_size=AUTOTUNE)
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
Explanation: With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case,
Data augmentation happens asynchronously on the CPU and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing using Dataset.prefetch, as shown below.
In this case the preprocessing layers are not exported with the model when you call Model.save. You will need to attach them to your model before saving it, or reimplement them server-side. After training, you can attach the preprocessing layers before export.
You can find an example of the first option in the image classification tutorial. Next, let's look at the second option.
Apply the preprocessing layers to the datasets
Configure the training, validation, and test datasets with the preprocessing layers you created earlier. Also configure the datasets for performance, using parallel reads and buffered prefetching to yield batches from disk without I/O becoming blocking. (Learn more about dataset performance in the Better performance with the tf.data API guide.)
Note: Data augmentation should only be applied to the training set.
End of explanation
model = tf.keras.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
Explanation: Train a model
For completeness, you will now train a model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D), each with a max pooling layer (tf.keras.layers.MaxPooling2D), and a fully-connected layer (tf.keras.layers.Dense) with 128 units activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal of this tutorial is to show a standard approach).
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Choose the tf.keras.optimizers.Adam optimizer and the tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
End of explanation
epochs=5
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
Explanation: Train for a few epochs.
End of explanation
def random_invert_img(x, p=0.5):
if tf.random.uniform([]) < p:
x = (255-x)
else:
x
return x
def random_invert(factor=0.5):
return layers.Lambda(lambda x: random_invert_img(x, factor))
random_invert = random_invert()
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = random_invert(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0].numpy().astype("uint8"))
plt.axis("off")
Explanation: Custom data augmentation
You can also create custom data augmentation layers.
This section shows two ways of doing so.
First, you will create a tf.keras.layers.Lambda layer. This is a good way to write concise code.
Next, you will write a new layer via subclassing, which gives you more control.
Both layers randomly invert the colors in an image, according to some probability.
End of explanation
class RandomInvert(layers.Layer):
def __init__(self, factor=0.5, **kwargs):
super().__init__(**kwargs)
self.factor = factor
def call(self, x):
return random_invert_img(x)
_ = plt.imshow(RandomInvert()(image)[0])
Explanation: Next, implement a custom layer by subclassing.
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Both of these layers can be used as described in options 1 and 2 above.
Using tf.image
The Keras preprocessing utilities above are convenient, but for finer control you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out <a>TensorFlow Addons Image: Operations</a> and TensorFlow I/O: Color Space Conversions.)
Since the flowers dataset was previously configured with data augmentation, let's reimport it to start fresh.
End of explanation
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Retrieve an image to work with.
End of explanation
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
Explanation: Let's use the following function to visualize and compare the original and augmented images side by side.
End of explanation
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
Explanation: Data augmentation
Flip an image
Flip an image either vertically or horizontally with tf.image.flip_left_right.
End of explanation
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
Explanation: Grayscale an image
You can grayscale an image with tf.image.rgb_to_grayscale.
End of explanation
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
Explanation: Saturate an image
Saturate an image with tf.image.adjust_saturation by providing a saturation factor.
End of explanation
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
Explanation: Change image brightness
Change the brightness of an image with tf.image.adjust_brightness by providing a brightness factor.
End of explanation
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
Explanation: Center crop an image
Crop the image from the center up to the portion you want with tf.image.central_crop.
End of explanation
rotated = tf.image.rot90(image)
visualize(image, rotated)
Explanation: Rotate an image
Rotate an image by 90 degrees with tf.image.rot90.
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_brightness = tf.image.stateless_random_brightness(
image, max_delta=0.95, seed=seed)
visualize(image, stateless_random_brightness)
Explanation: Random transformations
Warning: There are two sets of random image operations: tf.image.random* and tf.image.stateless_random*. Using the tf.image.random* operations is strongly discouraged, as they rely on the old RNGs from TF 1.x. Instead, use the random image operations introduced in this tutorial. For more information, refer to Random number generation.
Applying random transformations to the images can further help generalize and expand the dataset. The current tf.image API provides eight such random image operations (ops):
tf.image.stateless_random_brightness
tf.image.stateless_random_contrast
tf.image.stateless_random_crop
tf.image.stateless_random_flip_left_right
tf.image.stateless_random_flip_up_down
tf.image.stateless_random_hue
tf.image.stateless_random_jpeg_quality
tf.image.stateless_random_saturation
These random image ops are purely functional: the output only depends on the input. This makes them simple to use in high-performance, deterministic input pipelines. They require a seed value to be supplied each step. Given the same seed, they return the same result regardless of how many times they are called.
Note: seed is a Tensor of shape (2,) whose values are any integers.
In the following sections, you will:
Go over examples of using random image operations to transform an image.
Demonstrate how to apply random transformations to a training dataset.
Randomly change image brightness
Randomly change the brightness of image using tf.image.stateless_random_brightness by providing a brightness factor and seed. The brightness factor is chosen randomly in the range [-max_delta, max_delta) and is associated with the given seed.
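Because these ops are stateless, reusing a seed reproduces exactly the same transformation — a small illustrative check:
seed = (1, 2)
a = tf.image.stateless_random_brightness(image, max_delta=0.95, seed=seed)
b = tf.image.stateless_random_brightness(image, max_delta=0.95, seed=seed)
print(tf.reduce_all(a == b).numpy())  # True: identical output for an identical seed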
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_contrast = tf.image.stateless_random_contrast(
image, lower=0.1, upper=0.9, seed=seed)
visualize(image, stateless_random_contrast)
Explanation: Randomly change image contrast
Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_crop = tf.image.stateless_random_crop(
image, size=[210, 300, 3], seed=seed)
visualize(image, stateless_random_crop)
Explanation: Randomly crop an image
Randomly crop image using tf.image.stateless_random_crop by providing the target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
End of explanation
(train_datasets, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Apply augmentation to a dataset
As previously shown, apply data augmentation to a dataset using Dataset.map.
End of explanation
def resize_and_rescale(image, label):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = (image / 255.0)
return image, label
Explanation: Next, define a utility function for resizing and rescaling the images. It is used to unify the size and scale of images in the dataset.
End of explanation
def augment(image_label, seed):
image, label = image_label
image, label = resize_and_rescale(image, label)
image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
# Make a new seed.
new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :]
# Random crop back to the original size.
image = tf.image.stateless_random_crop(
image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed)
# Random brightness.
image = tf.image.stateless_random_brightness(
image, max_delta=0.5, seed=new_seed)
image = tf.clip_by_value(image, 0, 1)
return image, label
Explanation: Also define an augment function that can apply the random transformations to the images. It is used on the dataset in the next step.
End of explanation
# Create a `Counter` object and `Dataset.zip` it together with the training set.
counter = tf.data.experimental.Counter()
train_ds = tf.data.Dataset.zip((train_datasets, (counter, counter)))
Explanation: Option 1: Using tf.data.experimental.Counter
Create a tf.data.experimental.Counter object (let's call it counter) and Dataset.zip the dataset with (counter, counter). This ensures that each image in the dataset gets associated with a unique value of shape (2,) based on counter, which later can be passed into the augment function as the seed value for the random transformations.
End of explanation
train_ds = (
train_ds
.shuffle(1000)
.map(augment, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
Explanation: Map the augment function to the training dataset.
End of explanation
# Create a generator.
rng = tf.random.Generator.from_seed(123, alg='philox')
# Create a wrapper function for updating seeds.
def f(x, y):
seed = rng.make_seeds(2)[0]
image, label = augment((x, y), seed)
return image, label
Explanation: Option 2: Using tf.random.Generator
Create a tf.random.Generator object with an initial seed value. Calling the make_seeds function on the same generator object always returns a new, unique seed value.
Define a wrapper function that 1) calls the make_seeds function, and 2) passes the newly generated seed value into the augment function for the random transformations.
Note: tf.random.Generator objects store their RNG state in a tf.Variable, which means it can be saved as a checkpoint or in a SavedModel. For more details, refer to Random number generation.
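A small check of that behaviour (reusing the rng defined above): successive calls advance the generator state, so each call hands back fresh seed values:
s1 = rng.make_seeds(2)[0]
s2 = rng.make_seeds(2)[0]
print(s1.numpy(), s2.numpy())  # different values on each call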
End of explanation
train_ds = (
train_datasets
.shuffle(1000)
.map(f, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
Explanation: Map the wrapper function f to the training dataset, and the resize_and_rescale function to the validation and test sets.
End of explanation
<END_TASK> |
167 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Get started with TensorBoard
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Using the MNIST dataset as the example, normalize the data and write a function that creates a simple Keras model for classifying the images into 10 classes.
Step3: Using TensorBoard with Keras Model.fit()
When training with Keras's Model.fit(), adding the tf.keras.callbacks.TensorBoard callback ensures that logs are created and stored. Additionally, enable histogram computation every epoch with histogram_freq=1 (this is off by default)
Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
Step4: Start TensorBoard through the command line or within a notebook experience. The two interfaces are generally the same. In notebooks, use the %tensorboard line magic. On the command line, run the same command without "%".
Step5: <!-- <img class="tfo-display-only-on-site" src="https
Step6: The training code follows the advanced quickstart tutorial, but shows how to log metrics to TensorBoard. Choose loss and optimizer
Step7: Create stateful metrics that can be used to accumulate values during training and logged at any point
Step8: Define the training and test functions
Step9: Set up summary writers to write the summaries to disk in a different logs directory
Step10: Start training. Use tf.summary.scalar() to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other tf.summary functions enable logging other types of data.
Step11: Open TensorBoard again, this time pointing it at the new log directory. We could have also started TensorBoard to monitor training while it progresses.
Step12: <!-- <img class="tfo-display-only-on-site" src="https | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
import datetime
# Clear any logs from previous runs
!rm -rf ./logs/
Explanation: Get started with TensorBoard
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tensorboard/get_started"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/get_started.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/get_started.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorboard/docs/get_started.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In machine learning, to improve something you often need to be able to measure it. TensorBoard is a tool for providing the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics like loss and accuracy, visualizing the model graph, projecting embeddings to a lower dimensional space, and much more.
This quickstart will show how to quickly get started with TensorBoard. The remaining guides in this website provide more details on specific capabilities, many of which are not included here.
End of explanation
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
Explanation: Using the MNIST dataset as the example, normalize the data and write a function that creates a simple Keras model for classifying the images into 10 classes.
End of explanation
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
Explanation: Using TensorBoard with Keras Model.fit()
When training with Keras's Model.fit(), adding the tf.keras.callbacks.TensorBoard callback ensures that logs are created and stored. Additionally, enable histogram computation every epoch with histogram_freq=1 (this is off by default)
Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
End of explanation
%tensorboard --logdir logs/fit
Explanation: Start TensorBoard through the command line or within a notebook experience. The two interfaces are generally the same. In notebooks, use the %tensorboard line magic. On the command line, run the same command without "%".
End of explanation
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(60000).batch(64)
test_dataset = test_dataset.batch(64)
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/quickstart_model_fit.png?raw=1"/> -->
A brief overview of the dashboards shown (tabs in top navigation bar):
The Scalars dashboard shows how the loss and metrics change with every epoch. You can use it to also track training speed, learning rate, and other scalar values.
The Graphs dashboard helps you visualize your model. In this case, the Keras graph of layers is shown which can help you ensure it is built correctly.
The Distributions and Histograms dashboards show the distribution of a Tensor over time. This can be useful to visualize weights and biases and verify that they are changing in an expected way.
Additional TensorBoard plugins are automatically enabled when you log other types of data. For example, the Keras TensorBoard callback lets you log images and embeddings as well. You can see what other plugins are available in TensorBoard by clicking on the "inactive" dropdown towards the top right.
Using TensorBoard with other methods
When training with methods such as tf.GradientTape(), use tf.summary to log the required information.
Use the same dataset as above, but convert it to tf.data.Dataset to take advantage of batching capabilities:
End of explanation
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
Explanation: The training code follows the advanced quickstart tutorial, but shows how to log metrics to TensorBoard. Choose loss and optimizer:
End of explanation
# Define our metrics
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
Explanation: Create stateful metrics that can be used to accumulate values during training and logged at any point:
End of explanation
def train_step(model, optimizer, x_train, y_train):
with tf.GradientTape() as tape:
predictions = model(x_train, training=True)
loss = loss_object(y_train, predictions)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_loss(loss)
train_accuracy(y_train, predictions)
def test_step(model, x_test, y_test):
predictions = model(x_test)
loss = loss_object(y_test, predictions)
test_loss(loss)
test_accuracy(y_test, predictions)
Explanation: Define the training and test functions:
End of explanation
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/gradient_tape/' + current_time + '/train'
test_log_dir = 'logs/gradient_tape/' + current_time + '/test'
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
test_summary_writer = tf.summary.create_file_writer(test_log_dir)
Explanation: Set up summary writers to write the summaries to disk in a different logs directory:
End of explanation
model = create_model() # reset our model
EPOCHS = 5
for epoch in range(EPOCHS):
for (x_train, y_train) in train_dataset:
train_step(model, optimizer, x_train, y_train)
with train_summary_writer.as_default():
tf.summary.scalar('loss', train_loss.result(), step=epoch)
tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)
for (x_test, y_test) in test_dataset:
test_step(model, x_test, y_test)
with test_summary_writer.as_default():
tf.summary.scalar('loss', test_loss.result(), step=epoch)
tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print (template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset metrics every epoch
train_loss.reset_states()
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
Explanation: Start training. Use tf.summary.scalar() to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other tf.summary functions enable logging other types of data.
End of explanation
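# Additional illustration (not in the original guide): other tf.summary ops can be
# logged inside the same writer scope. For example, a histogram of the first Dense
# layer's kernel after training:
with train_summary_writer.as_default():
    tf.summary.histogram('dense_kernel', model.trainable_variables[0], step=EPOCHS)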
%tensorboard --logdir logs/gradient_tape
Explanation: Open TensorBoard again, this time pointing it at the new log directory. We could have also started TensorBoard to monitor training while it progresses.
End of explanation
!tensorboard dev upload \
--logdir logs/fit \
--name "(optional) My latest experiment" \
--description "(optional) Simple comparison of several hyperparameters" \
--one_shot
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/quickstart_gradient_tape.png?raw=1"/> -->
That's it! You have now seen how to use TensorBoard both through the Keras callback and through tf.summary for more custom scenarios.
TensorBoard.dev: Host and share your ML experiment results
TensorBoard.dev is a free public service that enables you to upload your TensorBoard logs and get a permalink that can be shared with everyone in academic papers, blog posts, social media, etc. This can enable better reproducibility and collaboration.
To use TensorBoard.dev, run the following command:
End of explanation
<END_TASK> |
168 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Compare the different ICA algorithms in MNE
Different ICA algorithms are fit to raw MEG data, and the corresponding maps
are displayed.
Step1: Read and preprocess the data. Preprocessing consists of
Step2: Define a function that runs ICA on the raw MEG data and plots the components
Step3: FastICA
Step4: Picard
Step5: Infomax
Step6: Extended Infomax | <ASSISTANT_TASK:>
Python Code:
# Authors: Pierre Ablin <pierreablin@gmail.com>
#
# License: BSD (3-clause)
from time import time
import mne
from mne.preprocessing import ICA
from mne.datasets import sample
print(__doc__)
Explanation: Compare the different ICA algorithms in MNE
Different ICA algorithms are fit to raw MEG data, and the corresponding maps
are displayed.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True)
reject = dict(mag=5e-12, grad=4000e-13)
raw.filter(1, 30, fir_design='firwin')
Explanation: Read and preprocess the data. Preprocessing consists of:
MEG channel selection
1-30 Hz band-pass filter
End of explanation
def run_ica(method, fit_params=None):
ica = ICA(n_components=20, method=method, fit_params=fit_params,
random_state=0)
t0 = time()
ica.fit(raw, picks=picks, reject=reject)
fit_time = time() - t0
title = ('ICA decomposition using %s (took %.1fs)' % (method, fit_time))
ica.plot_components(title=title)
Explanation: Define a function that runs ICA on the raw MEG data and plots the components
End of explanation
run_ica('fastica')
Explanation: FastICA
End of explanation
run_ica('picard')
Explanation: Picard
End of explanation
run_ica('infomax')
Explanation: Infomax
End of explanation
run_ica('infomax', fit_params=dict(extended=True))
Explanation: Extended Infomax
End of explanation
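# Hypothetical extra run (not part of the original example): fit_params is forwarded
# to the underlying estimator, so solver options such as FastICA's max_iter can be
# adjusted the same way extended infomax was configured above.
run_ica('fastica', fit_params=dict(max_iter=500))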
<END_TASK> |
169 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Converting $\LaTeX$ to <span style="font-variant
Step1: Now data contains the text that is stored in this file.
Step2: Let us look at the output file example.pdf that is produced if we run $\LaTeX$ on this file.
If you are nor running <span style="font-variant
Step3: Next, we open the file example.html. The scanner we are going to implement has to write its output into this file.
Step4: <hr style="height
Step5: The function end_html writes the closing </body> and </html> tags.
Step6: The function start_math_block starts a math block. This is useful for formulas enclosed in $$. This type of formulas is displayed in a line by itself.
Step7: The function start_math_inline starts an <em style="color
Step8: The function start_math_block ends a math block.
Step9: The functions start_sum and end_sum write code to display formulas involving sums. For example, to display the expression
$$ \sum\limits_{i=1}^n i^2 $$
we can use the following MathML
Step10: The functions start_sqrt and end_sqrt write code to display formulas involving square roots. For example, to display the expression
$$ \sqrt{a^2 + b^2} $$
we can use the following MathML
Step11: In order to write exponents we have to use the tag <msup>. For example, the expression $a^2$
is equivalent to the following markup
Step12: In order to write fractions we have to use the tag <mfrac>. For example, the expression $\frac{1}{6}$
is equivalent to the following markup
Step13: Arguments of functions like the square root or exponents have to be enclosed in pairs of <mrow> and </mrow> tags.
Step14: Variable names should be enclosed in pairs of <mi> and </mi> tags. For example, the variable $x$ is displayed by the following MathML
Step15: Numbers should be enclosed in pairs of <mn> and </mn> tags. For example, the number $6$ is displayed by the following MathML
Step16: The symbol $\cdot$ is created by the following MathML
Step17: Mathematical operators should be enclosed in pairs of <mo> and </mo> tags. For example, the operator $+$ is displayed by the following MathML
Step18: The symbol $\pi$ is created by the following MathML
Step19: The symbol $\leq$ is created by the following MathML
Step20: The symbol $\geq$ is created by the following MathML
Step21: The function write_any writes a single character unadorned to the output file.
Step22: We will be use the library ply to translate $\LaTeX$ into
<span style="font-variant
Step23: We have to declare all tokens below. We will need tokens for the following parts of the $\LaTeX$ file
Step24: When we see a closing brace } things get difficult. The reason is that we need to know what type of formula is being closed. Is it a square root, the subscript of a sum, the superscript of a sum, some part of a fraction, or an exponent. My idea is to use a stack that is attached to the lexer, i.e. we have a variable lexer.stack that stores this information. We need a stack since formulas might be nested.
Furthermore, the scanner has two different states. Either we are inside a formula, i.e. inside something that is enclosed in dollar symbols, or we are inside text that needs to be echoed unchanged to the output file.
Step25: ... lots of token definitions here ...
Step26: The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
Step27: The line below generates the scanner.
Step28: Next, we feed our input string into the generated scanner.
Step29: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner. | <ASSISTANT_TASK:>
Python Code:
with open('example.tex') as f:
data = f.read()
Explanation: Converting $\LaTeX$ to <span style="font-variant:small-caps;">Html</span>
The purpose of the following exercise is to implement a translator from $\LaTeX$ to
MathML. $\LaTeX$ is a document markup language
that is especially well suited to present text that contains mathematical formulas. MathML is the part of <span style="font-variant:small-caps;">Html</span> that deals with the representation of mathematical formulas. As $\LaTeX$ provides a very rich
document markup language and we can only afford to spend a few hours on this exercise, we confine
ourselves to a small subset of $\LaTeX$. The file example.tex contains some $\LaTeX$. The goal of this exercise is to implement a translator that is able to transform this file into MathML.
We start with reading the file.
End of explanation
print(data)
Explanation: Now data contains the text that is stored in this file.
End of explanation
!open example.pdf
Explanation: Let us look at the output file example.pdf that is produced if we run $\LaTeX$ on this file.
If you are nor running <span style="font-variant:small-caps;">MacOS</span> you have to replace the command
open with the name of an executable that can open a .pdf-file.
End of explanation
outfile = open('example.html', 'w')
Explanation: Next, we open the file example.html. The scanner we are going to implement has to write its output into this file.
End of explanation
def start_html():
outfile.write('<!doctype html>\n')
outfile.write('<html>\n')
outfile.write('<head>\n')
outfile.write('<script type="text/javascript" ')
outfile.write('src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">\n')
outfile.write('</script>\n')
outfile.write('<body>\n\n')
Explanation: <hr style="height:4px;background-color:blue">
Below are some predefined functions that you can use to create the <span style="font-variant:small-caps;">Html</span> file.
<hr style="height:4px;background-color:blue">
The function start_html writes the header of the <span style="font-variant:small-caps;">Html</span> file
and the opening <body> tag to the file opened above.
End of explanation
def end_html():
outfile.write('</body>\n')
outfile.write('</html>\n')
Explanation: The function end_html writes the closing </body> and </html> tags.
End of explanation
def start_math_block():
outfile.write('<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">\n')
Explanation: The function start_math_block starts a math block. This is useful for formulas enclosed in $$. This type of formulas is displayed in a line by itself.
End of explanation
def start_math_inline():
outfile.write('<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline">\n')
Explanation: The function start_math_inline starts an <em style="color:blue">inline formula</em>, i.e. a formula enclosed in $. Formulas of this type are part of the surrounding text.
End of explanation
def end_math():
outfile.write('</math>\n')
Explanation: The function start_math_block ends a math block.
End of explanation
def start_sum():
outfile.write('<munderover>\n')
outfile.write('<mo>∑</mo>\n')
def end_sum():
outfile.write('</munderover>\n')
Explanation: The functions start_sum and end_sum write code to display formulas involving sums. For example, to display the expression
$$ \sum\limits_{i=1}^n i^2 $$
we can use the following MathML:
```
<munderover>
<mo>∑</mo>
<mrow>
<mi>i</mi>
<mo>=</mo>
<mn>1</mn>
</mrow>
<mrow>
<mi>n</mi>
</mrow>
</munderover>
<msup>
<mi>i</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
```
End of explanation
def start_sqrt():
outfile.write('<msqrt>\n')
def end_sqrt():
outfile.write('</msqrt>\n')
Explanation: The functions start_sqrt and end_sqrt write code to display formulas involving square roots. For example, to display the expression
$$ \sqrt{a^2 + b^2} $$
we can use the following MathML:
<msqrt>
<mrow>
<msup>
<mi>a</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
<mo>+</mo>
<msup>
<mi>b</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
</mrow>
</msqrt>
End of explanation
def start_super():
outfile.write('<msup>\n')
def end_super():
outfile.write('</msup>\n')
Explanation: In order to write exponents we have to use the tag <msup>. For example, the expression $a^2$
is equivalent to the following markup:
<msup>
<mi>a</mi>
<mrow>
<mn>2</mn>
</mrow>
</msup>
Note that the exponent is enclosed in <mrow> </mrow> tags.
End of explanation
def start_fraction():
outfile.write('<mfrac>\n')
def end_fraction():
outfile.write('</mfrac>\n')
Explanation: In order to write fractions we have to use the tag <mfrac>. For example, the expression $\frac{1}{6}$
is equivalent to the following markup:
<mfrac>
<mrow>
<mn>1</mn>
</mrow>
<mrow>
<mn>6</mn>
</mrow>
</mfrac>
Note that both nominator and denominator are enclosed in <mrow> </mrow> tags.
End of explanation
def start_row():
outfile.write('<mrow>\n')
def end_row():
outfile.write('</mrow>\n')
Explanation: Arguments of functions like the square root or exponents have to be enclosed in pairs of <mrow> and </mrow> tags.
End of explanation
def write_var(v):
outfile.write('<mi>' + v + '</mi>\n')
Explanation: Variable names should be enclosed in pairs of <mi> and </mi> tags. For example, the variable $x$ is displayed by the following MathML:
<mi>x</mi>
End of explanation
def write_number(n):
outfile.write('<mn>' + n + '</mn>\n')
Explanation: Numbers should be enclosed in pairs of <mn> and </mn> tags. For example, the number $6$ is displayed by the following MathML:
<mn>6</mn>
End of explanation
def write_times():
outfile.write('<mo>⋅</mo>\n')
Explanation: The symbol $\cdot$ is created by the following MathML:
<mo>&sdot;</mo>
End of explanation
def write_operator(op):
outfile.write('<mo>' + op + '</mo>\n')
Explanation: Mathematical operators should be enclosed in pairs of <mo> and </mo> tags. For example, the operator $+$ is displayed by the following MathML:
<mo>+</mo>
End of explanation
def write_pi():
outfile.write('<mn>π</mn>\n')
Explanation: The symbol $\pi$ is created by the following MathML:
<mn>&pi;</mn>
End of explanation
def write_leq():
outfile.write('<mo>≤</mo>\n')
Explanation: The symbol $\leq$ is created by the following MathML:
<mo>&le;</mo>
End of explanation
def write_geq():
outfile.write('<mo>≥</mo>\n')
Explanation: The symbol $\geq$ is created by the following MathML:
<mo>&ge;</mo>
End of explanation
def write_any(char):
outfile.write(char)
Explanation: The function write_any writes a single character unadorned to the output file.
End of explanation
import ply.lex as lex
Explanation: We will be use the library ply to translate $\LaTeX$ into
<span style="font-variant:small-caps;">MathML</span>.
We only use the scanner that is provided by the module ply.lex.
Hence we import the module ply.lex that contains the scanner generator from ply.
End of explanation
tokens = [ 'HEAD', # r'\documentclass\{article\}'
'BEGIN_DOCUMENT', # r'\begin\{document\}'
'your token definitions here, lots of them'
]
Explanation: We have to declare all tokens below. We will need tokens for the following parts of the $\LaTeX$ file:
- The $\LaTeX$ file starts with the string \documentclass{article}.
- Next, there is the string \begin{document} that starts the content.
- The string \end{document} ends the content.
- The string $$ starts and ends a formula that is displayed on a line by itself.
- The string $ starts and ends a formula that is displayed as part of the text.
- The string \sum\limits_\{ starts the definition of a sum.
- The string \sqrt\{ starts the definition of a square root.
- The string \frac\{ starts the definition of a fraction.
- A variable taken to a power starts something like a^{.
- $\vdots$
End of explanation
states = [ ('formula', 'exclusive') ]
def t_HEAD(t):
r'\\documentclass\{article\}'
pass
def t_BEGIN_DOCUMENT(t):
r'\\begin\{document\}'
start_html()
def t_END_DOCUMENT(t):
r'\\end\{document\}'
end_html()
def t_DOLLAR_DOLLAR(t):
r'\$\$'
t.lexer.begin('formula')
t.lexer.stack = []
t.lexer.stack.append('INITIAL')
start_math_block()
def t_DOLLAR(t):
"your code here"
def t_ANY(t):
r'.|\n'
write_any(t.value)
def t_formula_DOLLAR_DOLLAR(t):
r'\$\$'
t.lexer.begin('INITIAL')
end_math()
def t_formula_DOLLAR(t):
"your code here"
Explanation: When we see a closing brace } things get difficult. The reason is that we need to know what type of formula is being closed. Is it a square root, the subscript of a sum, the superscript of a sum, some part of a fraction, or an exponent. My idea is to use a stack that is attached to the lexer, i.e. we have a variable lexer.stack that stores this information. We need a stack since formulas might be nested.
Furthermore, the scanner has two different states. Either we are inside a formula, i.e. inside something that is enclosed in dollar symbols, or we are inside text that needs to be echoed unchanged to the output file.
End of explanation
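# One possible way to fill in the "your code here" placeholders above -- a minimal,
# hedged sketch rather than the official solution. The extra token names used here
# (SQRT, VAR_POWER, RBRACE) are assumptions and would also need to appear in the
# tokens list; ply matches function-defined tokens in the order they are defined,
# so VAR_POWER must come before any plain variable token.
def t_DOLLAR(t):
    r'\$'
    t.lexer.begin('formula')
    t.lexer.stack = ['INITIAL']
    start_math_inline()

def t_formula_DOLLAR(t):
    r'\$'
    t.lexer.begin('INITIAL')
    end_math()

def t_formula_SQRT(t):
    r'\\sqrt\{'
    t.lexer.stack.append('sqrt')      # remember what the matching } closes
    start_sqrt()
    start_row()

def t_formula_VAR_POWER(t):
    r'[a-zA-Z]\^\{'
    t.lexer.stack.append('super')
    start_super()
    write_var(t.value[0])             # the base variable, e.g. the 'a' in a^{2}
    start_row()

def t_formula_RBRACE(t):
    r'\}'
    top = t.lexer.stack.pop()         # the stack tells us which construct just ended
    if top == 'sqrt':
        end_row()
        end_sqrt()
    elif top == 'super':
        end_row()
        end_super()
    # sums and fractions would push and pop their own markers in the same way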
def t_formula_error(t):
print(f"Illegal character in state 'formula': '{t.value[0]}'")
t.lexer.skip(1)
Explanation: ... lots of token definitions here ...
End of explanation
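# Continuing the sketch: a few of the simpler formula-state tokens that the
# "... lots of token definitions here ..." placeholder alludes to. These names are
# assumptions as well (they would also need to be declared in the tokens list);
# anything unmatched still falls through to the error handler above.
def t_formula_PI(t):
    r'\\pi'
    write_pi()

def t_formula_CDOT(t):
    r'\\cdot'
    write_times()

def t_formula_NUMBER(t):
    r'[0-9]+'
    write_number(t.value)

def t_formula_VAR(t):
    r'[a-zA-Z]'
    write_var(t.value)

def t_formula_OPERATOR(t):
    r'[+\-*/=<>()]'
    write_operator(t.value)

def t_formula_WS(t):
    r'\s+'
    pass                              # whitespace inside formulas carries no meaning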
__file__ = 'main'
Explanation: The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
End of explanation
lexer = lex.lex(debug=True)
Explanation: The line below generates the scanner.
End of explanation
lexer.input(data)
Explanation: Next, we feed our input string into the generated scanner.
End of explanation
def scan(lexer):
for t in lexer:
pass
scan(lexer)
outfile.close()
!open 'example.html'
Explanation: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner.
End of explanation
<END_TASK> |
170 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Moving through a physical experience
Some examples of IoT applications that ping a digital twin
This doc does not really teach anything new, but it will give us some exercise with the previous examples. We will need a more complicated model in future steps.
* Adding more connections.
* Making connections using ids.
This is the SDK repo on Github
Here is the doc on the query language
Step1: Note the query object loves to drop values. To keep from making multiple queries, save the data somewhere.
Step3: and a df of the tickets
Step4: Let's sell a couple more of those tickets.
Step5: The purpose is to create a simulation. So I'm just hacking something out real quick to make our model look a little fuller.
Step6: You should be able to load the storage explorer at this point and look at the relationships.
I'm going to add some areas.
* People go into areas when the concert begins.
* areas have capacity
Also adding lines
Step7: Ok now that we know what tickets we want to sell, let's push them to the digital twin ecosystem. This is exactly the same as what we did with Customers in step one.
Step8: Now establishing a realtionship between the lines and the areas. If a person leaves one area they have to get into the line of the next area in order to get there.
Step9: Now I'm going to add some people into different parts of the experience. This will change from user to user, but you can use this as an example of how this would work on your twin. Use the queries above to get the specific ids of your twins.
Step10: Sell a couple more tickets | <ASSISTANT_TASK:>
Python Code:
from azure.identity import AzureCliCredential
from azure.digitaltwins.core import DigitalTwinsClient
# using yaml instead of json
import yaml
import uuid
# using altair instead of matplotlib for visuals
import numpy as np
import pandas as pd
# you will get this from the ADT resource at portal.azure.com
your_digital_twin_url = "home-test-twin.api.wcus.digitaltwins.azure.net"
azure_cli = AzureCliCredential()
service_client = DigitalTwinsClient(
your_digital_twin_url, azure_cli)
service_client
query_expression = "SELECT * FROM digitaltwins t where IS_OF_MODEL('dtmi:billmanh:patron;1')"
query_result = service_client.query_twins(query_expression)
Explanation: Moving through a physical experience
Some examples of IoT applications that ping a digital twin
This doc does not really teach anything new, but it will give us some exercise with the previous examples. We will need a more complicated model in future steps.
* Adding more connections.
* Making connections using ids.
This is the SDK repo on Github
Here is the doc on the query language
End of explanation
values = []
for i in query_result:
values.append(i)
df_customers = pd.DataFrame([[i['$dtId'],i['satisfaction']] for i in values],
columns=['id','satisfaction'])
df_customers
Explanation: Note the query object loves to drop values. To keep from making multiple queries, save the data somewhere.
End of explanation
query_expression = f"""
SELECT * FROM digitaltwins t where IS_OF_MODEL('dtmi:mymodels:ticket;1') and t.state='open'
"""
query_result = service_client.query_twins(query_expression)
available_tickets_df = pd.DataFrame([[i['$dtId'],i['event_title'],i['state']] for i in query_result],
columns = ['$dtId','event_title','state'])
available_tickets_df
Explanation: and a df of the tickets
End of explanation
tickets_sold = available_tickets_df.drop_duplicates(subset='event_title')['$dtId'].tolist()
tickets_sold
customers_sold = df_customers['id'].tolist()[5:7]
customers_sold
Explanation: Let's sell a couple more of those tickets.
End of explanation
for c,t in enumerate(tickets_sold):
print(t)
tickethoder_relationship = {
"$relationshipId": f"{t}ownedBy{customers_sold[c]}",
"$sourceId": t,
"$relationshipName": "ownedBy",
"$targetId": customers_sold[c],
"bought_online": False
}
service_client.upsert_relationship(
tickethoder_relationship["$sourceId"],
tickethoder_relationship["$relationshipId"],
tickethoder_relationship
)
patch = [
{
"op": "replace",
"path": "",
"value": "sold"
}
]
service_client.update_component(t,"state", patch)
Explanation: The purpose is to create a simulation. So I'm just hacking something out real quick to make our model look a little fuller.
End of explanation
# # # Delete the model that you don't want.
# # service_client.delete_model(patron_model_id)
# # Create it if you just deleted it.
# # area_model_json = yaml.safe_load(open("models/area.json"))
# line_model_json = yaml.safe_load(open("models/Patron.json"))
# service_client.create_models([line_model_json])
Explanation: You should be able to load the storage explorer at this point and look at the relationships.
I'm going to add some areas.
* People go into areas when the concert begins.
* areas have capacity
Also adding lines:
* Lines lead to other places, but one line can lead to many places (like the entrance).
* Lines have a capacity
End of explanation
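# For reference, a hedged sketch of what a minimal DTDL interface for the area model
# could look like. The real models/area.json loaded by create_models is not shown in
# this notebook, so treat this as an illustration of the shape only.
area_model_sketch = {
    "@id": "dtmi:mymodels:area;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "area",
    "contents": [
        {"@type": "Property", "name": "capacity", "schema": "integer"},
        {"@type": "Property", "name": "status", "schema": "string"},
    ],
}
# service_client.create_models([area_model_sketch])   # would register it, like the commented cell above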
for r in range(4):
digital_twin_id = f'area-{r}'
print(digital_twin_id)
dt_json = {
"$metadata": {
"$model": "dtmi:mymodels:area;1"
},
"capacity": 3,
"status": "open"
}
created_twin = service_client.upsert_digital_twin(digital_twin_id, dt_json)
print(created_twin)
for r in range(4):
digital_twin_id = f'line-{r}'
print(digital_twin_id)
dt_json = {
"$metadata": {
"$model": "dtmi:mymodels:line;1"
}
}
created_twin = service_client.upsert_digital_twin(digital_twin_id, dt_json)
print(created_twin)
Explanation: Ok now that we know what tickets we want to sell, let's push them to the digital twin ecosystem. This is exactly the same as what we did with Customers in step one.
End of explanation
def line_to_room(line,area,walking):
tickethoder_relationship = {
"$relationshipId": f"{line}leadsTo{area}",
"$sourceId": line,
"$relationshipName": "leadsTo",
"$targetId": area,
"walk_distance": walking
}
service_client.upsert_relationship(
tickethoder_relationship["$sourceId"],
tickethoder_relationship["$relationshipId"],
tickethoder_relationship
)
line_to_room("line-0","area-0",5)
line_to_room("line-1","area-1",5)
line_to_room("line-2","area-2",5)
line_to_room("line-3","area-3",5)
Explanation: Now establishing a relationship between the lines and the areas. If a person leaves one area they have to get into the line of the next area in order to get there.
End of explanation
def cust_to_area(cust,area):
tickethoder_relationship = {
"$relationshipId": f"{cust}locatedIn{area}",
"$sourceId": cust,
"$relationshipName": "locatedIn",
"$targetId": area,
}
service_client.upsert_relationship(
tickethoder_relationship["$sourceId"],
tickethoder_relationship["$relationshipId"],
tickethoder_relationship
)
cust_to_area("customer-e6f49d8a-711b-41c3-9db8-c7ece3dbc32c","line-1")
cust_to_area("customer-21e17d28-76c3-4c04-8df9-396703692a68","line-1")
# cust_to_area("customer-25e19268-3433-4f09-afe3-94f466313368","line-0")
# cust_to_area("customer-c87adbfa-1c6e-4ea9-9f03-83e3877ef5fc","line-2")
Explanation: Now I'm going to add some people into different parts of the experience. This will change from user to user, but you can use this as an example of how this would work on your twin. Use the queries above to get the specific ids of your twins.
End of explanation
def ticket_to_cust(ticket,cust):
tickethoder_relationship = {
"$relationshipId": f"{ticket}ownedBy{cust}",
"$sourceId": ticket,
"$relationshipName": "ownedBy",
"$targetId": cust,
}
service_client.upsert_relationship(
tickethoder_relationship["$sourceId"],
tickethoder_relationship["$relationshipId"],
tickethoder_relationship
)
ticket_to_cust("ticket-58bf00bf-d5b8-4d06-9002-f8bf1693cc56","customer-418cbc74-b101-4f50-8cf0-6075a2a8053c")
ticket_to_cust("ticket-67f3cbfb-4b35-4e10-bf6d-3fd216093c3c","customer-9c9b5c36-69f6-4f48-9362-4aaac4cb1be4")
Explanation: Sell a couple more tickets
End of explanation
<END_TASK> |
171 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Structures like these are encoded in "PDB" files
Entries are determined by columns in the file, not by spaces between the columns
Step1: Predict what the following will do
Step2: Write a program that | <ASSISTANT_TASK:>
Python Code:
#record atom_name chain x y z occupancy atom_type
# | | | | | | | |
#ATOM 1086 CG LYS A 141 -4.812 9.683 2.584 1.00 26.78 N0
# | | | |
# atom_num amino_acid resid_num bfactor
Explanation: Structures like these are encoded in "PDB" files
Entries are determined by columns in the file, not by spaces between the columns
End of explanation
line_frompdb = "ATOM 1086 N SER A 141 -4.812 9.683 2.584 1.00 26.78 N0"
print(line_frompdb[2:4])
Explanation: Predict what the following will do
End of explanation
#record atom_name chain x y z occupancy atom_type
# | | | | | | | |
#ATOM 1086 CG LYS A 141 -4.812 9.683 2.584 1.00 26.78 N0
# | | | |
# atom_num amino_acid resid_num bfactor
Explanation: Write a program that:
+ Reads a pdb file (download 1stn.pdb)
+ Grabs all "ATOM" lines whose atom type is "CA"
+ Shifts the position of the molecule in x by +10 angstroms
+ Writes out a new pdb file containing these shifted atoms
+ If you want to check your work, download PyMOL and open up the files
End of explanation
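# One possible sketch of the program described above. It assumes the standard
# fixed-width PDB layout shown in the comment (atom name in columns 13-16, x in
# columns 31-38); the file names are placeholders.
def shift_ca_atoms(in_file="1stn.pdb", out_file="1stn_shifted.pdb", dx=10.0):
    shifted_lines = []
    with open(in_file) as f:
        for line in f:
            # keep only ATOM records whose atom name is CA (the alpha carbon)
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                x = float(line[30:38])
                # rebuild the line with x shifted, preserving the fixed columns
                shifted_lines.append(line[:30] + f"{x + dx:8.3f}" + line[38:])
    with open(out_file, "w") as f:
        f.writelines(shifted_lines)

shift_ca_atoms()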
<END_TASK> |
172 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Step12: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test it's performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test it's performance.
Step58: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis | <ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
len(reviews)
reviews[0]
labels[0]
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
from collections import Counter
import numpy as np
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
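# A quick illustration (not part of the project code) of how Counter tallies words:
example_counts = Counter("the movie was the best movie".split(" "))
print(example_counts.most_common(2))   # e.g. [('the', 2), ('movie', 2)]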
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for label, review in zip(labels, reviews):
words = review.lower().replace(",", " ").replace(".", " ").split(" ")
total_counts.update(words)
if label == "POSITIVE" :
positive_counts.update(words)
else:
negative_counts.update(words)
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word, count in total_counts.most_common():
    # stop once we reach words used fewer than 100 times
    if count < 100:
        break
    ratio = positive_counts[word] / float(negative_counts[word] + 1)
    pos_neg_ratios[word] = ratio
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
x = np.arange(0,5,0.01)
y = [np.log(v) if v >= 1 else -np.log(1/(v + 0.01)) for v in x]
import matplotlib.pyplot as plt
plt.plot(x,y)
plt.grid(True)
plt.show()
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.most_common():
if ratio >=1:
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1/(ratio + 0.01))
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert their values using the following formulas:
For any positive words, convert the ratio using np.log(ratio)
For any negative words, convert the ratio using -np.log(1/(ratio + 0.01))
That second equation may look strange, but what it's doing is dividing one by a very small number, which will produce a larger positive number. Then, it takes the log of that, which produces numbers similar to the ones for the positive words. Finally, we negate the values by adding that minus sign up front. In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
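# Quick sanity check of the two formulas above (illustrative values only): a strongly
# positive ratio and a strongly negative one end up with similar magnitudes and
# opposite signs, while a ratio near 1 lands close to zero.
for r in [4.0, 1.0, 0.18]:
    print(r, np.log(r) if r >= 1 else -np.log(1/(r + 0.01)))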
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
vocab_size = len(vocab)
print(vocab_size)
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1,vocab_size))
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
def update_input_layer(review):
    """Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
    # use global to avoid creating a new variable that may fill your RAM!
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
words = review.lower().replace(",", " ").replace(".", " ").split(" ")
for word in words:
layer_0[0][word2index[word]] += 1
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
update_input_layer(reviews[0])
layer_0
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
def get_target_for_label(label):
    """Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
return 1 if label=="POSITIVE" else 0
Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
labels[0]
get_target_for_label(labels[0])
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
labels[1]
get_target_for_label(labels[1])
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
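Before running the full class, here is a minimal, self-contained sketch of the forward pass described in the TODO list above, with toy shapes (a pretend 6-word vocabulary and made-up weights). It only illustrates that the hidden layer uses no activation function while the output layer uses a sigmoid; it is not part of Andrew's solution.
import numpy as np
toy_layer_0 = np.random.rand(1, 6)                    # pretend input for a 6-word vocabulary
toy_weights_0_1 = np.random.normal(0.0, 0.1, (6, 3))  # made-up input-to-hidden weights
toy_weights_1_2 = np.random.normal(0.0, 0.1, (3, 1))  # made-up hidden-to-output weights
toy_layer_1 = toy_layer_0.dot(toy_weights_0_1)        # hidden layer: plain matrix product, no activation
toy_layer_2 = 1 / (1 + np.exp(-toy_layer_1.dot(toy_weights_1_2)))  # output layer: sigmoid
print(toy_layer_1.shape, toy_layer_2.shape)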
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].lower().replace(",", " ").replace(".", " ").split(" "):
review_counter[word] += 1
review_counter.most_common()
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
End of explanation
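As a quick, hypothetical illustration of the change requested above (a two-word toy vocabulary, not the real word2index): counting writes word frequencies into the input layer, while the noise-reduction version only records whether a word appears.
import numpy as np
toy_word2index = {"great": 0, "the": 1}    # made-up tiny vocabulary
toy_layer_0 = np.zeros((1, 2))
for toy_word in "the the the great".split(" "):
    toy_layer_0[0][toy_word2index[toy_word]] = 1   # presence only, instead of += 1
print(toy_layer_0)   # [[1. 1.]] rather than the count-based [[1. 3.]]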
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
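The point of the cells above can be verified directly: when the input vector is mostly zeros, adding up just the rows of the weight matrix that correspond to the non-zero entries gives the same hidden-layer values as the full matrix product. This small check is self-contained and independent of the sentiment data.
import numpy as np
np.random.seed(1)
check_layer_0 = np.zeros(10)
check_layer_0[[4, 9]] = 1                              # only two inputs are "on"
check_weights = np.random.randn(10, 5)
dense_result = check_layer_0.dot(check_weights)        # full 10x5 matrix product
sparse_result = check_weights[4] + check_weights[9]    # just add the two used rows
print(np.allclose(dense_result, sparse_result))        # True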
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
End of explanation
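Here is a minimal sketch of the pre-processing step described above, using a made-up word2index (in the class it comes from pre_process_data); it shows how a raw review becomes a de-duplicated list of word indices, with unknown words skipped.
toy_word2index = {"this": 0, "movie": 1, "was": 2, "great": 3}   # hypothetical lookup table
toy_indices = set()
for toy_word in "this movie was great great unknownword".split(" "):
    if toy_word in toy_word2index:
        toy_indices.add(toy_word2index[toy_word])
print(list(toy_indices))   # each known word appears exactly once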
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
End of explanation
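To make the polarity_cutoff comparison above concrete, here is a small worked example of the log-ratio transform used in pre_process_data; the counts are invented for illustration only.
import numpy as np
toy_pos, toy_neg = 120, 30                     # made-up counts for one word
toy_ratio = toy_pos / float(toy_neg + 1)       # raw positive-to-negative ratio
toy_log_ratio = np.log(toy_ratio) if toy_ratio > 1 else -np.log(1 / (toy_ratio + 0.01))
print(toy_log_ratio)
# strongly positive words get large positive values, strongly negative words get large
# negative values, and neutral words land near 0, so abs(value) >= polarity_cutoff keeps both tails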
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation
<END_TASK> |
173 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Performance Benchmarking for KV Drive
The goal of these set of experiments is to characterize the variability across platforms in a systematic and consistent way in terms of KV drive. The steps of experiments are as follows,
Run Stress-ng benchmarks on one KV drive;
Run Stress-ng benchmarks on machine issdm-6, and get the "without limit" result;
Find all the common benchmarks from both results;
Calculate the speedup (normalized value) of each benchmark based on the one from KV drive (issdm-6 (without limit) / KV drive);
Use torpor to calculate the best cpu quota by minimizing the average speedups. We will later use this parameter to limit the cpu usage in the docker container;
Run Stress-ng benchmarks in the constrained docker container on machine issdm-6, and get the "with limit" result;
Calculate the speedup based on KV drive again (issdm-6 (with limit) / KV drive), then we get a new "speedup range", which should be much smaller than the previous one.
Run a bunch of other benchmarks on both KV drive and constrained docker container to verify if they are all within in the later "speedup range".
Make conclusion.
Step1: First, we load all test data.
Step2: Let's have a look at the pattern of data.
Step3: Show all the test machines.
Step4: Define some predicates for machines and limits
Step5: Show the number of stress tests on different machines
Step6: Because those failed benchmarks are not shown in the result report, we want to know how many common successful stress tests on the target machine and kv3.
Step7: Read the normalized results.
Step8: Show some of the data lines. The normalized value is the speedup based on kv3. It becomes a negative value when the benchmark runs slower on the target machine than on kv3 (a slowdown).
Step9: Show the benchmarks that were not successfully completed on both issdm-6 and kv3.
Step10: Show the benchmarks that were not successfully completed on both t2.micro and kv3.
Step11: We can find the number of benchmarks that are sped up and slowed down, respectively.
Step12: The average of normalized value for results under CPU restriction
Step13: Experiment Results from issdm-6
Let's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on issdm-6.
Step14: Here is the rank of normalized value from stress tests without CPU restriction
Step15: Now let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on issdm-6.
Step16: Here is the rank of normalized value from stress tests with CPU restriction
Step17: We notice that the stressng-cpu-jenkin looks like an outlier. Let's redraw the histogram without this one.
Step18: Summary
We got the boundary of normalized value on issdm-6 from -29.394675 to 54.266945 by using parameters --cpuset-cpus=1 --cpu-quota=7234 --cpu-period=100000, which means the docker container only uses 7.234ms CPU worth of run-time every 100ms on cpu 1 (See cpu for more details).
Experiment Results from t2.micro
Let's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on t2.micro.
Step19: Here is the rank of normalized value from stress tests without CPU restriction
Step20: Let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on t2.micro.
Step21: Here is the rank of normalized value from stress tests with CPU restriction
Step22: We notice that the stressng-memory-stack looks like an outlier. Let's redraw the histogram without this one.
Step23: The stressng-cpu-jenkin benchmark is a collection of (non-cryptographic) hash functions for multi-byte keys. See Jenkins hash function from Wikipedia for more details.
Summary
We got the boundary of normalized value on t2.micro from -198.440535 to 119.904761 by using parameters --cpuset-cpus=0 --cpu-quota=25750 --cpu-period=100000, which means the docker container only uses 25.75ms CPU worth of run-time every 100ms on cpu 0 (See cpu for more details).
Verification
Now we use 9 other benchmark programs to verify this result. These programs are,
- blogbench
Step24: Show number of test benchmarks.
Step25: Order the test results by the absolute of normalized value
Step26: Verification Tests on issdm-6
Histogram of frequency of normalized value.
Step27: Print the max the min normalized value,
Step28: The average of the normalized values is,
Step29: If we remove all nbench tests, the frequency histogram changes to
Step30: The max the min normalized value changes to,
Step31: The average of the normalized values changes to,
Step32: Verification Tests on t2.micro
Histogram of frequency of normalized value.
Step33: The average of the normalized values of the verification benchmarks is,
Step34: Let's see the frequency histogram after removing right-most four outliers.
Step35: Print the max the min normalized value,
Step36: The average of noramlized value without the four outliners is, | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import random
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
pd.set_option("display.max_rows", 8)
Explanation: Performance Benchmarking for KV Drive
The goal of these set of experiments is to characterize the variability across platforms in a systematic and consistent way in terms of KV drive. The steps of experiments are as follows,
Run Stress-ng benchmarks on one KV drive;
Run Stress-ng benchmarks on machine issdm-6, and get the "without limit" result;
Find all the common benchmarks from both results;
Calculate the speedup (normalized value) of each benchmark based on the one from KV drive (issdm-6 (without limit) / KV drive);
Use torpor to calculate the best cpu quota by minimizing the average speedups. We will later use this parameter to limit the cpu usage in the docker container;
Run Stress-ng benchmarks in the constrained docker container on machine issdm-6, and get the "with limit" result;
Calculate the speedup based on KV drive again (issdm-6 (with limit) / KV drive), then we get a new "speedup range", which should be much smaller than the previous one.
Run a bunch of other benchmarks on both KV drive and constrained docker container to verify if they are all within in the later "speedup range".
Make conclusion.
End of explanation
df = pd.read_csv('stress-ng/third/torpor-results/alltests.csv')
Explanation: First, we load all test data.
End of explanation
df.head()
Explanation: Let's have a look at the pattern of data.
End of explanation
df['machine'].unique()
Explanation: Show all the test machines.
End of explanation
machine_is_issdm_6 = df['machine'] == 'issdm-6'
machine_is_t2_micro = df['machine'] == 't2.micro'
machine_is_kv3 = df['machine'] == 'kv3'
limits_is_with = df['limits'] == 'with'
limits_is_without = df['limits'] == 'without'
Explanation: Define some predicates for machines and limits
End of explanation
df_issdm_6_with_limit = df[machine_is_issdm_6 & limits_is_with]
df_t2_micro_with_limit = df[machine_is_t2_micro & limits_is_with]
df_kv3_without_limit = df[machine_is_kv3 & limits_is_without]
print(
len(df_issdm_6_with_limit), # machine issdm-6 with limit
len(df[machine_is_issdm_6 & limits_is_without]), # machine issdm-6 without limit
len(df_t2_micro_with_limit), # machine t2.micro with limit
len(df[machine_is_t2_micro & limits_is_without]), # machine t2.micro without limit
len(df_kv3_without_limit) # machine kv3 without limit
)
Explanation: Show the number of stress tests on different machines
End of explanation
issdm_6_with_limit_merge_kv3 = pd.merge(df_issdm_6_with_limit, df_kv3_without_limit, how='inner', on='benchmark')
t2_micro_with_limit_merge_kv3 = pd.merge(df_t2_micro_with_limit, df_kv3_without_limit, how='inner', on='benchmark')
print(
# common successful tests from issdm-6 and kv3
len(issdm_6_with_limit_merge_kv3),
# common successful tests from t2.micro and kv3
len(t2_micro_with_limit_merge_kv3)
)
Explanation: Because failed benchmarks are not shown in the result report, we want to know how many stress tests completed successfully on both the target machine and kv3.
End of explanation
df_normalized = pd.read_csv('stress-ng/third/torpor-results/alltests_with_normalized_results_1.1.csv')
Explanation: Read the normalized results.
End of explanation
df_normalized.head()
Explanation: Show some of the data lines. The normalized value is the speedup based on kv3. It becomes a negative value when the benchmark runs slower on the target machine than on kv3 (a slowdown).
End of explanation
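The CSV loaded above already contains the normalized column, so nothing needs to be recomputed here; the following is only a hedged sketch of how such a speedup value could be derived from raw runtimes (the column names and the exact torpor convention are assumptions for illustration).
import pandas as pd
toy = pd.DataFrame({'benchmark': ['b1', 'b2'],
                    'target_runtime': [2.0, 10.0],   # hypothetical seconds on the target machine
                    'kv3_runtime': [6.0, 5.0]})      # hypothetical seconds on kv3
toy_speedup = toy['kv3_runtime'] / toy['target_runtime']
# negative value when the target machine is slower than kv3 (a slowdown)
toy['normalized'] = toy_speedup.where(toy_speedup >= 1, -1 / toy_speedup)
toy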
df_issdm_6_with_limit[~df_issdm_6_with_limit['benchmark'].isin(issdm_6_with_limit_merge_kv3['benchmark'])]
Explanation: Show the benchmarks that were not successfully completed on both issdm-6 and kv3.
End of explanation
df_t2_micro_with_limit[~df_t2_micro_with_limit['benchmark'].isin(t2_micro_with_limit_merge_kv3['benchmark'])]
Explanation: Show the benchmarks that were not successfully completed on both t2.micro and kv3.
End of explanation
normalized_limits_is_with = df_normalized['limits'] == 'with'
normalized_limits_is_without = df_normalized['limits'] == 'without'
normalized_machine_is_issdm_6 = df_normalized['machine'] == 'issdm-6'
normalized_machine_is_t2_micro = df_normalized['machine'] == 't2.micro'
normalized_is_speed_up = df_normalized['normalized'] > 0
normalized_is_slow_down = df_normalized['normalized'] < 0
print(
# issdm-6 without CPU restriction
len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 1. speed-up
len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 2. slowdown
# issdm-6 with CPU restriction
len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 3. speed-up
len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 4. slowdown
# t2.micro without CPU restriction
len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 5. speed-up
len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_slow_down]), # 6. slowdown
# t2.micro with CPU restriction
len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 7. speed-up
len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_slow_down]) # 8. slowdown
)
Explanation: We can find the number of benchmarks that speed up and slow down, respectively.
End of explanation
print(
# For issdm-6
df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]['normalized'].mean(),
# For t2_micro
df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]['normalized'].mean()
)
Explanation: The average normalized value for results under CPU restriction
End of explanation
df_normalized_issdm_6_without_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_without]
df_normalized_issdm_6_without_limit.normalized.hist(bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 without CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Experiment Results from issdm-6
Let's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on issdm-6.
End of explanation
df_normalized_issdm_6_without_limit_sorted = df_normalized_issdm_6_without_limit.sort_values(by='normalized', ascending=0)
df_normalized_issdm_6_without_limit_sorted_head = df_normalized_issdm_6_without_limit_sorted.head()
df_normalized_issdm_6_without_limit_sorted_tail = df_normalized_issdm_6_without_limit_sorted.tail()
df_normalized_issdm_6_without_limit_sorted_head.append(df_normalized_issdm_6_without_limit_sorted_tail)
Explanation: Here is the rank of normalized value from stress tests without CPU restriction
End of explanation
df_normalized_issdm_6_with_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]
df_normalized_issdm_6_with_limit.normalized.hist(color='Orange', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 with CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Now let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on issdm-6.
End of explanation
df_normalized_issdm_6_with_limit_sorted = df_normalized_issdm_6_with_limit.sort_values(by='normalized', ascending=0)
df_normalized_issdm_6_with_limit_sorted_head = df_normalized_issdm_6_with_limit_sorted.head()
df_normalized_issdm_6_with_limit_sorted_tail = df_normalized_issdm_6_with_limit_sorted.tail()
df_normalized_issdm_6_with_limit_sorted_head.append(df_normalized_issdm_6_with_limit_sorted_tail)
Explanation: Here is the rank of normalized value from stress tests with CPU restriction
End of explanation
df_normalized_issdm_6_no_outlier = df_normalized_issdm_6_with_limit['benchmark'] != 'stressng-cpu-jenkin'
df_normalized_issdm_6_with_limit[df_normalized_issdm_6_no_outlier].normalized.hist(color='Green', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 with CPU restriction (no outlier)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: We notice that the stressng-cpu-jenkin looks like an outlier. Let's redraw the histogram without this one.
End of explanation
df_normalized_t2_micro_without_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_without]
df_normalized_t2_micro_without_limit.normalized.hist(bins=150,figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro without CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Summary
We got the boundary of normalized value on issdm-6 from -29.394675 to 54.266945 by using parameters --cpuset-cpus=1 --cpu-quota=7234 --cpu-period=100000, which means the docker container only uses 7.234ms CPU worth of run-time every 100ms on cpu 1 (See cpu for more details).
Experiment Results from t2.micro
Let's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on t2.micro.
End of explanation
df_normalized_t2_micro_without_limit_sorted = df_normalized_t2_micro_without_limit.sort_values(by='normalized', ascending=0)
df_normalized_t2_micro_without_limit_sorted_head = df_normalized_t2_micro_without_limit_sorted.head()
df_normalized_t2_micro_without_limit_sorted_tail = df_normalized_t2_micro_without_limit_sorted.tail()
df_normalized_t2_micro_without_limit_sorted_head.append(df_normalized_t2_micro_without_limit_sorted_tail)
Explanation: Here is the rank of normalized value from stress tests without CPU restriction
End of explanation
df_normalized_t2_micro_with_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]
df_normalized_t2_micro_with_limit.normalized.hist(color='Orange', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro with CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on t2.micro.
End of explanation
df_normalized_t2_micro_with_limit_sorted = df_normalized_t2_micro_with_limit.sort_values(by='normalized', ascending=0)
df_normalized_t2_micro_with_limit_sorted_head = df_normalized_t2_micro_with_limit_sorted.head()
df_normalized_t2_micro_with_limit_sorted_tail = df_normalized_t2_micro_with_limit_sorted.tail()
df_normalized_t2_micro_with_limit_sorted_head.append(df_normalized_t2_micro_with_limit_sorted_tail)
Explanation: Here is the rank of normalized value from stress tests with CPU restriction
End of explanation
df_normalized_t2_micro_no_outlier = df_normalized_t2_micro_with_limit['benchmark'] != 'stressng-memory-stack'
df_normalized_t2_micro_with_limit[df_normalized_t2_micro_no_outlier].normalized.hist(color='Green', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro with CPU restriction (no outlier)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: We notice that the stressng-memory-stack looks like an outlier. Let's redraw the histogram without this one.
End of explanation
df_verification = pd.read_csv('verification/results/2/alltests_with_normalized_results_1.1.csv')
Explanation: The stressng-cpu-jenkin benchmark is a collection of (non-cryptographic) hash functions for multi-byte keys. See Jenkins hash function from Wikipedia for more details.
Summary
We got the boundary of normalized value on t2.micro from -198.440535 to 119.904761 by using parameters --cpuset-cpus=0 --cpu-quota=25750 --cpu-period=100000, which means the docker container only uses 25.75ms of CPU run-time every 100ms on cpu 0 (See cpu for more details).
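The CPU share is just the quota-to-period ratio; a quick sketch of that arithmetic for both machines (values copied from the parameters above):
python
for machine, quota, period in [('issdm-6', 7234, 100000), ('t2.micro', 25750, 100000)]:
    # quota/period is the fraction of one core; scale by 100 ms to get run-time per period
    print('{}: {:.3f} ms of CPU time per 100 ms ({:.2%} of one core)'.format(
        machine, quota / period * 100, quota / period))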
Verification
Now we use 9 other benchmark programs to verify this result. These programs are,
- blogbench: filesystem benchmark.
- compilebench: It tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating and reading kernel trees.
- fhourstones: This integer benchmark solves positions in the game of connect-4.
- himeno: The Himeno benchmark score is affected by the performance of a computer, especially memory bandwidth. It measures how quickly the major loops of a Poisson equation solver based on the Jacobi iteration method are executed.
- interbench: It is designed to measure the effect of changes in Linux kernel design or system configuration changes such as cpu, I/O scheduler and filesystem changes and options.
- nbench: NBench(Wikipedia) is a synthetic computing benchmark program developed in the mid-1990s by the now defunct BYTE magazine intended to measure a computer's CPU, FPU, and Memory System speed.
- pybench: It is a collection of tests that provides a standardized way to measure the performance of Python implementations.
- ramsmp: RAMspeed is a free open source command line utility to measure cache and memory performance of computer systems.
- stockfish-7: It is a simple benchmark by letting Stockfish analyze a set of positions for a given limit each.
Read verification tests data.
End of explanation
len(df_verification) / 2
Explanation: Show number of test benchmarks.
End of explanation
df_verification_rank = df_verification.reindex(df_verification.normalized.abs().sort_values(ascending=0).index)
df_verification_rank.head(8)
Explanation: Order the test results by the absolute value of the normalized value
End of explanation
df_verification_issdm_6 = df_verification[df_verification['machine'] == 'issdm-6']
df_verification_issdm_6.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on issdm-6', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Verification Tests on issdm-6
Histogram of frequency of normalized value.
End of explanation
print(
df_verification_issdm_6['normalized'].max(),
df_verification_issdm_6['normalized'].min()
)
Explanation: Print the max and min normalized values,
End of explanation
df_verification_issdm_6['normalized'].mean()
Explanation: The average of the normalized values is,
End of explanation
df_verification_issdm_6_no_nbench = df_verification_issdm_6[~df_verification_issdm_6['benchmark'].str.startswith('nbench')]
df_verification_issdm_6_no_nbench.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on issdm-6 (no nbench)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: If we remove all nbench tests, the frequency histogram changes to
End of explanation
print(
df_verification_issdm_6_no_nbench['normalized'].max(),
df_verification_issdm_6_no_nbench['normalized'].min()
)
Explanation: The max and min normalized values change to,
End of explanation
df_verification_issdm_6_no_nbench['normalized'].mean()
Explanation: The average of the normalized values changes to,
End of explanation
df_verification_t2_micro = df_verification[df_verification['machine'] == 't2.micro']
df_verification_t2_micro.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on t2.micro', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Verification Tests on t2.micro
Histogram of frequency of normalized value.
End of explanation
df_verification_t2_micro['normalized'].mean()
Explanation: The average of the normalized values of the verification benchmarks is,
End of explanation
df_verification_top_benchmarks = df_verification_rank[df_verification_rank['machine'] == 't2.micro'].head(4)['benchmark']
df_verification_t2_micro_no_outliers = df_verification_t2_micro[~df_verification_t2_micro['benchmark'].isin(df_verification_top_benchmarks)]
df_verification_t2_micro_no_outliers.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests on t2.micro (no outliers)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25)
Explanation: Let's see the frequency histogram after removing the four right-most outliers.
End of explanation
print(
df_verification_t2_micro_no_outliers['normalized'].max(),
df_verification_t2_micro_no_outliers['normalized'].min()
)
Explanation: Print the max and min normalized values,
End of explanation
df_verification_t2_micro_no_outliers['normalized'].mean()
Explanation: The average of the normalized values without the four outliers is,
End of explanation
<END_TASK> |
174 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here
Step1: Problems
Step2: Problems
Step3: Problems
Step4: If we want to look at covariates, we need a new approach.
We'll use Cox proprtional hazards, a very popular regression model.
To fit in python we use the module lifelines
Step5: Once we've fit the data, we need to do something useful with it. Try to do the following things
Step6: Model selection
Difficult to do with classic tools (here)
Problem | <ASSISTANT_TASK:>
Python Code:
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
Explanation: The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time for which we observed someone without a conversion, and event should be 1 if we saw a conversion and 0 if we didn't. Note that all values should be converted into ints
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
End of explanation
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
Explanation: Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$β(log 2)^{1/α}$$
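Beyond plotting the distribution, one handy summary is a posterior credible interval for the median; a short sketch, assuming the weibull_median helper and the MCMC traces from the cell above:
python
medians = [weibull_median(a, b)
           for a, b in zip(mcmc.trace("alpha")[:], mcmc.trace("beta")[:])]
print(np.percentile(medians, [2.5, 50, 97.5]))  # 95% credible interval plus the posterior median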
End of explanation
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
Explanation: Problems:
4 - Try adjusting the number of samples for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
End of explanation
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 14.9
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
Explanation: Problems:
7 - Try testing whether the median is greater than a different value
End of explanation
#Fitting solution
cf = lifelines.CoxPHFitter()
cf.fit(df, 'lifetime', event_col = 'event')
cf.summary
Explanation: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit in python we use the module lifelines:
http://lifelines.readthedocs.io/en/latest/
End of explanation
#Solution to 1
fig, axis = plt.subplots(nrows=1, ncols=1)
cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival")
regressors = np.array([[1,45,0,0]])
survival = cf.predict_survival_function(regressors)
survival.head()
#Solution to plotting multiple regressors
fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)
regressor1 = np.array([[1,45,0,1]])
regressor2 = np.array([[1,23,1,1]])
survival_1 = cf.predict_survival_function(regressor1)
survival_2 = cf.predict_survival_function(regressor2)
plt.plot(survival_1,label = "45 year old male - search")
plt.plot(survival_2,label = "45 year old male - display")
plt.legend(loc = "upper right")
odds = survival_1 / survival_2
plt.plot(odds, c = "red")
Explanation: Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different sets of features
4 - For your results in part 3, calculate how much more likely a death event is for one than the other for a given period of time
End of explanation
from pyBMA import CoxPHFitter
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4)
bmaCox.summary
#Low probability for everything favours parsimonious models
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4)
bmaCox.summary
#Boost probability of brand
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3])
print(bmaCox.summary)
Explanation: Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
End of explanation
<END_TASK> |
175 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, training=True):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=None)
layer = tf.layers.batch_normalization(layer, training=training)
return tf.nn.relu(layer)
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth, training=True):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=training)
return tf.nn.relu(conv_layer)
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
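As a rough sketch only (not the exercise solution), a lower-level batch normalization helper often looks something like the following, assuming is_training is a boolean tensor so tf.cond can switch between batch and population statistics:
python
def batch_norm_sketch(layer, is_training, decay=0.99, epsilon=1e-3):
    num_units = layer.get_shape().as_list()[-1]
    # Trainable scale (gamma) and shift (beta), plus non-trainable population statistics
    gamma = tf.Variable(tf.ones([num_units]))
    beta = tf.Variable(tf.zeros([num_units]))
    pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)

    def batch_norm_training():
        # Use this batch's statistics and update the running population statistics
        batch_mean, batch_variance = tf.nn.moments(layer, [0])
        update_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        update_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([update_mean, update_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        # Use the population statistics accumulated during training
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    return tf.nn.relu(tf.cond(is_training, batch_norm_training, batch_norm_inference))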
End of explanation
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation
<END_TASK> |
176 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
``
although the exact numbers will be different. Check to make sure the data is shifted over one step fory`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
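A minimal sketch of what this output layer could look like (the build_output name and variable names here are illustrative, not fixed by the notebook):
python
def build_output(lstm_output, in_size, out_size):
    # Reshape the N x M x L LSTM output into one row per sequence step
    x = tf.reshape(lstm_output, [-1, in_size])
    # Separate scope so these weights don't collide with the LSTM's variables
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    logits = tf.matmul(x, softmax_w) + softmax_b
    out = tf.nn.softmax(logits, name='predictions')
    return out, logits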
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
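Sketched out with illustrative names, that loss computation might look like:
python
def build_loss(logits, targets, num_classes):
    # One-hot encode the targets and match the shape of the logits
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    # Softmax cross-entropy, averaged over every character in every sequence
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    return tf.reduce_mean(loss)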
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
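One common way to write that clipping step (a sketch; grad_clip is an assumed hyperparameter):
python
def build_optimizer(loss, learning_rate, grad_clip):
    # Clip gradients by their global norm before applying them with Adam
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    return optimizer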
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Step13: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | <ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is to use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
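To make the reshape-and-window logic concrete, here is a minimal toy sketch (my illustration, not part of the original notebook; it only assumes NumPy is imported as np):
python
arr = np.arange(24)                      # 24 'characters'
n_seqs, n_steps = 2, 3
characters_per_batch = n_seqs * n_steps  # 6
n_batches = len(arr) // characters_per_batch
arr = arr[:n_batches * characters_per_batch].reshape((n_seqs, -1))
print(arr.shape)          # (2, 12): 2 sequences, 4 batches of 3 steps laid end to end
print(arr[:, 0:n_steps])  # the first 2 x 3 window, i.e. the inputs of the first batch
The same slice with the window shifted by one character gives the matching targets.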
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
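As a plain-NumPy illustration of that reshape (my sketch; the cell above does the equivalent with tf.concat and tf.reshape):
python
out = np.arange(24).reshape((2, 3, 4))   # N=2 sequences, M=3 steps, L=4 hidden units
rows = out.reshape((-1, 4))
print(rows.shape)                        # (6, 4): one row per step per sequence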
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
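A small NumPy sketch of the one-hot-and-reshape step (illustration only; the cell above uses tf.one_hot and tf.reshape for the same thing):
python
targets = np.array([[3, 0], [1, 4]])   # N=2 sequences, M=2 steps, C=5 classes
y_one_hot = np.eye(5)[targets]         # shape (2, 2, 5)
print(y_one_hot.reshape((-1, 5)))      # (M*N) x C rows, matching the reshaped logits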
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
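Note that tf.clip_by_global_norm works on the combined norm of all gradients: when the global norm exceeds the threshold, every gradient is rescaled by the same factor. A rough NumPy sketch of that idea (my illustration, not TensorFlow's implementation):
python
def clip_by_global_norm_sketch(grads, clip_norm):
    # global norm = sqrt of the sum of squared entries over all gradients
    global_norm = np.sqrt(sum(np.sum(g**2) for g in grads))
    if global_norm > clip_norm:
        grads = [g * (clip_norm / global_norm) for g in grads]
    return grads, global_norm

grads, norm = clip_by_global_norm_sketch([np.array([3., 4.]), np.array([12.])], clip_norm=5.0)
print(norm)   # 13.0, so each gradient is rescaled by 5/13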
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
<END_TASK> |
177 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Singular value decomposition (SVD)
The singular value decompostion of a real-valued $m \times n$ matrix $\boldsymbol{A}$ is
Step1: Note
Step2: Performing the SVD and counting the number of singular values that are greater than $10^{-9}$
Step3: With only three nonzero singular values, we could reconstruct the matrix with very little data - just three singular values and six vectors.
Removing noise
We consider the same matrix problem again, this time with some back ground noise in the white regions.
Step4: The effect of the noise is clear in the image.
We can try to eliminate much of the background noise via a low-rank approximation of the noisy image that discards information associated with small singular values of the matrix.
Step5: We can see that much of the noise in the image has been eliminated.
Image compression
Gray scale image
We load a colour PNG file. It uses three colour channels (red/green/blue), with at each pixel an 8-bit unsigned integer (in the range $[0, 255]$, but sometimes represented as a float) for each colour for the colour intensity. This is know as 24-bit colour - three channels times 8 bit.
We load the image as three matrices (red, green, blue), each with dimension equal to the number pixels in each direction
Step6: We could work with the colour image, but it is simpler to work with a gray scale image because then we have only one value for the colour intensity at each pixel rather than three (red/green/blue).
Step7: We can convert the image to a regular matrix with values between 0 and 255, with each entry corresponding to a pixel in the image. Creating the matrix and inspecting first four rows and three columns (top left corner of the image)
Step8: Now, maybe we can discard information associated with small singular values without perceiving any visual change in the image. To explore this, we compute the SVD of the gray scale image
Step9: The argument full_matrices=False tells NumPy to not store all the redundant zero terms in the $\boldsymbol{\Sigma}$ array. This is the normal approach in practice, but not in most text books. Note that NumPy return the singular values as a one-dimendional array, not as a matrix.
We now print the largest and smallest singular values, and plot all the singular values $\sigma_{i}$ on a log-scale
Step10: We can now try compressing the image. We first try retaining using only the largest 25% of values
Step11: We have discarded 3/4 of the singular values, but can barely perceive a difference in the image.
To explore other levels of compression, we write a function that takes the fraction of singular values we wish to retain
Step12: Let's try retaining just 10% of the singular values
Step13: Even with only 10% if the singular values retains, it is hard to perceive a difference between the images. Next we try keeping only 2%
Step14: We now see some image clear degradation, but the image is sill recognisable. We'll try one more case where we retain only 0.5% of the singular values.
Step15: The image quality is now quite poor.
Colour image
Step16: We can extract the red, green and blue components to have a look
Step17: We now compute an SVD for the matrix of each colour
Step18: Compressing the matrix for each colouring separately and then reconstructing the three-dimensional array
Step19: Comparing the compressed and original images side-by-side
Step20: Retaining 10% of the singular values for each colour, we can see some artifacts in the compressed image, which indicates that using the SVD for each colour independently is probably not a good idea.
Colour image
Step21: Compute the SVD of each channel
Step22: Compress each channel, and display compressed channels in gray scale
Step23: Combine compressed channels
Step24: Interactive compression
We'll now create an interactive image with sliders to interactively control the compression level.
Step25: Effective rank
Determining the rank of a matrix is not a binary question in the context of floating point arithmetic or measurement errors. The SVD can be used to determine the 'effective rank' of a matrix.
Consider the matrix
Step26: Clearly the first two rows are linearly dependent and the rank of this matrix is 2. We can verify this using NumPy
Step27: We now add some noise in the range $(0, 10^{-6})$ to the matrix entries
Step28: We now test the rank
Step29: The problem is that we have a 'data set' that is linearly dependent, but this is being masked by very small measurement noise.
Computing the SVD of the matrix with noise and printing the singular values
Step30: If we define the effective rank as the number of singular values that are greater than the noise level, the effective rank of $\boldsymbol{A}$ is 2.
Rank deficient least-squares problems
For least squares problem, we have seen before that we solve
$$
\boldsymbol{A}^{T} \boldsymbol{A} \hat{\boldsymbol{x}} = \boldsymbol{A}^{T} \boldsymbol{b}
$$
or
$$
\begin{align}
\hat{\boldsymbol{x}} &= (\boldsymbol{A}^{T} \boldsymbol{A})^{-1} \boldsymbol{A}^{T} \boldsymbol{b}
\
&= \boldsymbol{A}^{+}\boldsymbol{b}
\end{align}
$$
Everything is fine as long as $\boldsymbol{A}$ is full rank. The problem is that we might have data that leads to $\boldsymbol{A}$ not being full rank. For example, if we try to fit a polynomial in $x$ and $y$, but the data lies on a line.
We have covered in the lectures how to handle least-squares problems that are rank deficient. Here we present an example.
Example
Step31: To find the polynomial coefficients we want to solve
\begin{equation}
\begin{bmatrix}
1 & x_{1} & y_{1} & x_{1}y_{1} \
1 & x_{2} & y_{2} & x_{2}y_{2} \
1 & x_{3} & y_{3} & x_{3}y_{3} \
1 & x_{4} & y_{4} & x_{4}y_{4} \
\end{bmatrix}
\begin{bmatrix}
c_{00} \ c_{10} \ c_{01} \ c_{11}
\end{bmatrix}
=
\begin{bmatrix}
f_{1} \ f_{2} \ f_{3} \ f_{4}
\end{bmatrix}
\end{equation}
where the matrix is the Vandermonde matrix. We can use a NumPy function to create the Vandermonde matrix
Step32: It is clear by inspection that $\boldsymbol{A}$ is not full rank, and is rank 2.
Computing the SVD of $\boldsymbol{A}$ and printing the singular values
Step33: We can see that two of the singular values are zero. To find a least-squares fit to the data with minimal $\| \boldsymbol{c}\|_{2}$ we compute
$$
\hat{\boldsymbol{c}} = \boldsymbol{V}_{1} \boldsymbol{\Sigma}^{+}
\boldsymbol{U}_{1}^{T}\boldsymbol{b}
$$
Creating $\boldsymbol{V}_{1}$, $\boldsymbol{\Sigma}^{+}$ and $\boldsymbol{U}_{1}$ (recall that the NumPy SVD returns $\boldsymbol{V}^{T}$ rather than $\boldsymbol{V}$)
Step34: Computing the least-squares solution from $\hat{\boldsymbol{c}} = \boldsymbol{V}_{1} \boldsymbol{\Sigma}^{+} \boldsymbol{U}_{1}^{T}\boldsymbol{b}$
Step35: The solution is $f(x, y) = 1 + 2x$, which in this case in fact interpolates the data points. Plotting the function, we have a plane that passes through the points.
Step36: We now try adding some noise to the sample positions and the measured values. The Vandermonde matrix is no longer singular so we can solve $\boldsymbol{A} \boldsymbol{c} = \boldsymbol{f}$ to get the polynomial coefficients
Step37: We now see significant coefficients for the $y$ and $xy$ terms in the interpolating polynomial just as a consequence of adding small amount of noise. Plotting the surface and the points, we see in dramatic impact of the noise.
Step38: Performing an SVD on the matrix with noise and printing the singular values
Step39: We see that two of the values are considerably smaller than the others. If we set these to zero and follow the least-squares procedure for the rank-deficient problem
Step40: We see that the fitting polynomial is very close to the noise-free case.
Principal component analysis
Principal component analysis finds a transformation such that covariance of a data set is zero in the transformed directions, and the variance in these directions is greatest. From a dataset this tells us which are the 'important' parameters in a system.
Consider taking $N = 200$ measurements of two quantities $x_{1}$ and $x_{2}$. We model the system by
Step41: We collect the data in a $200 \times 2$ matrix $\boldsymbol{X}$ (200 measurements, 2 variables)
Step42: We can compute the covariance matrix $\boldsymbol{C}$ by making the columns of $\boldsymbol{X}$ zero mean and computing $\boldsymbol{X}^{T}\boldsymbol{X}/(N-1)$
Step43: The covariance matrix is square and symmetric, so we can diagonalise it by computing the eigenvalues and eigenvectors.
We could also compute the SVD of $\boldsymbol{X}$ since $\boldsymbol{V}$ is made of the eigenvectors of $\boldsymbol{X}^{T}\boldsymbol{X}$
Step44: Plotting the data set and the principal directions | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
Explanation: Singular value decomposition (SVD)
The singular value decomposition of a real-valued $m \times n$ matrix $\boldsymbol{A}$ is:
$$
\boldsymbol{A} = \boldsymbol{U} \boldsymbol{\Sigma} \boldsymbol{V}^{T}
$$
where
$\boldsymbol{U}$ is an $m \times m$ orthogonal matrix;
$\boldsymbol{\Sigma}$ is an $m \times n$ diagonal matrix with diagonal entries $\sigma_{1} \ge \sigma_{2} \ge \ldots \ge \sigma_{p} \ge 0$, where $p = \min(m, n)$; and
$\boldsymbol{V}$ is an $n \times n$ orthogonal matrix.
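As a quick numerical sanity check of the factorisation (an illustrative sketch added here, not part of the original notebook; it uses the NumPy import from the code above):
python
B = np.random.rand(4, 3)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
# np.linalg.svd returns the singular values as a vector and V transposed
print(np.allclose(B, U.dot(np.diag(s)).dot(Vt)))   # True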
We will use NumPy to compute the SVD and Matplotlib to visualise results, so we first import some modules:
End of explanation
A = np.ones((100, 200))
A[33:33 + 4, 33:133] = 0.0
A[78:78 + 4, 33:133] = 0.0
A[33:78+4, 33:33+4] = 0.0
A[33:78+4, 129:129+4] = 0.0
plt.imshow(A, cmap='gray', interpolation='none')
plt.show()
Explanation: Note: If you run this notebook yourself it can take some time because it computes a number of moderate-size SVD problems.
Low rank approximations
Recall that we can represent a matrix as a sum of rank-1 matrices:
$$
\boldsymbol{A} = \sum_{i} \sigma_{i} \boldsymbol{u}_{i} \boldsymbol{v}^{T}_{i}
$$
where $\sigma_{i}$ is the $i$th singular value and $\boldsymbol{u}_{i}$ and $\boldsymbol{v}_{i}$ are the $i$th column vectors of $\boldsymbol{U}$ and $\boldsymbol{V}$, respectively, from the SVD. Clearly, for any $\sigma_{i} = 0$ we can avoid storing the data that makes no contribution. If $\sigma_{i}$ is small, then the contribution of $\boldsymbol{u}_{i} \boldsymbol{v}^{T}_{i}$ is small; we can discard it and introduce only a small 'error' to the matrix. We will use low rank approximations in a number of examples in this notebook.
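A minimal sketch of this sum of rank-1 terms (my addition; the spectral-norm error of the truncated sum equals the first discarded singular value, which makes a handy check):
python
B = np.random.rand(5, 4)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = 2   # keep the two largest singular values
B_k = sum(s[i]*np.outer(U[:, i], Vt[i, :]) for i in range(k))
print(np.linalg.norm(B - B_k, 2), s[k])   # the two numbers agree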
Data compression
We start with a $100 \times 200$ matrix that has entries equal to one or zero. We create a matrix with all entries set to zero, and we then set some entries equal to one in the pattern of a rectangle.
End of explanation
U, s, V = np.linalg.svd(A, full_matrices=False)
print("Number of singular values greater than 1.0e-9: {}".format((s > 1.0e-9).sum()))
Explanation: Performing the SVD and counting the number of singular values that are greater than $10^{-9}$:
End of explanation
A = np.ones((100, 200))
A = A - 1.0e-1*np.random.rand(100, 200)
A[33:33 + 4, 33:133] = 0.0
A[78:78 + 4, 33:133] = 0.0
A[33:78+4, 33:33+4] = 0.0
A[33:78+4, 129:129+4] = 0.0
plt.imshow(A, cmap='gray', interpolation='none');
Explanation: With only three nonzero singular values, we could reconstruct the matrix with very little data - just three singular values and six vectors.
Removing noise
We consider the same matrix problem again, this time with some background noise in the white regions.
End of explanation
# Compute SVD of the noisy matrix
U, s, V = np.linalg.svd(A, full_matrices=False)
# Set any singular values less than 1.0 equation zero
s[s < 1.0] = 0.0
# Reconstruct low rank approximation and display
A_denoised = np.dot(U, np.dot(np.diag(s), V))
plt.imshow(A_denoised, cmap='gray', interpolation='none')
plt.show();
Explanation: The effect of the noise is clear in the image.
We can try to eliminate much of the background noise via a low-rank approximation of the noisy image that discards information associated with small singular values of the matrix.
End of explanation
from urllib.request import urlopen
url = "https://github.com/garth-wells/notebooks-3M1/raw/master/photo/2020-1.png"
img_colour = Image.open(urlopen(url))
img_colour = img_colour.convert('RGB')
print("Image size (pixels):", img_colour.size)
print("Image array shape: ", np.array(img_colour).shape)
plt.figure(figsize=(15, 15/1.77))
plt.imshow(img_colour);
Explanation: We can see that much of the noise in the image has been eliminated.
Image compression
Gray scale image
We load a colour PNG file. It uses three colour channels (red/green/blue), with at each pixel an 8-bit unsigned integer (in the range $[0, 255]$, but sometimes represented as a float) for each colour for the colour intensity. This is known as 24-bit colour - three channels times 8 bit.
We load the image as three matrices (red, green, blue), each with dimension equal to the number of pixels in each direction:
End of explanation
img_bw = img_colour.convert('L')
plt.figure(figsize=(15, 15/1.77))
plt.imshow(img_bw, cmap='gray');
print("Image array shape: {}".format(img_bw.size))
plt.savefig("bw.pdf")
Explanation: We could work with the colour image, but it is simpler to work with a gray scale image because then we have only one value for the colour intensity at each pixel rather than three (red/green/blue).
End of explanation
img_array = np.array(img_bw)
print("Image shape:", img_array.shape)
print(img_array[:4, :3])
Explanation: We can convert the image to a regular matrix with values between 0 and 255, with each entry corresponding to a pixel in the image. Creating the matrix and inspecting first four rows and three columns (top left corner of the image):
End of explanation
U, s, V = np.linalg.svd(img_array, full_matrices=False)
Explanation: Now, maybe we can discard information associated with small singular values without perceiving any visual change in the image. To explore this, we compute the SVD of the gray scale image:
End of explanation
print("Number of singular values: {}".format(len(s)))
print("Max, min singular values: {}, {}".format(s[0], s[-1]))
plt.xlabel('$i$')
plt.ylabel('$\sigma_i$')
plt.title('Singular values')
plt.yscale('log')
plt.plot(s, 'bo');
plt.savefig("bw-svd.pdf")
Explanation: The argument full_matrices=False tells NumPy to not store all the redundant zero terms in the $\boldsymbol{\Sigma}$ array. This is the normal approach in practice, but not in most textbooks. Note that NumPy returns the singular values as a one-dimensional array, not as a matrix.
We now print the largest and smallest singular values, and plot all the singular values $\sigma_{i}$ on a log-scale:
End of explanation
# Compute num_sigma/4 (25%) and zero values
r = int(0.25*len(s))
# Re-construct low rank approximation (this may look a little cryptic, but we use the below
# expression to avoid unnecessary computation)
compressed = U[:,:r].dot(s[:r, np.newaxis]*V[:r,:])
compressed = compressed.astype(int)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 18/1.77));
axes[0].set_title('Compressed image with largest 25% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
Explanation: We can now try compressing the image. We first try retaining only the largest 25% of values:
End of explanation
def compress_image(U, s, V, f):
"Compress image where 0 < f <= 1 is the fraction on singular values to retain"
r = int(f*len(s))
return (U[:,:r].dot(s[:r, np.newaxis]*V[:r,:])).astype(int)
Explanation: We have discarded 3/4 of the singular values, but can barely perceive a difference in the image.
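For a sense of the storage saving, a rank-$r$ approximation needs roughly $r(m + n + 1)$ numbers instead of $mn$ (a rough estimate I am adding for illustration):
python
m, n = img_array.shape
r = int(0.25*len(s))
print(r*(m + n + 1)/(m*n))   # fraction of the original storage for the 25% case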
To explore other levels of compression, we write a function that takes the fraction of singular values we wish to retain:
End of explanation
# Compress image/matrix
compressed = compress_image(U, s, V, 0.1)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image with largest 10% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
plt.savefig("bw-0-10.pdf")
Explanation: Let's try retaining just 10% of the singular values:
End of explanation
# Compress image/matrix
compressed = compress_image(U, s, V, 0.02)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image with largest 2% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
plt.savefig("bw-0-02.pdf")
Explanation: Even with only 10% of the singular values retained, it is hard to perceive a difference between the images. Next we try keeping only 2%:
End of explanation
# Compress image/matrix
compressed = compress_image(U, s, V, 0.005)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image with largest 0.5% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
plt.savefig("bw-0-005.pdf")
Explanation: We now see some clear image degradation, but the image is still recognisable. We'll try one more case where we retain only 0.5% of the singular values.
End of explanation
print("Image array shape: {}".format(img_colour.size))
plt.figure(figsize=(20,20/1.77))
plt.title('This is a photo of 2020 3M1 class members')
plt.imshow(img_colour);
Explanation: The image quality is now quite poor.
Colour image: RGB
We'll now try compressing a colour image.
End of explanation
# Display red, green and blue channels by zeroing other channels
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))
img_array = np.array(img_colour)
# Zero the g/b channels
red = img_array.copy()
red[:,:,(1,2)] = 0.0
axes[0].imshow(red);
# Zero the r/b channels
green = img_array.copy()
green[:,:,(0,2)] = 0.0
axes[1].imshow(green);
# Zero the r/g channels
blue = img_array.copy()
blue[:,:,(0,1)] = 0.0
axes[2].imshow(blue);
Explanation: We can extract the red, green and blue components to have a look:
End of explanation
# Compute SVD for each colour
U, s, V = [0]*3, [0]*3, [0]*3
for i in range(3):
U[i], s[i], V[i] = np.linalg.svd(img_array[:, :, i], full_matrices=False)
Explanation: We now compute an SVD for the matrix of each colour:
End of explanation
# Compress each colour separately
compressed = [compress_image(U[i], s[i], V[i], 0.1) for i in range(3)]
# Reconstruct 3D RGB array and filter any values outside of (0, 1)
compressed = np.dstack(compressed)
Explanation: Compressing the matrix for each colouring separately and then reconstructing the three-dimensional array:
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Image with largest 10% of singular values retained')
axes[0].imshow(compressed, interpolation="nearest");
axes[1].set_title('Original image')
axes[1].imshow(img_colour);
Explanation: Comparing the compressed and original images side-by-side:
End of explanation
img_colour_ycbcr = np.array(img_colour.convert("YCbCr"))
# Display Luminance(Y), Blue Chroma(Cb) and Red Chroma(Cr) channels
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))
Y = img_colour_ycbcr[:,:,0]
axes[0].imshow(Y, cmap='gray');
Cb = img_colour_ycbcr[:,:,1]
axes[1].imshow(Cb, cmap='gray');
Cr = img_colour_ycbcr[:,:,2]
axes[2].imshow(Cr, cmap='gray');
Explanation: Retaining 10% of the singular values for each colour, we can see some artifacts in the compressed image, which indicates that using the SVD for each colour independently is probably not a good idea.
Colour image: YCbCr
A better approach is to split the image into YCbCr, rather than RGB.
YCbCr splits the image into luminance (Y) and chrominance (Cb and Cr) colour values.
End of explanation
# Compute SVD for each channel
U, s, V = [0]*3, [0]*3, [0]*3
for i in range(3):
U[i], s[i], V[i] = np.linalg.svd(img_colour_ycbcr[:, :, i], full_matrices=False)
Explanation: Compute the SVD of each channel:
End of explanation
# Compress each component separately
compressed = [compress_image(U[0], s[0], V[0], 0.05),
compress_image(U[1], s[1], V[1], 0.005),
compress_image(U[2], s[2], V[2], 0.005)]
# Reconstruct 3D YCbCr array
compressed = np.dstack(compressed)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))
Y = compressed[:,:,0]
axes[0].imshow(Y, cmap='gray');
Cb = compressed[:,:,1]
axes[1].imshow(Cb, cmap='gray');
Cr = compressed[:,:,2]
axes[2].imshow(Cr, cmap='gray');
Explanation: Compress each channel, and display compressed channels in gray scale:
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Image with largest 5% of brightness singular values retained and 0.5% of colour values')
im = Image.fromarray(np.uint8(compressed), mode="YCbCr")
axes[0].imshow(im)
axes[1].set_title('Original image')
axes[1].imshow(img_colour);
Explanation: Combine compressed channels:
End of explanation
from ipywidgets import widgets
from ipywidgets import interact
url = "https://github.com/garth-wells/notebooks-3M1/raw/master/photo/IMG_20190117_141222563.png"
img = Image.open(urlopen(url))
img_colour_ycbcr = np.array(img.convert("YCbCr"))
# Compute SVD for each channel
U0, s0, V0 = [0]*3, [0]*3, [0]*3
for i in range(3):
U0[i], s0[i], V0[i] = np.linalg.svd(img_colour_ycbcr[:, :, i], full_matrices=False)
@interact(ratio_Y=(0.005, 0.4, 0.02),
ratio_Cb=(0.001, 0.1, 0.01),
ratio_Cr=(0.001, 0.1, 0.01))
def plot_image(ratio_Y=0.1, ratio_Cb=0.01, ratio_Cr=0.01):
compressed = [compress_image(U0[0], s0[0], V0[0], ratio_Y),
compress_image(U0[1], s0[1], V0[1], ratio_Cb),
compress_image(U0[2], s0[2], V0[2], ratio_Cr)]
# Reconstruct 3D YCbCr array
compressed = np.dstack(compressed)
img_compressed = Image.fromarray(np.uint8(compressed), mode="YCbCr")
# Show
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image')
axes[0].imshow(img_compressed)
axes[1].set_title('Original image')
axes[1].imshow(img)
Explanation: Interactive compression
We'll now create an interactive image with sliders to interactively control the compression level.
End of explanation
A = np.array([[1, 1, 1], [2, 2, 2], [1, 0 ,1]])
print(A)
Explanation: Effective rank
Determining the rank of a matrix is not a binary question in the context of floating point arithmetic or measurement errors. The SVD can be used to determine the 'effective rank' of a matrix.
Consider the matrix:
End of explanation
print("Rank of A is: {}".format(np.linalg.matrix_rank(A)))
Explanation: Clearly the first two rows are linearly dependent and the rank of this matrix is 2. We can verify this using NumPy:
End of explanation
np.random.seed(10)
A = A + 1.0e-6*np.random.rand(A.shape[0], A.shape[1])
Explanation: We now add some noise in the range $(0, 10^{-6})$ to the matrix entries:
End of explanation
print("Rank of A (with noise) is: {}".format(np.linalg.matrix_rank(A)))
Explanation: We now test the rank:
End of explanation
U, s, V = np.linalg.svd(A)
print("The singular values of A (with noise) are: {}".format(s))
Explanation: The problem is that we have a 'data set' that is linearly dependent, but this is being masked by very small measurement noise.
Computing the SVD of the matrix with noise and printing the singular values:
End of explanation
x, y, f = np.zeros(4), np.zeros(4), np.zeros(4)
x[0], y[0], f[0] = 1.0, 0.0, 3.0
x[1], y[1], f[1] = 2.0, 0.0, 5.0
x[2], y[2], f[2] = 3.0, 0.0, 7.0
x[3], y[3], f[3] = 4.0, 0.0, 9.0
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('f')
ax.scatter(x, y, f)
plt.show()
Explanation: If we define the effective rank as the number of singular values that are greater than the noise level, the effective rank of $\boldsymbol{A}$ is 2.
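A small helper along these lines (my sketch, not part of the original notebook):
python
def effective_rank(B, tol):
    # count the singular values that sit above the noise tolerance
    return int(np.sum(np.linalg.svd(B, compute_uv=False) > tol))

print(effective_rank(A, tol=1.0e-5))   # 2 for the noisy matrix above, with tol above the noise level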
Rank deficient least-squares problems
For the least-squares problem, we have seen before that we solve
$$
\boldsymbol{A}^{T} \boldsymbol{A} \hat{\boldsymbol{x}} = \boldsymbol{A}^{T} \boldsymbol{b}
$$
or
$$
\begin{align}
\hat{\boldsymbol{x}} &= (\boldsymbol{A}^{T} \boldsymbol{A})^{-1} \boldsymbol{A}^{T} \boldsymbol{b}
\
&= \boldsymbol{A}^{+}\boldsymbol{b}
\end{align}
$$
Everything is fine as long as $\boldsymbol{A}$ is full rank. The problem is that we might have data that leads to $\boldsymbol{A}$ not being full rank. For example, if we try to fit a polynomial in $x$ and $y$, but the data lies on a line.
We have covered in the lectures how to handle least-squares problems that are rank deficient. Here we present an example.
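For reference, NumPy's built-in routines implement exactly this truncated-SVD pseudoinverse: singular values that are small relative to the largest are treated as zero via the rcond cutoff. A minimal sketch on a deliberately rank-deficient matrix (my example, not the notebook's data):
python
B = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])   # rank-1 matrix
b = np.array([1.0, 2.0, 3.0])
print(np.linalg.pinv(B, rcond=1.0e-8).dot(b))        # [0.5, 0.5], the minimal-norm least-squares solution
# np.linalg.lstsq(B, b, rcond=1.0e-8) applies the same idea and also reports the effective rank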
Example: fitting points in a two-dimensional space
Say we are given four data points that depend on $x$ and $y$, and we are asked to fit a polynomial of the form
$$
f(x, y) = c_{00} + c_{10}x + c_{01}y + c_{11}xy
$$
to the data points. Normally, we would expect to be able to fit the above polynomial to four data points by interpolation, i.e. solving $\boldsymbol{A} \boldsymbol{c} = \boldsymbol{f}$ where
$\boldsymbol{A}$ a square Vandermonde matrix. However, if the points happened to lie on a line, then $\boldsymbol{A}$ will be singular. If the points happen to almost lie on a line, then $\boldsymbol{A}$ will be close to singular.
A possibility is to exclude zero or small singular values from the process, thereby finding a least-squares fit with minimal $\|\boldsymbol{c}\|_{2}$. We test this for the data set
\begin{equation}
f_{1}(1, 0) = 3, \
f_{2}(2, 0) = 5, \
f_{3}(3, 0) = 7, \
f_{4}(4, 0) = 9.
\end{equation}
The data lies on the line $y = 0$, and is in fact linear in $x$.
We create arrays to hold this data, and visualise the points:
End of explanation
A = np.polynomial.polynomial.polyvander2d(y, x, [1, 1])
print(A)
Explanation: To find the polynomial coefficients we want to solve
\begin{equation}
\begin{bmatrix}
1 & x_{1} & y_{1} & x_{1}y_{1} \
1 & x_{2} & y_{2} & x_{2}y_{2} \
1 & x_{3} & y_{3} & x_{3}y_{3} \
1 & x_{4} & y_{4} & x_{4}y_{4} \
\end{bmatrix}
\begin{bmatrix}
c_{00} \ c_{10} \ c_{01} \ c_{11}
\end{bmatrix}
=
\begin{bmatrix}
f_{1} \ f_{2} \ f_{3} \ f_{4}
\end{bmatrix}
\end{equation}
where the matrix is the Vandermonde matrix. We can use a NumPy function to create the Vandermonde matrix:
End of explanation
U, s, V = np.linalg.svd(A)
print(s)
Explanation: It is clear by inspection that $\boldsymbol{A}$ is not full rank, and is rank 2.
Computing the SVD of $\boldsymbol{A}$ and printing the singular values:
End of explanation
# Create view of U with last two columns removed
U1 = U[:, :2]
# Create view of V with last two columns removed
V1 = V[:2,:]
# Create Sigma^{+} by inverting the nonzero singular values and
# discarding the zero singular values
S1 = np.diag(1.0/s[:-2])
print(S1)
Explanation: We can see that two of the singular values are zero. To find a least-squares fit to the data with minimal $\| \boldsymbol{c}\|_{2}$ we compute
$$
\hat{\boldsymbol{c}} = \boldsymbol{V}_{1} \boldsymbol{\Sigma}^{+}
\boldsymbol{U}_{1}^{T}\boldsymbol{b}
$$
Creating $\boldsymbol{V}_{1}$, $\boldsymbol{\Sigma}^{+}$ and $\boldsymbol{U}_{1}$ (recall that the NumPy SVD returns $\boldsymbol{V}^{T}$ rather than $\boldsymbol{V}$):
End of explanation
c = np.transpose(V1).dot(S1.dot(U1.T).dot(f))
print(c)
Explanation: Computing the least-squares solution from $\hat{\boldsymbol{c}} = \boldsymbol{V}_{1} \boldsymbol{\Sigma}^{+} \boldsymbol{U}_{1}^{T}\boldsymbol{b}$:
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot points
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$f$')
ax.scatter(x, y, f)
# Plot surface
X = np.arange(0, 5, 0.2)
Y = np.arange(-5, 5, 0.2)
X, Y = np.meshgrid(X, Y)
Z = 1.0 + 2.0*X + 0.0*Y
surf = ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.1)
ax.view_init(elev=30, azim=80)
plt.show()
Explanation: The solution is $f(x, y) = 1 + 2x$, which in this case in fact interpolates the data points. Plotting the function, we have a plane that passes through the points.
End of explanation
np.random.seed(20)
xn = x + 1.0e-3*(1.0 - np.random.rand(len(x)))
yn = y + 1.0e-3*(1.0 - np.random.rand(len(y)))
fn = f + 1.0e-3*(1.0 - np.random.rand(len(f)))
A = np.polynomial.polynomial.polyvander2d(yn, xn, [1, 1])
c = np.linalg.solve(A, fn)
print(c)
Explanation: We now try adding some noise to the sample positions and the measured values. The Vandermonde matrix is no longer singular so we can solve $\boldsymbol{A} \boldsymbol{c} = \boldsymbol{f}$ to get the polynomial coefficients:
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot points
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$f$')
ax.scatter(xn, yn, fn)
# Plot surface
X = np.arange(0, 5, 0.2)
Y = np.arange(-5, 5, 0.2)
X, Y = np.meshgrid(X, Y)
Z = c[0] + c[1]*X + c[2]*Y + c[3]*X*Y
surf = ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.1)
ax.view_init(elev=30, azim=80)
plt.show()
Explanation: We now see significant coefficients for the $y$ and $xy$ terms in the interpolating polynomial just as a consequence of adding a small amount of noise. Plotting the surface and the points, we see the dramatic impact of the noise.
End of explanation
U, s, V = np.linalg.svd(A)
print(s)
Explanation: Performing an SVD on the matrix with noise and printing the singular values:
End of explanation
# Create view of U with last two columns removed
U1 = U[:, :2]
# Create view of V with last two columns removed
V1 = V[:2,:]
# Create \Sigma^{+}
S1 = np.diag(1.0/s[:-2])
c = np.transpose(V1).dot(S1.dot(U1.T).dot(f))
print(c)
Explanation: We see that two of the values are considerably smaller than the others. If we set these to zero and follow the least-squares procedure for the rank-deficient problem:
End of explanation
np.random.seed(1)
x0 = np.random.randn(200) + 5.0
x1 = 1.5*x0 + np.random.rand(len(x0))
ax = plt.axes()
ax.scatter(x0, x1, alpha=0.5);
ax.set_xlabel('$x_{1}$');
ax.set_ylabel('$x_{2}$');
Explanation: We see that the fitting polynomial is very close to the noise-free case.
Principal component analysis
Principal component analysis finds a transformation such that the covariance of a data set is zero in the transformed directions, and the variance in these directions is greatest. For a data set, this tells us which are the 'important' parameters in a system.
Consider taking $N = 200$ measurements of two quantities $x_{1}$ and $x_{2}$. We model the system by:
End of explanation
X = np.column_stack((x0, x1))
Explanation: We collect the data in a $200 \times 2$ matrix $\boldsymbol{X}$ (200 measurements, 2 variables):
End of explanation
for c in range(X.shape[1]):
X[:,c] = X[:,c] - np.mean(X[:,c])
C = (X.T).dot(X)/(len(x0)-1.0)
Explanation: We can compute the covariance matrix $\boldsymbol{C}$ by making the columns of $\boldsymbol{X}$ zero mean and computing $\boldsymbol{X}^{T}\boldsymbol{X}/(N-1)$
End of explanation
U, s, V = np.linalg.svd(C)
print(s)
Explanation: The covariance matrix is square and symmetric, so we can diagonalise it by computing the eigenvalues and eigenvectors.
We could also compute the SVD of $\boldsymbol{X}$ since $\boldsymbol{V}$ is made of the eigenvectors of $\boldsymbol{X}^{T}\boldsymbol{X}$:
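A quick cross-check of that statement (my addition): the eigenvalues of $\boldsymbol{C}$ equal the squared singular values of the zero-mean $\boldsymbol{X}$ divided by $N-1$, and the corresponding directions agree up to sign.
python
evals, evecs = np.linalg.eigh(C)
sx = np.linalg.svd(X, compute_uv=False)
print(np.sort(evals)[::-1])     # eigenvalues of C, largest first
print(sx**2/(len(X) - 1.0))     # the same values from the singular values of X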
End of explanation
ax = plt.axes()
ax.set_aspect(1.0);
ax.set_ylim(-4.0, 4.0);
ax.set_xlabel('$x_{1}$')
ax.set_ylabel('$x_{2}$')
ax.quiver(V[0, 0], V[0, 1], angles='xy',scale_units='xy',scale=0.3);
ax.quiver(V[1, 0], V[1, 1], angles='xy',scale_units='xy',scale=1);
ax.scatter(X[:,0], X[:,1], alpha=0.2);
Explanation: Plotting the data set and the principal directions:
End of explanation
<END_TASK> |
178 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a id='top'> </a>
Author
Step1: Cosmic-ray composition effective area analysis
Table of contents
Load simulation DataFrame and apply quality cuts
Define functions to be fit to effective area
Calculate effective areas
Plot result
Step2: Load simulation DataFrame and apply quality cuts
[ back to top ]
Step3: Define energy binning for this analysis
Step4: Define functions to be fit to effective area
Step5: Calculate effective areas
Step6: Fit functions to effective area data
Step7: Plot result
Step8: Effective area as quality cuts are sequentially applied | <ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend
Explanation: <a id='top'> </a>
Author: James Bourbeau
End of explanation
%matplotlib inline
from __future__ import division, print_function
from collections import defaultdict
import os
import numpy as np
from scipy import optimize
from scipy.stats import chisquare
import pandas as pd
import matplotlib.pyplot as plt
import seaborn.apionly as sns
import comptools as comp
color_dict = comp.analysis.get_color_dict()
Explanation: Cosmic-ray composition effective area analysis
Table of contents
Load simulation DataFrame and apply quality cuts
Define functions to be fit to effective area
Calculate effective areas
Plot result
End of explanation
# config = 'IC79'
config = 'IC86.2012'
df_sim = comp.load_sim(config=config, test_size=0)
df_sim
# df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config=config, return_cut_dict=True)
# selection_mask = np.array([True] * len(df_sim))
# # standard_cut_keys = ['IceTopQualityCuts', 'lap_InIce_containment',
# # # 'num_hits_1_60', 'max_qfrac_1_60',
# # 'InIceQualityCuts', 'num_hits_1_60']
# standard_cut_keys = ['passed_IceTopQualityCuts', 'FractionContainment_Laputop_InIce',
# 'passed_InIceQualityCuts', 'num_hits_1_60']
# # for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']:
# # standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)]
# for key in standard_cut_keys:
# selection_mask *= cut_dict_sim[key]
# print(key, np.sum(selection_mask))
# df_sim = df_sim[selection_mask]
Explanation: Load simulation DataFrame and apply quality cuts
[ back to top ]
End of explanation
log_energy_bins = np.arange(5.0, 9.51, 0.05)
# log_energy_bins = np.arange(5.0, 9.51, 0.1)
energy_bins = 10**log_energy_bins
energy_midpoints = (energy_bins[1:] + energy_bins[:-1]) / 2
energy_min_fit, energy_max_fit = 5.8, 7.0
midpoints_fitmask = (energy_midpoints >= 10**energy_min_fit) & (energy_midpoints <= 10**energy_max_fit)
log_energy_bins
np.log10(energy_midpoints[midpoints_fitmask])
Explanation: Define energy binning for this analysis
End of explanation
def constant(energy, c):
return c
def linefit(energy, m, b):
return m*np.log10(energy) + b
def sigmoid_flat(energy, p0, p1, p2):
return p0 / (1 + np.exp(-p1*np.log10(energy) + p2))
def sigmoid_slant(energy, p0, p1, p2, p3):
return (p0 + p3*np.log10(energy)) / (1 + np.exp(-p1*np.log10(energy) + p2))
def red_chisquared(obs, fit, sigma, n_params):
zero_mask = sigma != 0
return np.nansum(((obs[zero_mask] - fit[zero_mask])/sigma[zero_mask]) ** 2) / (len(obs[zero_mask]) - n_params)
# return np.sum(((obs - fit)/sigma) ** 2) / (len(obs) - 1 - n_params)
np.sum(midpoints_fitmask)-3
Explanation: Define functions to be fit to effective area
End of explanation
eff_area, eff_area_error, _ = comp.calculate_effective_area_vs_energy(df_sim, energy_bins)
eff_area_light, eff_area_error_light, _ = comp.calculate_effective_area_vs_energy(df_sim[df_sim.MC_comp_class == 'light'], energy_bins)
eff_area_heavy, eff_area_error_heavy, _ = comp.calculate_effective_area_vs_energy(df_sim[df_sim.MC_comp_class == 'heavy'], energy_bins)
eff_area, eff_area_error, _ = comp.analysis.get_effective_area(df_sim,
energy_bins, energy='MC')
eff_area_light, eff_area_error_light, _ = comp.analysis.get_effective_area(
df_sim[df_sim.MC_comp_class == 'light'],
energy_bins, energy='MC')
eff_area_heavy, eff_area_error_heavy, _ = comp.analysis.get_effective_area(
df_sim[df_sim.MC_comp_class == 'heavy'],
energy_bins, energy='MC')
eff_area_light
Explanation: Calculate effective areas
End of explanation
p0 = [1.5e5, 8.0, 50.0]
popt_light, pcov_light = optimize.curve_fit(sigmoid_flat, energy_midpoints[midpoints_fitmask],
eff_area_light[midpoints_fitmask], p0=p0,
sigma=eff_area_error_light[midpoints_fitmask])
popt_heavy, pcov_heavy = optimize.curve_fit(sigmoid_flat, energy_midpoints[midpoints_fitmask],
eff_area_heavy[midpoints_fitmask], p0=p0,
sigma=eff_area_error_heavy[midpoints_fitmask])
print(popt_light)
print(popt_heavy)
perr_light = np.sqrt(np.diag(pcov_light))
print(perr_light)
perr_heavy = np.sqrt(np.diag(pcov_heavy))
print(perr_heavy)
avg = (popt_light[0] + popt_heavy[0]) / 2
print('avg eff area = {}'.format(avg))
eff_area_light
light_chi2 = red_chisquared(eff_area_light, sigmoid_flat(energy_midpoints, *popt_light),
eff_area_error_light, len(popt_light))
print(light_chi2)
heavy_chi2 = red_chisquared(eff_area_heavy,
sigmoid_flat(energy_midpoints, *popt_heavy),
eff_area_error_heavy, len(popt_heavy))
print(heavy_chi2)
Explanation: Fit functions to effective area data
End of explanation
fig, ax = plt.subplots()
# plot effective area data points with poisson errors
ax.errorbar(np.log10(energy_midpoints), eff_area_light, yerr=eff_area_error_light,
ls='None', marker='.')
ax.errorbar(np.log10(energy_midpoints), eff_area_heavy, yerr=eff_area_error_heavy,
ls='None', marker='.')
# plot corresponding sigmoid fits to effective area
x = 10**np.arange(5.0, 9.5, 0.01)
ax.plot(np.log10(x), sigmoid_flat(x, *popt_light),
color=color_dict['light'], label='light', marker='None', ls='-')
ax.plot(np.log10(x), sigmoid_flat(x, *popt_heavy),
color=color_dict['heavy'], label='heavy', marker='None')
avg_eff_area = (sigmoid_flat(x, *popt_light) + sigmoid_flat(x, *popt_heavy)) / 2
ax.plot(np.log10(x), avg_eff_area,
color=color_dict['total'], label='avg', marker='None')
ax.fill_between(np.log10(x),
avg_eff_area-0.01*avg_eff_area,
avg_eff_area+0.01*avg_eff_area,
color=color_dict['total'], alpha=0.5)
ax.axvline(6.4, marker='None', ls='-.', color='k')
ax.set_ylabel('Effective area [m$^2$]')
ax.set_xlabel(r'$\mathrm{\log_{10}(E_{true}/GeV)}$')
# ax.set_title('$\mathrm{A_{eff} = 143177 \pm 1431.77 \ m^2}$')
ax.grid()
# ax.set_ylim([0, 180000])
ax.set_xlim([5.4, 8.1])
ax.set_title(config)
#set label style
ax.ticklabel_format(style='sci',axis='y')
ax.yaxis.major.formatter.set_powerlimits((0,0))
leg = plt.legend(title='True composition')
for legobj in leg.legendHandles:
legobj.set_linewidth(2.0)
# eff_area_outfile = os.path.join(comp.paths.figures_dir, 'effective-area-{}.png'.format(config))
# comp.check_output_dir(eff_area_outfile)
# plt.savefig(eff_area_outfile)
plt.show()
Explanation: Plot result
End of explanation
df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config='IC79', return_cut_dict=True)
standard_cut_keys = ['num_hits_1_60', 'IceTopQualityCuts', 'lap_InIce_containment',
# 'num_hits_1_60', 'max_qfrac_1_60',
'InIceQualityCuts']
# for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']:
# standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)]
eff_area_dict = {}
eff_area_err_dict = {}
selection_mask = np.array([True] * len(df_sim))
for key in standard_cut_keys:
selection_mask *= cut_dict_sim[key]
print(key, np.sum(selection_mask))
eff_area, eff_area_error, _ = comp.analysis.get_effective_area(df_sim[selection_mask],
energy_bins, energy='MC')
# eff_area, eff_area_error = comp.analysis.effective_area.effective_area(df_sim[selection_mask],
# np.arange(5.0, 9.51, 0.1))
eff_area_dict[key] = eff_area
eff_area_err_dict[key] = eff_area_error
fig, ax = plt.subplots()
cut_labels = {'num_hits_1_60': 'NStations/NChannels', 'IceTopQualityCuts': 'IceTopQualityCuts',
'lap_InIce_containment': 'InIce containment', 'InIceQualityCuts': 'InIceQualityCuts'}
for key in standard_cut_keys:
# plot effective area data points with poisson errors
ax.errorbar(np.log10(energy_midpoints), eff_area_dict[key], yerr=eff_area_err_dict[key],
ls='None', marker='.', label=cut_labels[key], alpha=0.75)
ax.set_ylabel('Effective area [m$^2$]')
ax.set_xlabel(r'$\log_{10}(E_{\mathrm{MC}}/\mathrm{GeV})$')
ax.grid()
# ax.set_ylim([0, 180000])
ax.set_xlim([5.4, 9.6])
#set label style
ax.ticklabel_format(style='sci',axis='y')
ax.yaxis.major.formatter.set_powerlimits((0,0))
leg = plt.legend()
plt.savefig('/home/jbourbeau/public_html/figures/effective-area-cuts.png')
plt.show()
Explanation: Effective area as quality cuts are sequentially applied
End of explanation
<END_TASK> |
179 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Working with events
This tutorial describes event representation and how event arrays are used to
subselect data.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the
Step1: The tutorial tut-events-vs-annotations describes in detail the
different ways of obtaining an
Step2: Reading and writing events from/to a file
Event arrays are
Step3: When writing event arrays to disk, the format will be inferred from the file
extension you provide. By convention, MNE-Python expects events files to
either have an
Step4: .. sidebar
Step5: It is also possible to combine two Event IDs using
Step6: Note, however, that merging events is not necessary if you simply want to
pool trial types for analysis; the next section describes how MNE-Python uses
event dictionaries to map integer Event IDs to more descriptive label
strings.
Mapping Event IDs to trial descriptors
So far in this tutorial we've only been dealing with integer Event IDs, which
were assigned based on DC voltage pulse magnitude (which is ultimately
determined by the experimenter's choices about what signals to send to the
STIM channels). Keeping track of which Event ID corresponds to which
experimental condition can be cumbersome, and it is often desirable to pool
experimental conditions during analysis. You may recall that the mapping of
integer Event IDs to meaningful descriptions for the sample dataset
<sample-dataset> is given in this table
<sample-data-event-dict-table> in the introductory tutorial
<tut-overview>. Here we simply reproduce that mapping as an
event dictionary
Step7: Event dictionaries like this one are used when extracting epochs from
continuous data, and the resulting
Step8: Plotting events and raw data together
Events can also be plotted alongside the
Step9: Making equally-spaced Events arrays
For some experiments (such as those intending to analyze resting-state
activity) there may not be any experimental events included in the raw
recording. In such cases, an Events array of equally-spaced events can be
generated using | <ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
Explanation: Working with events
This tutorial describes event representation and how event arrays are used to
subselect data.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the :class:~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
Explanation: The tutorial tut-events-vs-annotations describes in detail the
different ways of obtaining an :term:Events array <events> from a
:class:~mne.io.Raw object (see the section
overview-tut-events-section for details). Since the sample
dataset <sample-dataset> includes experimental events recorded on
:term:stim channel STI 014, we'll start this tutorial by parsing the
events from that channel using :func:mne.find_events:
End of explanation
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw-eve.fif')
events_from_file = mne.read_events(sample_data_events_file)
assert np.array_equal(events, events_from_file[:len(events)])
Explanation: Reading and writing events from/to a file
Event arrays are :class:NumPy array <numpy.ndarray> objects, so they could
be saved to disk as binary :file:.npy files using :func:numpy.save.
However, MNE-Python provides convenience functions :func:mne.read_events
and :func:mne.write_events for reading and writing event arrays as either
text files (common file extensions are :file:.eve, :file:.lst, and
:file:.txt) or binary :file:.fif files. The example dataset includes the
results of mne.find_events(raw) in a :file:.fif file. Since we've
truncated our :class:~mne.io.Raw object, it will have fewer events than the
events file loaded from disk (which contains events for the entire
recording), but the events should match for the first 60 seconds anyway:
End of explanation
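For completeness, here is a small added example of the matching write call mentioned above (not in the original tutorial); the output filename is hypothetical but follows the -eve.fif naming convention discussed below.
mne.write_events('my_events-eve.fif', events)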
mne.find_events(raw, stim_channel='STI 014')
Explanation: When writing event arrays to disk, the format will be inferred from the file
extension you provide. By convention, MNE-Python expects events files to
either have an :file:.eve extension or to have a file basename ending in
-eve or _eve (e.g., :file:{my_experiment}_eve.fif), and will issue
a warning if this convention is not respected.
Subselecting and combining events
The output of :func:~mne.find_events above (repeated here) told us the
number of events that were found, and the unique integer event IDs present:
End of explanation
events_no_button = mne.pick_events(events, exclude=32)
Explanation: .. sidebar:: Including/excluding events
Just like `~mne.pick_events`, `~mne.read_events` also has ``include``
and ``exclude`` parameters.
If some of those events are not of interest, you can easily subselect events
using :func:mne.pick_events, which has parameters include and
exclude. For example, in the sample data Event ID 32 corresponds to a
subject button press, which could be excluded as:
End of explanation
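As a small added illustration of the sidebar above (not from the original tutorial), the same subselection can also be done at read time:
events_auditory = mne.read_events(sample_data_events_file, include=[1, 2])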
merged_events = mne.merge_events(events, [1, 2, 3], 1)
print(np.unique(merged_events[:, -1]))
Explanation: It is also possible to combine two Event IDs using :func:mne.merge_events;
the following example will combine Event IDs 1, 2 and 3 into a single event
labelled 1:
End of explanation
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
Explanation: Note, however, that merging events is not necessary if you simply want to
pool trial types for analysis; the next section describes how MNE-Python uses
event dictionaries to map integer Event IDs to more descriptive label
strings.
Mapping Event IDs to trial descriptors
So far in this tutorial we've only been dealing with integer Event IDs, which
were assigned based on DC voltage pulse magnitude (which is ultimately
determined by the experimenter's choices about what signals to send to the
STIM channels). Keeping track of which Event ID corresponds to which
experimental condition can be cumbersome, and it is often desirable to pool
experimental conditions during analysis. You may recall that the mapping of
integer Event IDs to meaningful descriptions for the sample dataset
<sample-dataset> is given in this table
<sample-data-event-dict-table> in the introductory tutorial
<tut-overview>. Here we simply reproduce that mapping as an
event dictionary:
End of explanation
fig = mne.viz.plot_events(events, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp, event_id=event_dict)
fig.subplots_adjust(right=0.7) # make room for legend
Explanation: Event dictionaries like this one are used when extracting epochs from
continuous data, and the resulting :class:~mne.Epochs object allows pooling
by requesting partial trial descriptors. For example, if we wanted to pool
all auditory trials, instead of merging Event IDs 1 and 2 using the
:func:~mne.merge_events function, we can make use of the fact that the keys
of event_dict contain multiple trial descriptors separated by /
characters: requesting 'auditory' trials will select all epochs with
Event IDs 1 and 2; requesting 'left' trials will select all epochs with
Event IDs 1 and 3. An example of this is shown in a later tutorial.
.. TODO replace above sentence when the relevant tut is ready:
An example of this is shown later, in the epoch-pooling section of
the epochs-intro-tutorial tutorial.
Plotting events
Another use of event dictionaries is when plotting events, which can serve as
a useful check that your event signals were properly sent to the STIM
channel(s) and that MNE-Python has successfully found them. The function
:func:mne.viz.plot_events will plot each event versus its sample number
(or, if you provide the sampling frequency, it will plot them versus time in
seconds). It can also account for the offset between sample number and sample
index in Neuromag systems, with the first_samp parameter. If an event
dictionary is provided, it will be used to generate a legend:
End of explanation
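As a hedged sketch of the pooling syntax mentioned above (the real epoching walkthrough lives in a later tutorial; the tmin/tmax values here are arbitrary illustrative choices):
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
                    preload=False, on_missing='ignore')
print(epochs['auditory'])  # pools Event IDs 1 and 2
print(epochs['left'])      # pools Event IDs 1 and 3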
raw.plot(events=events, start=5, duration=10, color='gray',
event_color={1: 'r', 2: 'g', 3: 'b', 4: 'm', 5: 'y', 32: 'k'})
Explanation: Plotting events and raw data together
Events can also be plotted alongside the :class:~mne.io.Raw object they
were extracted from, by passing the Event array as the events parameter
of :meth:raw.plot <mne.io.Raw.plot>:
End of explanation
new_events = mne.make_fixed_length_events(raw, start=5, stop=50, duration=2.)
Explanation: Making equally-spaced Events arrays
For some experiments (such as those intending to analyze resting-state
activity) there may not be any experimental events included in the raw
recording. In such cases, an Events array of equally-spaced events can be
generated using :func:mne.make_fixed_length_events:
End of explanation
<END_TASK> |
180 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Plotting a builtin waveform
In the tools sub-package is a module called plot_source_wave which can be used to plot any of the builtin waveforms in time and frequency domains. The module takes the following arguments
Step1: Plotting a user-defined waveform
This notebook can be used to plot a user-defined waveform in time and frequency domains.
You can use the following code to experiment (in this notebook) with plotting different waveforms.
Step2: Determining a spatial resolution
You can use the following code as a guide to determining a spatial resolution for a simulation. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from gprMax.waveforms import Waveform
from tools.plot_source_wave import check_timewindow, mpl_plot
w = Waveform()
w.type = 'ricker'
w.amp = 1
w.freq = 25e6
timewindow = 300e-9
dt = 8.019e-11
timewindow, iterations = check_timewindow(timewindow, dt)
plt = mpl_plot(w, timewindow, dt, iterations, fft=True)
Explanation: Plotting a builtin waveform
In the tools sub-package is a module called plot_source_wave which can be used to plot any of the builtin waveforms in time and frequency domains. The module takes the following arguments:
type is the type of the waveform, e.g. ricker
amp is the amplitude of the waveform
freq is the centre frequency of the waveform
timewindow is the time window used to view the waveform, i.e. the time window of the proposed simulation
dt is the time step used to view the waveform, i.e. the time step of the proposed simulation
There is an optional argument:
-fft a switch to turn on the FFT plotting for a single field component or current
For example (to use the module outside this notebook) to plot a Ricker waveform (and FFT) with an amplitude of 1, centre frequency of 1.5GHz and with a time window of 3ns and time step of 1.926ps:
python -m tools.plot_source_wave ricker 1 1.5e9 3e-9 1.926e-12 -fft
You can use the following code to experiment (in this notebook) with plotting different waveforms.
End of explanation
%matplotlib inline
import numpy as np
from gprMax.waveforms import Waveform
from tools.plot_source_wave import check_timewindow, mpl_plot
waveformvalues = np.loadtxt('/Users/cwarren/Desktop/sajad/SW_corrected.txt', skiprows=1, dtype=np.float32)
w = Waveform()
w.type = 'user'
w.amp = 1
w.freq = 0
w.uservalues = waveformvalues[:]
timewindow = 2e-9
dt = 4.71731e-12
timewindow, iterations = check_timewindow(timewindow, dt)
w.uservalues = np.zeros(iterations, dtype=np.float32)
w.uservalues[0:len(waveformvalues)] = waveformvalues[:]
plt = mpl_plot(w, timewindow, dt, iterations, fft=True)
Explanation: Plotting a user-defined waveform
This notebook can be used to plot a user-defined waveform in time and frequency domains.
You can use the following code to experiment (in this notebook) with plotting different waveforms.
End of explanation
from math import sqrt
# Speed of light in vacuum (m/s)
c = 299792458
# Highest relative permittivity present in model
er = 81
# Maximum frequency present in model
fmax = 80e6
# Minimum wavelength
wmin = c / (fmax * sqrt(er))
# Maximum spatial resolution (allowing 10 cells per wavelength)
dmin = wmin / 10
# Time steps at CFL limits for cubic cells
dt3D = dmin / (sqrt(3) * c)
dt2D = dmin / (sqrt(2) * c)
print('Minimum wavelength: {:g} m'.format(wmin))
print('Maximum spatial resolution: {:g} m'.format(dmin))
print('Time step for 3D cubic cell: {:g} s'.format(dt3D))
print('Time step for 2D square cell: {:g} s'.format(dt2D))
Explanation: Determining a spatial resolution
You can use the following code as a guide to determining a spatial resolution for a simulation.
End of explanation
<END_TASK> |
181 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data
Both datasets are text collections from this site.
TCP-ECCO (170mb uncompressed) can be downloaded here
Lincoln (700kb uncompressed) can be downloaded here
Step1: Initialize swhoosh index
Step2: Add documents to swhoosh index
Step3: Save and close, then reload swhoosh index
Note that the index MUST be saved -- it does not autosave!
Step4: Wrap Index in an IndexReader and get statistics needed for BM25
Step5: Get term info and postings for 'lincoln'
Step6: Run a BM25 search with Whoosh API
Step7: Compare results to Whoosh
Step8: Note
Step9: And repeating with multiprocessing enabled
Step10: Comparing to whoosh default
Step11: And to whoosh with multiprocessing enabled
Step12: Again, the matcher results are the same
Step14: Query Performance (BM25)
To benchmark this, we extract tokens from one of Lincoln's speeches (not in TCP-ECCO), and select queries at random from the resulting ~5000 tokens.
Step15: 3 word queries
Step16: 6 word queries
Step17: 30 word queries | <ASSISTANT_TASK:>
Python Code:
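# Imports assumed by this notebook (reconstructed here as a best guess -- the original
# setup cell is not shown). `swhoosh` is the custom package being benchmarked and
# `simple_schema` is assumed to be a small helper, defined elsewhere, that returns the
# whoosh Schema shared by both indexes.
import glob
import pickle
import time
import swhoosh
import swhoosh.postings
import whoosh.index
from whoosh.qparser import QueryParser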
def get_lincoln():
for filepath in sorted(glob.glob('Lincoln/*.txt')):
with open(filepath, 'r', encoding='latin') as f:
doc = f.read()
yield {'filepath': filepath, 'doc': doc}
def get_TCP():
for filepath in sorted(glob.glob('TCP-ECCO/*.txt')):
with open(filepath, 'r', encoding='latin') as f:
doc = f.read()
yield {'filepath': filepath, 'doc': doc}
Explanation: Data
Both datasets are text collections from this site.
TCP-ECCO (170mb uncompressed) can be downloaded here
Lincoln (700kb uncompressed) can be downloaded here
End of explanation
s = swhoosh.Index('randomIdx/randomIdx', simple_schema(), reset=True)
s.load()
Explanation: Initialize swhoosh index
End of explanation
t = time.time()
s.add_documents(get_lincoln())
print("TIME:", time.time() - t)
Explanation: Add documents to swhoosh index
End of explanation
s.save_and_close()
with open('randomIdx/randomIdx.manager', 'rb') as f:
s = pickle.load(f)
s.load()
Explanation: Save and close, then reload swhoosh index
Note that the index MUST be saved -- it does not autosave!
End of explanation
r = s.reader()
print(r.doc_count())
print(r.doc_frequency('doc',b'lincoln'))
print(r.doc_field_length(21, 'doc'))
print(r.avg_field_length('doc'))
Explanation: Wrap Index in an IndexReader and get statistics needed for BM25
End of explanation
# returns (overall frequency, num docs, start loc in postings file, postings size)
s._idx['doc'].terminfo(b'lincoln')
[swhoosh.postings.load2(a[2], a[1]) for a in s._idx['doc']._postings(b'lincoln')]
# what the postings look like: (docId, frequency, positions)
s._idx['doc']._postings(b'lincoln')
Explanation: Get term info and postings for 'lincoln'
End of explanation
qp = QueryParser("doc", schema=s._schema)
q = qp.parse("lincoln")
with s.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
Explanation: Run a BM25 search with Whoosh API
End of explanation
def make_clean_index(ix_dirname, paths, procs=1):
ix = whoosh.index.create_in(ix_dirname, schema=simple_schema())
writer = ix.writer(procs=procs)
for filepath in paths:
add_doc(writer, filepath)
writer.commit()
return ix
def add_doc(writer, filepath):
with open(filepath, 'rb') as f:
text = f.read().decode('latin')
writer.add_document(doc=text, filepath=filepath)
t = time.time()
ix = make_clean_index('wind', sorted(glob.glob('Lincoln/*.txt')))
print("TIME:", time.time() - t)
with ix.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
Explanation: Compare results to Whoosh
End of explanation
s = swhoosh.Index('randomIdx2/randomIdx2', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents(get_TCP())
print("TIME:", time.time() - t)
Explanation: Note: the BM25 scores returned by whoosh's default settings are a tiny bit smaller because the default whoosh reader adds 1 to the current document length for some reason (I don't think this is correct).
Indexing a bigger collection
End of explanation
s = swhoosh.Index('randomIdx2/randomIdx2', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents_multiprocessing(get_TCP(), num_procs=4)
print("TIME:", time.time() - t)
s.save()
Explanation: And repeating with multiprocessing enabled:
End of explanation
t = time.time()
ix = make_clean_index('wind2', sorted(glob.glob('TCP-ECCO/*.txt')))
print("TIME:", time.time() - t)
Explanation: Comparing to whoosh default:
End of explanation
t = time.time()
ix = make_clean_index('wind2', sorted(glob.glob('TCP-ECCO/*.txt')), procs=4)
print("TIME:", time.time() - t)
Explanation: And to whoosh with multiprocessing enabled:
End of explanation
with s.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
print('')
with ix.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
Explanation: Again, the matcher results are the same:
End of explanation
with open('randomIdx2/randomIdx2.manager', 'rb') as f:
s = pickle.load(f)
s.load()
ix = whoosh.index.open_dir('wind2')
import numpy as np
s1 = s.searcher()
s2 = ix.searcher()
qp = QueryParser("doc", schema=s._schema)
with open('Lincoln/24-speech-1856.txt', 'r', encoding='latin') as f:
data = f.read()
query_vocab = [t.text for t in s._schema['doc'].analyzer(data)]
print('Length of query vocab:',len(query_vocab))
def random_n_query(n):
    """Generates a random query of length n"""
return ' '.join(np.random.choice(query_vocab, size=n))
def benchmark_n_query(n, trials):
t_swhoosh, t_whoosh = 0, 0
for i in range(trials):
q = qp.parse(random_n_query(n))
t = time.time()
results = s1.search(q)
t_swhoosh += time.time() - t
t = time.time()
results = s2.search(q)
t_whoosh += time.time() - t
print('- Swhoosh time per query:', "{:.2f}".format(t_swhoosh / trials * 1000), "ms")
print('- Whoosh time per query:', "{:.2f}".format(t_whoosh / trials * 1000), "ms")
return t_swhoosh/trials, t_whoosh/trials
Explanation: Query Performance (BM25)
To benchmark this, we extract tokens from one of Lincoln's speeches (not in TCP-ECCO), and select queries at random from the resulting ~5000 tokens.
End of explanation
x, y = benchmark_n_query(3, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
Explanation: 3 word queries
End of explanation
x, y = benchmark_n_query(6, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
Explanation: 6 word queries
End of explanation
x, y = benchmark_n_query(30, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
Explanation: 30 word queries
End of explanation
<END_TASK> |
182 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sparse Linear Inverse Demo with AMP
In this demo, we illustrate how to use the vampyre package for a simple sparse linear inverse problem. The problem is to estimate a sparse vector z0 from linear measurements of the form y=A.dot(z0)+w where w is Gaussian noise and A is a known linear transform -- a basic problem in compressed sensing. By sparse, we mean that the vector z0 has few non-zero values. Knowing that the vector is sparse can be used for improved reconstruction if an appropriate sparse reconstruction algorithm is used.
There are a large number of algorithms for sparse linear inverse problems. This demo uses the Generalized Approximate Message Passing (GAMP) method, one of several methods that will be included in the vampyre package. In going through this demo, you will learn to
Step1: We will also load the other packages we will use in this demo. This could be done before the above import.
Step2: Generating Synthetic Data
We begin by generating synthetic data. The model is
Step3: To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input z0, we will use Bernouli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components z0[j] are i.i.d. where each component can be on or off.
With probability prob_on, z0[i] is on with z0[i] ~ N(z0_mean_on,z0_var_on)
With probability 1-prob_on, z0[i] is off with z0[i]=0.
Thus, on average, only prob_on*nz0 components are *on*. We set the parameters for the model as well as the SNR for the measurements.
Step4: Using these parameters, we can generate random sparse z0 following this distribution with the following simple code.
Step5: To illustrate the sparsity, we plot the vector z0. We can see from this plot that the majority of the components of z0 are zero.
Step6: Now, we create a random transform A and output z1 = A.dot(z0)
Step7: Finally, we add noise at the desired SNR
Step8: Setting up the AMP / GAMP Solver
Now that we have created the sparse data, we will use the vampyre package to recover z0 and z1 from y. In vampyre the methods to perform this estimation are called solvers. The basic Approximate Message Passing (AMP) algorithm was developed in
Step9: We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$. We give this a name Input since it corresponds to the input. But, any naming is fine. Or, you can let vampyre give it a generic name.
Step10: We next define the operator A. In this case the operator is defined by a matrix so we use the MatrixLT class.
Step11: Finally, we describe the likelihood function, p(y|z1). Since y=z1+w, we can describe this as a Gaussian estimator.
Step12: Running the GAMP Solver
Having described the input and output estimators and the variance handler, we can now construct a GAMP solver. The constructor takes the input and output estimators, the variance handler and other parameters. The parameter nit is the number of iterations. This is fixed for now. Later, we will add auto-termination. The other parameter, hist_list, is optional and will be described momentarily.
Step13: We can print a summary of the model which indicates the dimensions and the estimators.
Step14: We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous.
Step15: The VAMP solver estimate is the field zhat. We plot one column of this (icol=0) and compare it to the corresponding column of the true matrix z. You should see a very good match.
Step16: We can measure the normalized mean squared error as follows. The GAMP solver also produces an estimate of the MSE in the variable zvar0. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from VAMP.
Step18: Finally, we can plot the actual and predicted MSE as a function of the iteration number. When solver was constructed, we passed an argument hist_list=['z0', 'zvar0']. This told the solver to store the value of the estimate z0 and predicted error variance zvar0 with each iteration. We can recover these values from solver.hist_dict, the history dictionary. Using the values we can compute and plot the normalized MSE on each iteration. Since we are going to plot several times in this demo, we wrap the plotting routine in a function, plot_z0_est().
When we run plot_z0_est() we see that GAMP gets a low MSE in very few iterations, about 10.
Step19: Damping and Stability
A significant problem with GAMP is its stability. GAMP and AMP are designed for Gaussian i.i.d. matrices. For other matrices, the algorithms can diverge. This divergence issue is one of the main difficulties in using GAMP and AMP in practice.
Recent research has shown that the convergence appears to be related to condition number of the matrix. Matrices A with higher condition numbers tend to cause GAMP / AMP to diverge. See, for example
Step20: Now, we create synthetic data based on this matrix and re-run GAMP.
Step21: We plot the results and we can see that the algorithm diverges.
Step22: To fix the problem, one can apply damping. In damping, the GAMP algorithm is adjusted to take a partial step as controlled by a parameter step between 0 and 1. In general, the theory is that step <= 1/sqrt(cond_num). In practice, you can try different step sizes until you get reasonable results. A warning though | <ASSISTANT_TASK:>
Python Code:
import os
import sys
vp_path = os.path.abspath('../../')
if not vp_path in sys.path:
sys.path.append(vp_path)
import vampyre as vp
Explanation: Sparse Linear Inverse Demo with AMP
In this demo, we illustrate how to use the vampyre package for a simple sparse linear inverse problem. The problem is to estimate a sparse vector z0 from linear measurements of the form y=A.dot(z0)+w where w is Gaussian noise and A is a known linear transform -- a basic problem in compressed sensing. By sparse, we mean that the vector z0 has few non-zero values. Knowing that the vector is sparse can be used for improved reconstruction if an appropriate sparse reconstruction algorithm is used.
There are a large number of algorithms for sparse linear inverse problems. This demo uses the Generalized Approximate Message Passing (GAMP) method, one of several methods that will be included in the vampyre package. In going through this demo, you will learn to:
* Load the vampyre package
* Create synthetic data for a sparse linear inverse problem
* Set up the GAMP method in the vampyre package to perform the estimation for the linear inverse problem
* Measure the mean squared error (MSE) and compare the value to the predicted value from the VAMP method.
* Using the hist_list feature to track variables per iteration of the algorithm.
* Adjust the damping factor for ill-conditioned matrices.
An almost identical demo is available for the Vector AMP (VAMP) method. The VAMP method is more robust and similar to use. You can start on that demo instead.
Importing the Package
First we need to import the vampyre package. Since vampyre may not be installed on your Python path, you need to add its location to the system path before importing it. In this case, we have specified the path using a relative location, but you can change this depending on where vampyre is located.
End of explanation
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: We will also load the other packages we will use in this demo. This could be done before the above import.
End of explanation
# Parameters
nz0 = 1000 # number of components of z0
nz1 = 500 # number of measurements z1
# Compute the shapes
zshape0 = (nz0,) # Shape of z0 matrix
zshape1 = (nz1,) # Shape of z1 matrix = shape of y matrix
Ashape = (nz1,nz0) # Shape of A matrix
Explanation: Generating Synthetic Data
We begin by generating synthetic data. The model is:
y = z1 + w, z1 = A.dot(z0)
where z0 and z1 are the unknown vectors, A is a known transform and w is noise. First, we set the dimensions and the shapes of the vectors we will use.
End of explanation
prob_on = 0.1 # fraction of components that are *on*
z0_mean_on = 0 # mean for the on components
z0_var_on = 1 # variance for the on components
snr = 30 # SNR in dB
Explanation: To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input z0, we will use a Bernoulli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components z0[j] are i.i.d. where each component can be on or off.
With probability prob_on, z0[i] is on with z0[i] ~ N(z0_mean_on,z0_var_on)
With probability 1-prob_on, z0[i] is off with z0[i]=0.
Thus, on average, only prob_on*nz0 components are *on*. We set the parameters for the model as well as the SNR for the measurements.
End of explanation
# Generate the random input
z0_on = np.random.normal(z0_mean_on, np.sqrt(z0_var_on), zshape0)
u = np.random.uniform(0, 1, zshape0) < prob_on
z0 = z0_on*u
Explanation: Using these parameters, we can generate random sparse z0 following this distribution with the following simple code.
End of explanation
ind = np.arange(nz0)
plt.plot(ind,z0);
Explanation: To illustrate the sparsity, we plot the vector z0. We can see from this plot that the majority of the components of z0 are zero.
End of explanation
A = np.random.normal(0, 1/np.sqrt(nz0), Ashape)
z1 = A.dot(z0)
Explanation: Now, we create a random transform A and output z1 = A.dot(z0)
End of explanation
zvar1 = np.mean(np.abs(z1)**2)
wvar = zvar1*np.power(10, -0.1*snr)
w = np.random.normal(0,np.sqrt(wvar), zshape1)
y = z1 + w
Explanation: Finally, we add noise at the desired SNR
End of explanation
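A small optional sanity check (added here, not in the original demo) that the realized SNR is close to the 30 dB target:
print('Empirical SNR: {:.1f} dB'.format(10*np.log10(np.mean(z1**2)/np.mean(w**2))))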
est0_off = vp.estim.DiscreteEst(0,1,zshape0)
est0_on = vp.estim.GaussEst(z0_mean_on, z0_var_on,zshape0)
Explanation: Setting up the AMP / GAMP Solver
Now that we have created the sparse data, we will use the vampyre package to recover z0 and z1 from y. In vampyre the methods to perform this estimation are called solvers. The basic Approximate Message Passing (AMP) algorithm was developed in:
Donoho, David L., Arian Maleki, and Andrea Montanari. "Message-passing algorithms for compressed sensing." Proceedings of the National Academy of Sciences 106.45 (2009): 18914-18919.
The vampyre package currently implements a slightly more general solver, called Generalized AMP described in:
Rangan, Sundeep. "Generalized approximate message passing for estimation with random linear mixing." Proc. IEEE International Symposium on Information Theory (ISIT), 2011.
GAMP can handle nonlinear output channels. In this demo, we will restrict our attention to the linear Gaussian channel, so the GAMP solver essentially implements the AMP algorithm.
Similar to most of the solvers in the vampyre package, the GAMP solver needs precise specifications of the probability distributions of z0, z1 and y. For the linear inverse problem, we will specify three components:
* The prior p(z0);
* The transform A such that z1 = A.dot(z_0)
* The likelihood p(y|z1).
Both the prior and likelihood are described by estimators. The transform is described by an operator.
We first describe the estimator for the prior p(z0). The vampyre package will eventually have a large number of estimators to describe various densities. In this simple demo, p(z0) is what is called a mixture distribution since z0 is one distribution with probability 1-prob_on and a second distribution with probability prob_on. To describe this mixture distribution in the vampyre package, we need to first create estimator classes for each component distribution. To this end, the following code creates two estimators:
* est0_off: The estimator corresponding to the z0[j]=0. This is simply a discrete distribution with a point mass at zero.
* est0_on: The estimator corresponding to the case when z0[j] = N(z0_mean_on, z0_var_on). This is a Gaussian distribution
End of explanation
est_list = [est0_off, est0_on]
pz0 = np.array([1-prob_on, prob_on])
est0 = vp.estim.MixEst(est_list, w=pz0, name='Input')
Explanation: We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$. We give this a name Input since it corresponds to the input. But, any naming is fine. Or, you can let vampyre give it a generic name.
End of explanation
Aop = vp.trans.MatrixLT(A,zshape0)
Explanation: We next define the operator A. In this case the operator is defined by a matrix so we use the MatrixLT class.
End of explanation
est1 = vp.estim.GaussEst(y,wvar,zshape1,name='Output')
Explanation: Finally, we describe the likelihood function, p(y|z1). Since y=z1+w, we can describe this as a Gaussian estimator.
End of explanation
nit = 20 # number of iterations
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit)
Explanation: Running the GAMP Solver
Having described the input and output estimators and the variance handler, we can now construct a GAMP solver. The constructor takes the input and output estimators, the variance handler and other parameters. The parameter nit is the number of iterations. This is fixed for now. Later, we will add auto-termination. The other parameter, hist_list, is optional and will be described momentarily.
End of explanation
solver.summary()
Explanation: We can print a summary of the model which indicates the dimensions and the estimators.
End of explanation
solver.solve()
Explanation: We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous.
End of explanation
zhat0 = solver.z0
ind = np.array(range(nz0))
plt.plot(ind,z0)
plt.plot(ind,zhat0)
plt.legend(['True', 'Estimate']);
Explanation: The VAMP solver estimate is the field zhat. We plot one column of this (icol=0) and compare it to the corresponding column of the true matrix z. You should see a very good match.
End of explanation
zerr0_act = np.mean(np.abs(zhat0-z0)**2)
zerr0_pred = solver.zvar0
zpow0 = np.mean(np.abs(z0)**2)
mse_act = 10*np.log10(zerr0_act/zpow0)
mse_pred = 10*np.log10(zerr0_pred/zpow0)
print("Normalized MSE (dB): actual {0:f} pred {1:f}".format(mse_act, mse_pred))
Explanation: We can measure the normalized mean squared error as follows. The GAMP solver also produces an estimate of the MSE in the variable zvar0. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from VAMP.
End of explanation
def plot_z0_est(solver,z0):
    """Plots the true and predicted MSE for the estimates of z0"""
# Compute the MSE as a function of the iteration
zhat0_hist = solver.hist_dict['z0']
zvar0_hist = solver.hist_dict['zvar0']
nit = len(zhat0_hist)
mse_act = np.zeros(nit)
mse_pred = np.zeros(nit)
for it in range(nit):
zerr0_act = np.mean(np.abs(zhat0_hist[it]-z0)**2)
zerr0_pred = zvar0_hist[it]
mse_act[it] = 10*np.log10(zerr0_act/zpow0)
mse_pred[it] = 10*np.log10(zerr0_pred/zpow0)
plt.plot(range(nit), mse_act, 'o-', linewidth=2)
plt.plot(range(nit), mse_pred, 's', linewidth=1)
plt.xlabel('Iteration')
plt.ylabel('Normalized MSE (dB)')
plt.legend(['Actual', 'Predicted'])
plt.grid()
plot_z0_est(solver,z0)
Explanation: Finally, we can plot the actual and predicted MSE as a function of the iteration number. When solver was constructed, we passed an argument hist_list=['z0', 'zvar0']. This told the solver to store the value of the estimate z0 and predicted error variance zvar0 with each iteration. We can recover these values from solver.hist_dict, the history dictionary. Using the values we can compute and plot the normalized MSE on each iteration. Since we are going to plot several times in this demo, we wrap the plotting routine in a function, plot_z0_est().
When we run plot_z0_est() we see that GAMP gets a low MSE in very few iterations, about 10.
End of explanation
# Generate a random transform
A = vp.trans.rand_rot_invariant_mat(nz1,nz0,cond_num=10)
Aop = vp.trans.MatrixLT(A,zshape0)
z1 = A.dot(z0)
Explanation: Damping and Stability
A significant problem with GAMP is its stability. GAMP and AMP are designed for Gaussian i.i.d. matrices. For other matrices, the algorithms can diverge. This divergence issue is one of the main difficulties in using GAMP and AMP in practice.
Recent research has shown that the convergence appears to be related to condition number of the matrix. Matrices A with higher condition numbers tend to cause GAMP / AMP to diverge. See, for example:
* Rangan, Sundeep, Philip Schniter, and Alyson Fletcher. "On the convergence of approximate message passing with arbitrary matrices." Proc. IEEE International Symposium on Information Theory (ISIT), 2014.
To illustrate we create a random matrix with a specified condition number. This can be done with the rand_rot_invariant command. Specifically, it creates a matrix A=USV.T where U and V are random orthogonal matrices and S has a specified condition number.
End of explanation
# Add noise
zvar1 = np.mean(np.abs(z1)**2)
wvar = zvar1*np.power(10, -0.1*snr)
w = np.random.normal(0,np.sqrt(wvar), zshape1)
y = z1 + w
# Create the estimator
est1 = vp.estim.GaussEst(y,wvar,zshape1,name='Output')
# Run GAMP
nit = 20
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit)
solver.solve()
Explanation: Now, we create synthetic data based on this matrix and re-run GAMP.
End of explanation
plot_z0_est(solver,z0)
Explanation: We plot the results and we can see that the algorithm diverges.
End of explanation
# Run GAMP with damping
nit = 200
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit,step=0.3)
solver.solve()
# Plot the results
plot_z0_est(solver,z0)
Explanation: To fix the problem, one can apply damping. In damping, the GAMP algorithm is adjusted to take a partial step as controlled by a parameter step between 0 and 1. In general, the theory is that step <= 1/sqrt(cond_num). In practice, you can try different step sizes until you get reasonable results. A warning though: Sometimes you never get great results.
In this case, we take step=0.3. We also need to run the algorithm for many more iterations. We see we get better results although we have to run for more iterations.
End of explanation
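To make the step <= 1/sqrt(cond_num) rule of thumb concrete (an added illustration, not part of the original demo), the suggested damping for the cond_num=10 matrix used above is about 0.32, consistent with the step=0.3 chosen here:
cond_num = 10
print('Suggested damping step <= {:.2f}'.format(1/np.sqrt(cond_num)))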
<END_TASK> |
183 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Linear Elasticity in 2D for 3 Phases
Introduction
This example provides a demonstration of using PyMKS to compute the linear strain field for a three-phase composite material. It demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element. Finally, the influence coefficients are scaled up and the MKS results are again compared with the finite element data for a large problem.
PyMKS uses the finite element tool SfePy to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.
Elastostatics Equations and Boundary Conditions
The governing equations for elastostatics and the boundary conditions used in this example are the same as those provided in the Linear Elastic in 2D example.
Note that an inappropriate boundary condition is used in this example because current version of SfePy is unable to implement a periodic plus displacement boundary condition. This leads to some issues near the edges of the domain and introduces errors into the resizing of the coefficients. We are working to fix this issue, but note that the problem is not with the MKS regression itself, but with the calibration data used. The finite element package ABAQUS includes the displaced periodic boundary condition and can be used to calibrate the MKS regression correctly.
Modeling with MKS
Calibration Data and Delta Microstructures
The first order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. If this condition is met we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [1].
Because we are using distinct phases and the contrast is low enough to only need the first-order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first-order influence coefficients [2].
Here we use the make_delta_microstructure function from pymks.datasets to create the delta microstructures needed to calibrate the first-order influence coefficients for a three-phase microstructure. The make_delta_microstructure function uses SfePy to generate the data.
Step1: Let's take a look at a few of the delta microstructures by importing draw_microstructures from pymks.tools.
Step2: Using delta microstructures for the calibration of the first-order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases with the center cell having an alternative phase from the remainder of the domain. The number of delta microstructures that are needed to calibrate the first-order coefficients is $N(N-1)$ where $N$ is the number of phases, therefore in this example we need 6 delta microstructures.
Generating Calibration Data
The make_elasticFEstrain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the ElasticFESimulation class to compute the strain fields.
In this example, let's look at a three-phase microstructure with elastic moduli values of 80, 100 and 120 and Poisson's ratio values all equal to 0.3. Let's also set the macroscopic imposed strain equal to 0.02. All of these parameters used in the simulation must be passed into the make_elasticFEstrain_delta function. The number of Poisson's ratio values and elastic moduli values indicates the number of phases. Note that make_elasticFEstrain_delta does not take a number of samples argument as the number of samples to calibrate the MKS is fixed by the number of phases.
Step3: Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.
Step4: Because slice(None) (the default slice operator in Python, equivalent to array[
Step5: Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first-order influence coefficients.
Step6: That's it, the influence coefficients have been calibrated. Let's take a look at them.
Step7: The influence coefficients for $l=0$ and $l = 1$ have a Gaussian-like shape, while the influence coefficients for $l=2$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but are equally as important. They are equivalent to the constant term in multiple linear regression with categorical variables.
Predict of the Strain Field for a Random Microstructure
Let's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random two-phase microstructure and compare it with the results from a finite element simulation.
The make_elasticFEstrain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.
Step8: Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.
Now, to get the strain field from the MKSLocalizationModel, just pass the same microstructure to the predict method.
Step9: Finally let's compare the results from finite element simulation and the MKS model.
Step10: Let's plot the difference between the two strain fields.
Step11: The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.
Resizing the Coefficients to use on Larger Microstructures
The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], but accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new larger random microstructure and its strain field.
Step12: The influence coefficients that have already been calibrated on a $n$ by $n$ delta microstructures, need to be resized to match the shape of the new larger $m$ by $m$ microstructure that we want to compute the strain field for. This can be done by passing the shape of the new larger microstructure into the resize_coeff method.
Step13: Let's now take a look at the resized influence coefficients.
Step14: Because the coefficients have been resized, they will no longer work for our original $n$ by $n$ sized microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. Just like before, just pass the microstructure as the argument of the predict method to get the strain field.
Step15: Again, let's plot the difference between the two strain fields. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
n = 21
n_phases = 3
from pymks.tools import draw_microstructures
from pymks.datasets import make_delta_microstructures
X_delta = make_delta_microstructures(n_phases=n_phases, size=(n, n))
Explanation: Linear Elasticity in 2D for 3 Phases
Introduction
This example provides a demonstration of using PyMKS to compute the linear strain field for a three-phase composite material. It demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element. Finally, the influence coefficients are scaled up and the MKS results are again compared with the finite element data for a large problem.
PyMKS uses the finite element tool SfePy to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.
Elastostatics Equations and Boundary Conditions
The governing equations for elastostatics and the boundary conditions used in this example are the same as those provided in the Linear Elastic in 2D example.
Note that an inappropriate boundary condition is used in this example because current version of SfePy is unable to implement a periodic plus displacement boundary condition. This leads to some issues near the edges of the domain and introduces errors into the resizing of the coefficients. We are working to fix this issue, but note that the problem is not with the MKS regression itself, but with the calibration data used. The finite element package ABAQUS includes the displaced periodic boundary condition and can be used to calibrate the MKS regression correctly.
Modeling with MKS
Calibration Data and Delta Microstructures
The first order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. If this condition is met we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [1].
Because we are using distinct phases and the contrast is low enough to only need the first-order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first-order influence coefficients [2].
Here we use the make_delta_microstructure function from pymks.datasets to create the delta microstructures needed to calibrate the first-order influence coefficients for a three-phase microstructure. The make_delta_microstructure function uses SfePy to generate the data.
End of explanation
draw_microstructures(X_delta[::2])
Explanation: Let's take a look at a few of the delta microstructures by importing draw_microstructures from pymks.tools.
End of explanation
from pymks.datasets import make_elastic_FE_strain_delta
from pymks.tools import draw_microstructure_strain
elastic_modulus = (80, 100, 120)
poissons_ratio = (0.3, 0.3, 0.3)
macro_strain = 0.02
size = (n, n)
X_delta, strains_delta = make_elastic_FE_strain_delta(elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio,
size=size, macro_strain=macro_strain)
Explanation: Using delta microstructures for the calibration of the first-order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases with the center cell having an alternative phase from the remainder of the domain. The number of delta microstructures that are needed to calibrate the first-order coefficients is $N(N-1)$ where $N$ is the number of phases, therefore in this example we need 6 delta microstructures.
Generating Calibration Data
The make_elasticFEstrain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the ElasticFESimulation class to compute the strain fields.
In this example, let's look at a three-phase microstructure with elastic moduli values of 80, 100 and 120 and Poisson's ratio values all equal to 0.3. Let's also set the macroscopic imposed strain equal to 0.02. All of these parameters used in the simulation must be passed into the make_elasticFEstrain_delta function. The number of Poisson's ratio values and elastic moduli values indicates the number of phases. Note that make_elasticFEstrain_delta does not take a number of samples argument as the number of samples to calibrate the MKS is fixed by the number of phases.
End of explanation
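As a quick check of the N(N-1) count stated above (an added assertion, not in the original example):
assert X_delta.shape[0] == n_phases * (n_phases - 1)  # 3 phases -> 6 delta microstructures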
draw_microstructure_strain(X_delta[0], strains_delta[0])
Explanation: Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.
End of explanation
from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis
prim_basis =PrimitiveBasis(n_states=3, domain=[0, 2])
model = MKSLocalizationModel(basis=prim_basis)
Explanation: Because slice(None) (the default slice operator in Python, equivalent to array[:]) was passed in to the make_elasticFEstrain_delta function as the argument for strain_index, the function returns all the strain fields. Let's also take a look at the $\varepsilon_{yy}$ and $\varepsilon_{xy}$ strain fields.
Calibrating First-Order Influence Coefficients
Now that we have the delta microstructures and their strain fields, we will calibrate the influence coefficients by creating an instance of the MKSLocalizationModel class. Because we are going to calibrate the influence coefficients with delta microstructures, we can create an instance of PrimitiveBasis with n_states equal to 3, and use it to create an instance of MKSLocalizationModel. The delta microstructures and their strain fields will then be passed to the fit method.
End of explanation
model.fit(X_delta, strains_delta)
Explanation: Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first-order influence coefficients.
End of explanation
from pymks.tools import draw_coeff
draw_coeff(model.coeff)
Explanation: That's it, the influence coefficients have been calibrated. Let's take a look at them.
End of explanation
from pymks.datasets import make_elastic_FE_strain_random
np.random.seed(101)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0] , strain[0])
Explanation: The influence coefficients for $l=0$ and $l = 1$ have a Gaussian-like shape, while the influence coefficients for $l=2$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but are equally as important. They are equivalent to the constant term in multiple linear regression with categorical variables.
Predict of the Strain Field for a Random Microstructure
Let's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random two-phase microstructure and compare it with the results from a finite element simulation.
The make_elasticFEstrain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.
End of explanation
strain_pred = model.predict(X)
Explanation: Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.
Now, to get the strain field from the MKSLocalizationModel, just pass the same microstructure to the predict method.
End of explanation
from pymks.tools import draw_strains_compare
draw_strains_compare(strain[0], strain_pred[0])
Explanation: Finally let's compare the results from finite element simulation and the MKS model.
End of explanation
from pymks.tools import draw_differences
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
Explanation: Let's plot the difference between the two strain fields.
End of explanation
m = 3 * n
size = (m, m)
print(size)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0], strain[0])
Explanation: The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.
Resizing the Coefficients to Use on Larger Microstructures
The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], although the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new, larger random microstructure and its strain field.
End of explanation
model.resize_coeff(X[0].shape)
Explanation: The influence coefficients that have already been calibrated on $n$ by $n$ delta microstructures need to be resized to match the shape of the new, larger $m$ by $m$ microstructure that we want to compute the strain field for. This can be done by passing the shape of the new, larger microstructure into the resize_coeff method.
End of explanation
draw_coeff(model.coeff)
Explanation: Let's now take a look at the resized influence coefficients.
End of explanation
strain_pred = model.predict(X)
draw_strains_compare(strain[0], strain_pred[0])
Explanation: Because the coefficients have been resized, they will no longer work for the original $n$ by $n$ microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. Just like before, pass the microstructure as the argument of the predict method to get the strain field.
End of explanation
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
Explanation: Again, let's plot the difference between the two strain fields.
End of explanation
<END_TASK> |
184 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The selectors module provides a platform-independent abstraction layer on top of the platform-specific I/O monitoring functions in select.
Operating Model
The APIs in selectors are event-based, similar to poll() from select. There are several implementations and the module automatically sets the alias DefaultSelector to refer to the most efficient one for the current system configuration.
A selector object provides methods for specifying what events to look for on a socket, and then lets the caller wait for events in a platform-independent way. Registering interest in an event creates a SelectorKey, which holds the socket, information about the events of interest, and optional application data. The owner of the selector calls its select() method to learn about events. The return value is a sequence of key objects and a bitmask indicating what events have occurred. A program using a selector should repeatedly call select(), then handle the events appropriately.
Echo Server
The echo server example below uses the application data in the SelectorKey to register a callback function to be invoked on the new event. The main loop gets the callback from the key and passes the socket and event mask to it. As the server starts, it registers the accept() function to be called for read events on the main server socket. Accepting the connection produces a new socket, which is then registered with the read() function as a callback for read events.
Step1: When read() receives no data from the socket, it interprets the read event as the other side of the connection being closed instead of sending data. It removes the socket from the selector and closes it. In order to avoid an infinite loop, this server also shuts itself down after it has finished communicating with a single client.
Echo Client
The echo client example below processes all of the I/O events in the main loop, instead of using callbacks. It sets up the selector to report read events on the socket, and to report when the socket is ready to send data. Because it is looking at two types of events, the client must check which occurred by examining the mask value. After all of its outgoing data has been sent, it changes the selector configuration to only report when there is data to read. | <ASSISTANT_TASK:>
Python Code:
# %load selectors_echo_server.py
import selectors
import socket
mysel = selectors.DefaultSelector()
keep_running = True
def read(connection, mask):
"Callback for read events"
global keep_running
client_address = connection.getpeername()
print('read({})'.format(client_address))
data = connection.recv(1024)
if data:
# A readable client socket has data
print(' received {!r}'.format(data))
connection.sendall(data)
else:
# Interpret empty result as closed connection
print(' closing')
mysel.unregister(connection)
connection.close()
# Tell the main loop to stop
keep_running = False
def accept(sock, mask):
"Callback for new connections"
new_connection, addr = sock.accept()
print('accept({})'.format(addr))
new_connection.setblocking(False)
mysel.register(new_connection, selectors.EVENT_READ, read)
server_address = ('localhost', 10000)
print('starting up on {} port {}'.format(*server_address))
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(False)
server.bind(server_address)
server.listen(5)
mysel.register(server, selectors.EVENT_READ, accept)
while keep_running:
print('waiting for I/O')
for key, mask in mysel.select(timeout=1):
callback = key.data
callback(key.fileobj, mask)
print('shutting down')
mysel.close()
Explanation: The selectors module provides a platform-independent abstraction layer on top of the platform-specific I/O monitoring functions in select.
Operating Model
The APIs in selectors are event-based, similar to poll() from select. There are several implementations and the module automatically sets the alias DefaultSelector to refer to the most efficient one for the current system configuration.
A selector object provides methods for specifying what events to look for on a socket, and then lets the caller wait for events in a platform-independent way. Registering interest in an event creates a SelectorKey, which holds the socket, information about the events of interest, and optional application data. The owner of the selector calls its select() method to learn about events. The return value is a sequence of key objects and a bitmask indicating what events have occurred. A program using a selector should repeatedly call select(), then handle the events appropriately.
Echo Server
The echo server example below uses the application data in the SelectorKey to register a callback function to be invoked on the new event. The main loop gets the callback from the key and passes the socket and event mask to it. As the server starts, it registers the accept() function to be called for read events on the main server socket. Accepting the connection produces a new socket, which is then registered with the read() function as a callback for read events.
End of explanation
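As a side note, the register()/select()/callback mechanism the server relies on can be exercised in isolation with a socket pair (a minimal sketch, independent of the echo scripts):
import selectors
import socket

sel = selectors.DefaultSelector()
left, right = socket.socketpair()

def on_readable(conn, mask):
    # The callback receives the registered socket and the event mask.
    print('callback received:', conn.recv(1024))

right.setblocking(False)
sel.register(right, selectors.EVENT_READ, data=on_readable)

left.sendall(b'ping')                  # makes "right" readable
for key, mask in sel.select(timeout=1):
    key.data(key.fileobj, mask)        # invoke the callback stored in the SelectorKey

sel.unregister(right)
left.close()
right.close()
sel.close()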
# %load selectors_echo_client.py
import selectors
import socket
mysel = selectors.DefaultSelector()
keep_running = True
outgoing = [
b'It will be repeated.',
b'This is the message. ',
]
bytes_sent = 0
bytes_received = 0
# Connecting is a blocking operation, so call setblocking()
# after it returns.
server_address = ('localhost', 10000)
print('connecting to {} port {}'.format(*server_address))
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(server_address)
sock.setblocking(False)
# Set up the selector to watch for when the socket is ready
# to send data as well as when there is data to read.
mysel.register(
sock,
selectors.EVENT_READ | selectors.EVENT_WRITE,
)
while keep_running:
print('waiting for I/O')
for key, mask in mysel.select(timeout=1):
connection = key.fileobj
client_address = connection.getpeername()
print('client({})'.format(client_address))
if mask & selectors.EVENT_READ:
print(' ready to read')
data = connection.recv(1024)
if data:
# A readable client socket has data
print(' received {!r}'.format(data))
bytes_received += len(data)
# Interpret empty result as closed connection,
# and also close when we have received a copy
# of all of the data sent.
keep_running = not (
data or
(bytes_received and
(bytes_received == bytes_sent))
)
if mask & selectors.EVENT_WRITE:
print(' ready to write')
if not outgoing:
# We are out of messages, so we no longer need to
# write anything. Change our registration to let
# us keep reading responses from the server.
print(' switching to read-only')
mysel.modify(sock, selectors.EVENT_READ)
else:
# Send the next message.
next_msg = outgoing.pop()
print(' sending {!r}'.format(next_msg))
sock.sendall(next_msg)
bytes_sent += len(next_msg)
print('shutting down')
mysel.unregister(connection)
connection.close()
mysel.close()
Explanation: When read() receives no data from the socket, it interprets the read event as the other side of the connection being closed instead of sending data. It removes the socket from the selector and closes it. In order to avoid an infinite loop, this server also shuts itself down after it has finished communicating with a single client.
Echo Client
The echo client example below processes all of the I/O events in the main loop, instead of using callbacks. It sets up the selector to report read events on the socket, and to report when the socket is ready to send data. Because it is looking at two types of events, the client must check which occurred by examining the mask value. After all of its outgoing data has been sent, it changes the selector configuration to only report when there is data to read.
End of explanation
<END_TASK> |
185 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Face verification
Goals
train a network for face similarity using triplet loss
work data augmentation, generators and hard negative mining
Dataset
We will be using the Labeled Faces in the Wild (LFW) dataset, available openly at http://vis-www.cs.umass.edu/lfw/
Step1: Processing the dataset
This part is similar to previous notebook on siamese nets, you may just run the cells to get the necessary inputs
The dataset consists of folders corresponding to each identity. The folder name is the name of the person.
We map each class (identity) to an integer id, and build mappings as dictionaries name_to_classid and classid_to_name
Set USE_SUBSET to False if you want to use the full dataset (GPU only!)
Step2: In each directory, there is one or more images corresponding to the identity. We map each image path with an integer id, then build a few dictionaries
Step3: The following histogram shows the number of images per class
Step4: The following function builds a large number of positives/negatives pairs (train and test)
Triplet loss
In the triplet loss model, we'll define 3 inputs $(a,+,-)$ for anchor, positive and negative.
Usage and differences with siamese nets
We relax the hard constraint that all data points of the same class should be squashed to a single point. Rather, image representations can live on a manifold, as long as they are closer to images of the same class than to images of different classes
On large datasets, with careful hyperparameters, triplets and more advanced metric learning methods beat siamese nets
Outline
We will build positive pairs, and find a way to sample negatives to obtain triplets
Note that we don't need outputs anymore (positive vs negative), we're just building triplets
Step5: We end up with 1177 different pairs, which we'll append with a random sample (as negative) in the generator
Step6: As you can see, choosing randomly the negatives can be inefficient. For example it's reasonnable to think a old man will be a too easy negative if the anchor is a young woman.
Step7: Triplet Model
The loss of the triplet model is as follows
Step8: Shared Convolutional Network
You may as well build your own
Step9: Triplet Model
Exercise
Build the triplet model, using the skeleton below using the OOP Keras API
First run the 3 inputs through the shared conv
Then compute positive and negative similarities
Then call the triplet loss function using a Lambda layer
Step10: Warning
- You will need to run on GPU if you're on the large dataset
- On the small dataset, the model sometimes takes a few epochs before starting to decrease the loss
- This can be due to the init, learning rate, or too much dropout / augmentation
Step11: Exercise
What do you observe?
Try to make changes to the model / parameters to get better convergence; you should be able to get much better results than with the ConvNet we gave you
Try to add data augmentation, or increase the size of the training set
You might want to be on GPU for testing several architectures, even on the small set
Step12: Displaying similar images
Step13: Test Recall@k model
for each test class with > 1 image, pick image at random, and compute similarity with all other images
compute recall @k
Step15: Hard Negative Mining
We'll mine negatives based on the previous epoch's model. To do so, we'll compute similarities with all anchors, and sample among the most similar negatives, but not ones that are too difficult
Step16: Note that we are re-creating a HardTripletGenerator at each epoch. By doing so, we re-compute the new hard negatives with the newly updated model. On larger scale this operation can take a lot of time, and could be done every X epochs (X > 1).
Step17: You should see that the train loss is barely improving while the validation loss is decreasing. Remember that we are feeding the hardest triplets to the model!
Step18: Let's run the improved convnet SharedConv2 without negative hardming in order to have a fair comparison | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# If you have a GPU, execute the following lines to restrict the amount of VRAM used:
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
print("Using GPU {}".format(gpus[0]))
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
else:
print("Using CPU")
import os
import random
import itertools
import tensorflow.keras.backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Concatenate, Lambda, Dot
from tensorflow.keras.layers import Conv2D, MaxPool2D, GlobalAveragePooling2D, Flatten, Dropout
from tensorflow.keras import optimizers
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
Explanation: Face verification
Goals
train a network for face similarity using triplet loss
work data augmentation, generators and hard negative mining
Dataset
We will be using Labeled Faces in the Wild (LFW) dataset available openly at http://vis-www.cs.umass.edu/lfw/
For computing purposes, we'll only restrict ourselves to a subpart of the dataset. You're welcome to train on the whole dataset on GPU, by changing the PATH in the following cells, and in data download
We will also load pretrained weights
End of explanation
PATH = "lfw/lfw-deepfunneled/"
USE_SUBSET = True
dirs = sorted(os.listdir(PATH))
if USE_SUBSET:
dirs = dirs[:500]
name_to_classid = {d:i for i,d in enumerate(dirs)}
classid_to_name = {v:k for k,v in name_to_classid.items()}
num_classes = len(name_to_classid)
print("number of classes: "+str(num_classes))
Explanation: Processing the dataset
This part is similar to previous notebook on siamese nets, you may just run the cells to get the necessary inputs
The dataset consists of folders corresponding to each identity. The folder name is the name of the person.
We map each class (identity) to an integer id, and build mappings as dictionaries name_to_classid and classid_to_name
Set USE_SUBSET to False if you want to use the full dataset (GPU only!)
End of explanation
# read all directories
img_paths = {c:[directory + "/" + img for img in sorted(os.listdir(PATH+directory))]
for directory,c in name_to_classid.items()}
# retrieve all images
all_images_path = []
for img_list in img_paths.values():
all_images_path += img_list
# map to integers
path_to_id = {v:k for k,v in enumerate(all_images_path)}
id_to_path = {v:k for k,v in path_to_id.items()}
# build mappings between images and class
classid_to_ids = {k:[path_to_id[path] for path in v] for k,v in img_paths.items()}
id_to_classid = {v:c for c,imgs in classid_to_ids.items() for v in imgs}
Explanation: In each directory, there is one or more images corresponding to the identity. We map each image path with an integer id, then build a few dictionaries:
- mappings from imagepath and image id: path_to_id and id_to_path
- mappings from class id to image ids: classid_to_ids and id_to_classid
End of explanation
from skimage.io import imread
from skimage.transform import resize
def resize100(img):
return resize(img, (100, 100), preserve_range=True, mode='reflect', anti_aliasing=True)[20:80,20:80,:]
def open_all_images(id_to_path):
all_imgs = []
for path in id_to_path.values():
all_imgs += [np.expand_dims(resize100(imread(PATH+path)),0)]
return np.vstack(all_imgs)
all_imgs = open_all_images(id_to_path)
mean = np.mean(all_imgs, axis=(0,1,2))
all_imgs -= mean
all_imgs.shape, str(all_imgs.nbytes / 1e6) + "Mo"
Explanation: The following histogram shows the number of images per class: there are many classes with only one image.
These classes are useful as negatives, only as we can't make a positive pair with them.
Now that we have a way to compute the pairs, let's open all the possible images. It will expand all the images into RAM memory. There are more than 1000 images, so 100Mo of RAM will be used, which will not cause any issue.
Note: if you plan on opening more images, you should not open them all at once, and rather build a generator
End of explanation
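To illustrate the note above, a lazy-loading variant could look roughly like this (a sketch only, reusing the resize100, id_to_path, PATH and mean objects defined in this notebook):
import numpy as np
import tensorflow as tf
from skimage.io import imread

class LazyImageSequence(tf.keras.utils.Sequence):
    # Sketch: load and normalize images batch by batch instead of keeping
    # the whole dataset in RAM. Relies on resize100, id_to_path, PATH and
    # mean defined earlier in this notebook.
    def __init__(self, img_ids, batch_size=32):
        self.img_ids = list(img_ids)
        self.batch_size = batch_size

    def __len__(self):
        return len(self.img_ids) // self.batch_size

    def __getitem__(self, idx):
        batch_ids = self.img_ids[idx * self.batch_size:(idx + 1) * self.batch_size]
        imgs = np.vstack([np.expand_dims(resize100(imread(PATH + id_to_path[i])), 0)
                          for i in batch_ids])
        return imgs - mean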
def build_pos_pairs_for_id(classid, max_num=50):
imgs = classid_to_ids[classid]
if len(imgs) == 1:
return []
pos_pairs = list(itertools.combinations(imgs, 2))
random.shuffle(pos_pairs)
return pos_pairs[:max_num]
def build_positive_pairs(class_id_range):
listX1 = []
listX2 = []
for class_id in class_id_range:
pos = build_pos_pairs_for_id(class_id)
for pair in pos:
listX1 += [pair[0]]
listX2 += [pair[1]]
perm = np.random.permutation(len(listX1))
return np.array(listX1)[perm], np.array(listX2)[perm]
split_num = int(num_classes * 0.8)
Xa_train, Xp_train = build_positive_pairs(range(0, split_num))
Xa_test, Xp_test = build_positive_pairs(range(split_num, num_classes-1))
# Gather the ids of all images that are used for train and test
all_img_train_idx = list(set(Xa_train) | set(Xp_train))
all_img_test_idx = list(set(Xa_test) | set(Xp_test))
Explanation: The following function builds a large number of positives/negatives pairs (train and test)
Triplet loss
In the triplet loss model, we'll define 3 inputs $(a,+,-)$ for anchor, positive and negative.
Usage and differences with siamese nets
We relax the hard constraint that all data points of the same class should be squashed to a single point. Rather, image representations can live on a manifold, as long as they are closer to images of the same class than to images of different classes
On large datasets, with careful hyperparameters, triplets and more advanced metric learning methods beat siamese nets
Outline
We will build positive pairs, and find a way to sample negatives to obtain triplets
Note that we don't need outputs anymore (positive vs negative), we're just building triplets
End of explanation
Xa_train.shape, Xp_train.shape
from imgaug import augmenters as iaa
seq = iaa.Sequential([
iaa.Fliplr(0.5), # horizontally flip 50% of the images
])
class TripletGenerator(tf.keras.utils.Sequence):
def __init__(self, Xa_train, Xp_train, batch_size, all_imgs, neg_imgs_idx):
self.cur_img_index = 0
self.cur_img_pos_index = 0
self.batch_size = batch_size
self.imgs = all_imgs
self.Xa = Xa_train # Anchors
self.Xp = Xp_train
self.cur_train_index = 0
self.num_samples = Xa_train.shape[0]
self.neg_imgs_idx = neg_imgs_idx
def __len__(self):
return self.num_samples // self.batch_size
def __getitem__(self, batch_index):
low_index = batch_index * self.batch_size
high_index = (batch_index + 1) * self.batch_size
imgs_a = self.Xa[low_index:high_index] # Anchors
imgs_p = self.Xp[low_index:high_index] # Positives
imgs_n = random.sample(self.neg_imgs_idx, imgs_a.shape[0]) # Negatives
imgs_a = seq.augment_images(self.imgs[imgs_a])
imgs_p = seq.augment_images(self.imgs[imgs_p])
imgs_n = seq.augment_images(self.imgs[imgs_n])
        # We also return a null vector as a placeholder for the output, but it won't be needed:
return ([imgs_a, imgs_p, imgs_n], np.zeros(shape=(imgs_a.shape[0])))
batch_size = 128
gen = TripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx)
len(all_img_test_idx), len(gen)
[xa, xp, xn], y = gen[0]
xa.shape, xp.shape, xn.shape
plt.figure(figsize=(16, 9))
for i in range(5):
plt.subplot(3, 5, i + 1)
plt.title("anchor")
plt.imshow((xa[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 6)
plt.title("positive")
plt.imshow((xp[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 11)
plt.title("negative")
plt.imshow((xn[i] + mean) / 255)
plt.axis('off')
plt.show()
Explanation: We end up with 1177 different pairs, which we'll append with a random sample (as negative) in the generator
End of explanation
gen_test = TripletGenerator(Xa_test, Xp_test, 32, all_imgs, all_img_test_idx)
len(gen_test)
Explanation: As you can see, choosing the negatives randomly can be inefficient. For example, it's reasonable to think an old man will be too easy a negative if the anchor is a young woman.
End of explanation
# Build a loss which doesn't take into account the y_true, as
# we'll be passing only 0
def identity_loss(y_true, y_pred):
return K.mean(y_pred - 0 * y_true)
# The real loss is here
def cosine_triplet_loss(X, margin=0.5):
positive_sim, negative_sim = X
# batch loss
losses = K.maximum(0.0, negative_sim - positive_sim + margin)
return K.mean(losses)
Explanation: Triplet Model
The loss of the triplet model is as follows:
$$ max(0, ||x_a - x_p||_2 - ||x_a - x_n||_2 + \alpha)$$
We'll be using cosine similarities instead of euclidean distances (seems to be working a bit better in that case), so the loss becomes:
$$ max(0, cos(x_a, x_n) - cos(x_a, x_p) + \alpha)$$
End of explanation
class SharedConv(tf.keras.Model):
def __init__(self):
super().__init__(self, name="sharedconv")
self.conv1 = Conv2D(16, 3, activation="relu", padding="same")
self.conv2 = Conv2D(16, 3, activation="relu", padding="same")
self.pool1 = MaxPool2D((2,2)) # 30,30
self.conv3 = Conv2D(32, 3, activation="relu", padding="same")
self.conv4 = Conv2D(32, 3, activation="relu", padding="same")
self.pool2 = MaxPool2D((2,2)) # 15,15
self.conv5 = Conv2D(64, 3, activation="relu", padding="same")
self.conv6 = Conv2D(64, 3, activation="relu", padding="same")
self.pool3 = MaxPool2D((2,2)) # 8,8
self.conv7 = Conv2D(64, 3, activation="relu", padding="same")
self.conv8 = Conv2D(32, 3, activation="relu", padding="same")
self.flatten = Flatten()
self.dropout1 = Dropout(0.2)
self.fc1 = Dense(40, activation="tanh")
self.dropout2 = Dropout(0.2)
self.fc2 = Dense(64)
def call(self, inputs):
x = self.pool1(self.conv2(self.conv1(inputs)))
x = self.pool2(self.conv4(self.conv3(x)))
x = self.pool3(self.conv6(self.conv5(x)))
x = self.flatten(self.conv8(self.conv7(x)))
x = self.fc1(self.dropout1(x))
return self.fc2(self.dropout2(x))
shared_conv = SharedConv()
Explanation: Shared Convolutional Network
You may as well build your own
End of explanation
class TripletNetwork(tf.keras.Model):
def __init__(self, shared_conv):
super().__init__(self, name="tripletnetwork")
# TODO
def call(self, inputs):
pass # TODO
model_triplet = TripletNetwork(shared_conv)
model_triplet.compile(loss=identity_loss, optimizer="rmsprop")
# %load solutions/triplet.py
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
best_model_fname = "triplet_checkpoint_b2.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
save_best_only=True, verbose=1)
Explanation: Triplet Model
Exercise
Build the triplet model, using the skeleton below using the OOP Keras API
First run the 3 inputs through the shared conv
Then compute positive and negative similarities
Then call the triplet loss function using a Lambda layer
End of explanation
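For reference, one possible implementation of the exercise above looks roughly like this (a sketch; the reference solution loaded from solutions/triplet.py may differ):
class TripletNetworkSketch(tf.keras.Model):
    # Possible solution sketch: embed the three inputs with the shared convnet,
    # compute cosine similarities with a normalized Dot layer, and wrap the
    # triplet loss in a Lambda layer so the model directly outputs the loss.
    def __init__(self, shared_conv):
        super().__init__(name="tripletnetworksketch")
        self.shared_conv = shared_conv
        self.dot = Dot(axes=-1, normalize=True)   # cosine similarity
        self.loss_layer = Lambda(cosine_triplet_loss)

    def call(self, inputs):
        anchor, positive, negative = inputs
        anchor_emb = self.shared_conv(anchor)
        positive_emb = self.shared_conv(positive)
        negative_emb = self.shared_conv(negative)
        pos_sim = self.dot([anchor_emb, positive_emb])
        neg_sim = self.dot([anchor_emb, negative_emb])
        return self.loss_layer([pos_sim, neg_sim])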
history = model_triplet.fit(gen,
epochs=10,
validation_data = gen_test,
callbacks=[best_model_cb])
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 0.5)
plt.legend(loc='best')
plt.title('Loss');
model_triplet.load_weights("triplet_checkpoint_b2.h5")
Explanation: Warning
- You will need to run on GPU if you're on the large dataset
- On the small dataset, the model sometimes takes a few epochs before starting to decrease the loss
- This can be due to the init, learning rate, or too much dropout / augmentation
End of explanation
# You may load this model
# Trained on triplets but with larger dataset
# Far from perfect !
# model_triplet.load_weights("triplet_pretrained.h5")
Explanation: Exercise
What do you observe?
Try to make changes to the model / parameters to get better convergence; you should be able to get much better results than with the ConvNet we gave you
Try to add data augmentation, or increase the size of the training set
You might want to be on GPU for testing several architectures, even on the small set
End of explanation
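As a starting point for the data-augmentation suggestion above, the imgaug pipeline defined earlier could be extended along these lines (illustrative, untuned parameters):
# Sketch of a stronger augmentation pipeline; geometric transforms and blur
# behave well on the mean-centered float images used in this notebook.
stronger_seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                 # horizontal flips
    iaa.Affine(rotate=(-10, 10), scale=(0.9, 1.1),   # small rotations / zooms
               mode='edge'),
    iaa.Sometimes(0.3, iaa.GaussianBlur(sigma=(0, 1.0))),
])
# To try it, replace seq with stronger_seq inside the generators above.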
emb = shared_conv.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
pixelwise = np.reshape(all_imgs, (all_imgs.shape[0], 60*60*3))
def most_sim(idx, topn=5, mode="cosine"):
x = emb[idx]
if mode == "cosine":
x = x / np.linalg.norm(x)
sims = np.dot(emb, x)
ids = np.argsort(sims)[::-1]
return [(id,sims[id]) for id in ids[:topn]]
elif mode == "euclidean":
dists = np.linalg.norm(emb - x, axis=-1)
ids = np.argsort(dists)
return [(id,dists[id]) for id in ids[:topn]]
else:
dists = np.linalg.norm(pixelwise - pixelwise[idx], axis=-1)
ids = np.argsort(dists)
return [(id,dists[id]) for id in ids[:topn]]
def display(img):
img = img.astype('uint8')
plt.imshow(img)
plt.axis('off')
plt.show()
interesting_classes = list(filter(lambda x: len(x[1])>4, classid_to_ids.items()))
class_idx = random.choice(interesting_classes)[0]
print(class_idx)
img_idx = random.choice(classid_to_ids[class_idx])
for id, sim in most_sim(img_idx):
display(all_imgs[id] + mean)
print((classid_to_name[id_to_classid[id]], id, sim))
Explanation: Displaying similar images
End of explanation
test_ids = []
for class_id in range(split_num, num_classes-1):
img_ids = classid_to_ids[class_id]
if len(img_ids) > 1:
test_ids += img_ids
print(len(test_ids))
len([len(classid_to_ids[x]) for x in list(range(split_num, num_classes-1)) if len(classid_to_ids[x])>1])
def recall_k(k=10, mode="embedding"):
num_found = 0
for img_idx in test_ids:
image_class = id_to_classid[img_idx]
found_classes = []
if mode == "embedding":
found_classes = [id_to_classid[x] for (x, score) in most_sim(img_idx, topn=k+1)[1:]]
elif mode == "random":
found_classes = [id_to_classid[x] for x in random.sample(
list(set(all_img_test_idx + all_img_train_idx) - {img_idx}), k)]
elif mode == "image":
found_classes = [id_to_classid[x] for (x, score) in most_sim(img_idx, topn=k+1, mode="image")[1:]]
if image_class in found_classes:
num_found += 1
return num_found / len(test_ids)
recall_k(k=10), recall_k(k=10, mode="random")
Explanation: Test Recall@k model
for each test class with > 1 image, pick image at random, and compute similarity with all other images
compute recall @k: is the correct class within the k first images
End of explanation
# Naive way to compute all similarities between all images. May be optimized!
def build_similarities(conv, all_imgs):
embs = conv.predict(all_imgs)
embs = embs / np.linalg.norm(embs, axis=-1, keepdims=True)
all_sims = np.dot(embs, embs.T)
return all_sims
def intersect(a, b):
return list(set(a) & set(b))
def build_negatives(anc_idxs, pos_idxs, similarities, neg_imgs_idx, num_retries=20):
# If no similarities were computed, return a random negative
if similarities is None:
return random.sample(neg_imgs_idx,len(anc_idxs))
final_neg = []
# for each positive pair
for (anc_idx, pos_idx) in zip(anc_idxs, pos_idxs):
anchor_class = id_to_classid[anc_idx]
#positive similarity
sim = similarities[anc_idx, pos_idx]
# find all negatives which are semi(hard)
possible_ids = np.where((similarities[anc_idx] + 0.25) > sim)[0]
possible_ids = intersect(neg_imgs_idx, possible_ids)
appended = False
for iteration in range(num_retries):
if len(possible_ids) == 0:
break
idx_neg = random.choice(possible_ids)
if id_to_classid[idx_neg] != anchor_class:
final_neg.append(idx_neg)
appended = True
break
if not appended:
final_neg.append(random.choice(neg_imgs_idx))
return final_neg
class HardTripletGenerator(tf.keras.utils.Sequence):
def __init__(self, Xa_train, Xp_train, batch_size, all_imgs, neg_imgs_idx, conv):
self.batch_size = batch_size
self.imgs = all_imgs
self.Xa = Xa_train
self.Xp = Xp_train
self.num_samples = Xa_train.shape[0]
self.neg_imgs_idx = neg_imgs_idx
if conv:
print("Pre-computing similarities...", end=" ")
self.similarities = build_similarities(conv, self.imgs)
print("Done!")
else:
self.similarities = None
def __len__(self):
return self.num_samples // self.batch_size
def __getitem__(self, batch_index):
low_index = batch_index * self.batch_size
high_index = (batch_index + 1) * self.batch_size
imgs_a = self.Xa[low_index:high_index]
imgs_p = self.Xp[low_index:high_index]
imgs_n = build_negatives(imgs_a, imgs_p, self.similarities, self.neg_imgs_idx)
imgs_a = seq.augment_images(self.imgs[imgs_a])
imgs_p = seq.augment_images(self.imgs[imgs_p])
imgs_n = seq.augment_images(self.imgs[imgs_n])
return ([imgs_a, imgs_p, imgs_n], np.zeros(shape=(imgs_a.shape[0])))
batch_size = 128
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, shared_conv)
len(gen_hard)
[xa, xp, xn], y = gen_hard[0]
xa.shape, xp.shape, xn.shape
plt.figure(figsize=(16, 9))
for i in range(5):
plt.subplot(3, 5, i + 1)
plt.title("anchor")
plt.imshow((xa[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 6)
plt.title("positive")
plt.imshow((xp[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 11)
plt.title("negative")
plt.imshow((xn[i] + mean) / 255)
plt.axis('off')
plt.show()
class SharedConv2(tf.keras.Model):
    """Improved version of SharedConv"""
def __init__(self):
super().__init__(self, name="sharedconv2")
self.conv1 = Conv2D(16, 3, activation="relu", padding="same")
self.conv2 = Conv2D(16, 3, activation="relu", padding="same")
self.pool1 = MaxPool2D((2,2)) # 30,30
self.conv3 = Conv2D(32, 3, activation="relu", padding="same")
self.conv4 = Conv2D(32, 3, activation="relu", padding="same")
self.pool2 = MaxPool2D((2,2)) # 15,15
self.conv5 = Conv2D(64, 3, activation="relu", padding="same")
self.conv6 = Conv2D(64, 3, activation="relu", padding="same")
self.pool3 = MaxPool2D((2,2)) # 8,8
self.conv7 = Conv2D(64, 3, activation="relu", padding="same")
self.conv8 = Conv2D(32, 3, activation="relu", padding="same")
self.flatten = Flatten()
self.dropout1 = Dropout(0.2)
self.fc1 = Dense(64)
def call(self, inputs):
x = self.pool1(self.conv2(self.conv1(inputs)))
x = self.pool2(self.conv4(self.conv3(x)))
x = self.pool3(self.conv6(self.conv5(x)))
x = self.flatten(self.conv8(self.conv7(x)))
return self.fc1(self.dropout1(x))
tf.random.set_seed(1337)
shared_conv2 = SharedConv2()
model_triplet2 = TripletNetwork(shared_conv2)
opt = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model_triplet2.compile(loss=identity_loss, optimizer=opt)
gen_test = TripletGenerator(Xa_test, Xp_test, 32, all_imgs, all_img_test_idx)
len(gen_test)
# At first epoch we don't generate hard triplets so that our model can learn the easy examples first
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, None)
Explanation: Hard Negative Mining
We'll mine negatives based on the previous epoch's model. To do so, we'll compute similarities with all anchors, and sample among the most similar negatives, but not ones that are too difficult
End of explanation
loss, val_loss = [], []
best_model_fname_hard = "triplet_checkpoint_hard.h5"
best_val_loss = float("inf")
nb_epochs = 10
for epoch in range(nb_epochs):
print("built new hard generator for epoch " + str(epoch))
history = model_triplet2.fit(
gen_hard,
epochs=1,
validation_data = gen_test)
loss.extend(history.history["loss"])
val_loss.extend(history.history["val_loss"])
if val_loss[-1] < best_val_loss:
print("Saving best model")
model_triplet2.save_weights(best_model_fname_hard)
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, shared_conv2)
plt.plot(loss, label='train')
plt.plot(val_loss, label='validation')
plt.ylim(0, 0.5)
plt.legend(loc='best')
plt.title('Loss');
Explanation: Note that we are re-creating a HardTripletGenerator at each epoch. By doing so, we re-compute the new hard negatives with the newly updated model. At larger scale this operation can take a lot of time, and could be done every X epochs (X > 1).
End of explanation
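A sketch of that variant, refreshing the hard negatives only every few epochs, could look like this (illustrative value of X; not run here):
# Sketch: rebuild the hard-negative generator only every refresh_every epochs.
refresh_every = 3
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size,
                                all_imgs, all_img_train_idx, None)  # easy negatives first
for epoch in range(nb_epochs):
    if epoch > 0 and epoch % refresh_every == 0:
        gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size,
                                        all_imgs, all_img_train_idx, shared_conv2)
    model_triplet2.fit(gen_hard, epochs=1, validation_data=gen_test)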
emb = shared_conv2.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
recall_k(k=10), recall_k(k=10, mode="random")
Explanation: You should see that the train loss is barely improving while the validation loss is decreasing. Remember that we are feeding the hardest triplets to the model!
End of explanation
shared_conv2_nohard = SharedConv2()
model_triplet2_nohard = TripletNetwork(shared_conv2_nohard)
opt = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model_triplet2_nohard.compile(loss=identity_loss, optimizer=opt)
gen_nohard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, None)
history = model_triplet2_nohard.fit(
    gen_nohard,
    epochs=10,
    validation_data=gen_test)
plt.plot(loss, label='train (hardmining)')
plt.plot(val_loss, label='validation (hardmining)')
plt.plot(history.history["loss"], label="train")
plt.plot(history.history["val_loss"], label="validation")
plt.ylim(0, 0.5)
plt.legend(loc='best')
plt.title('Loss hardmining vs no hardmining');
emb = shared_conv2_nohard.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
recall_k(k=10), recall_k(k=10, mode="random")
Explanation: Let's run the improved convnet SharedConv2 without negative hard mining in order to have a fair comparison:
End of explanation
<END_TASK> |
186 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
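For instance, a filled-in call would look like the following (hypothetical placeholder values, kept as a comment so nothing is set by accident):
# Hypothetical example only (placeholder name and email):
# DOC.set_author("Jane Doe", "jane.doe@example.org")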
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
<END_TASK> |
187 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Logistic Regression with L2 regularization
In this notebook, you will implement your own logistic regression classifier with L2 regularization. You will do the following
Step1: Unzipping files with Amazon Baby Products Reviews
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
Step2: Loading the products data
We will use a dataset consisting of baby product reviews on Amazon.com.
Step3: Now, let us see a preview of what the dataset looks like.
Step4: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step5: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step6: Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file
Step7: Now, we will perform 2 simple data transformations
Step8: Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
Step9: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note
Step10: The products DataFrame now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step11: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set.
Note
Step12: Using the list of the training data indices and the validation data indices to get a DataFrame with the training data and a DataFrame with the validation data.
Step13: Convert DataFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from a DataFrame and converts them into a NumPy array. Two arrays are returned
Step14: We convert both the training and validation sets into NumPy arrays.
Warning
Step15: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as
Step16: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is
Step17: Quiz question
Step18: Quiz question
Step19: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
Step20: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
Step21: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step22: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Step23: Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Step24: Plotting the Coefficient Path with Increase in L2 Penalty
Let us observe the effect of increasing L2 penalty on the 10 words just selected.
First, let's put the 6 L2 penalty values we considered in a list.
Step25: Next, let's put all the words we considered as features for the classification model plus the intercept features
Step26: Now, we will fill-in 2 dictionaries, one with the 5 positive words as the index for the dictionary and the other with the 5 negative words as the index for the dictionary. For each index (word), we fill in a list which has the coefficient value of the index (word) for the 6 different L2 penalties we considered.
Step27: Plotting coefficient path for positive words
Step28: Plotting coefficient path for negative words
Step29: The following 2 questions relate to the 2 figures above.
Quiz Question
Step30: Step 2
Step31: Now, getting the predictions for the training data and the validation data for the 6 L2 penalties we considered.
Step32: Step 3
Step33: Creating a list of tuples with the entries as (accuracy, l2_penalty) for the training set and the validation set.
Step34: Quiz question
Step35: Quiz question | <ASSISTANT_TASK:>
Python Code:
import os
import zipfile
import string
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Logistic Regression with L2 regularization
In this notebook, you will implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Importing Libraries
End of explanation
# Put files in current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filename of unzipped file
unzipped_file = 'amazon_baby_subset.csv'
# If unzipped file not in files_list, unzip the file
if unzipped_file not in files_list:
zip_file = unzipped_file + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
    unzipping.close()
Explanation: Unzipping files with Amazon Baby Products Reviews
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
products = pd.read_csv("amazon_baby_subset.csv")
Explanation: Loading the products data
We will use a dataset consisting of baby product reviews on Amazon.com.
End of explanation
products.head()
Explanation: Now, let us see a preview of what the dataset looks like.
End of explanation
products['sentiment']
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
Explanation: Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
products["review"] = products["review"].fillna("")
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
Before removing the punctuation from the strings in the review column, we will fill all NA values with empty string.
End of explanation
products["review_clean"] = products["review"].str.translate(None, string.punctuation)
Explanation: Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
End of explanation
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
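# A vectorized alternative (illustrative sketch only, not the approach used by this
# assignment): the CountVectorizer imported above can compute all 193 counts in one pass.
# Its default tokenizer lowercases and drops 1-character tokens, so the counts can differ
# slightly from the str.split() approach above.
word_count_matrix = CountVectorizer(vocabulary=important_words).fit_transform(products['review_clean'])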
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
End of explanation
products['perfect']
Explanation: The products DataFrame now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
with open('module-4-assignment-train-idx.json', 'r') as f:
train_idx_lst = json.load(f)
train_idx_lst = [int(entry) for entry in train_idx_lst]
with open('module-4-assignment-validation-idx.json', 'r') as f:
validation_idx_lst = json.load(f)
validation_idx_lst = [int(entry) for entry in validation_idx_lst]
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
Loading the JSON files with the indices for the training data and the validation data into lists.
End of explanation
train_data = products.iloc[train_idx_lst]        # the JSON files hold positional row indices; .iloc avoids the deprecated .ix
validation_data = products.iloc[validation_idx_lst]
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
Explanation: Using the list of the training data indices and the validation data indices to get a DataFrame with the training data and a DataFrame with the validation data.
End of explanation
def get_numpy_data(data_frame, features, label):
data_frame['intercept'] = 1
features = ['intercept'] + features
features_frame = data_frame[features]
    feature_matrix = features_frame.values  # .values replaces the deprecated as_matrix() and returns the same NumPy array
label_array = data_frame[label]
label_array = label_array.values
return(feature_matrix, label_array)
Explanation: Convert DataFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from a DataFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
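# Quick shape check (illustrative, not required by the assignment): one row per review and
# 194 columns = 1 intercept + 193 word counts.
print 'train features:', feature_matrix_train.shape, ' validation features:', feature_matrix_valid.shape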
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
arg_exp = np.dot(coefficients,feature_matrix.transpose())
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1.0/(1.0 + np.exp(-arg_exp))
# return predictions
return predictions
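# Minimal sanity check (illustrative): with all-zero coefficients the link function should
# return a probability of exactly 0.5 for any feature vector.
print predict_probability(np.array([[1., 2., 3.]]), np.array([0., 0., 0.]))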
Explanation: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
End of explanation
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
derivative = np.dot(feature.transpose(), errors)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative = derivative - 2.0*l2_penalty*coefficient
return derivative
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
End of explanation
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
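# Optional gradient check (illustrative only, not part of the assignment): the analytic
# derivative of one coefficient should agree with a centered finite difference of the
# regularized log likelihood. Uses a small slice of the training data and an arbitrary
# non-zero coefficient vector.
def check_derivative(feature_matrix, sentiment, l2_penalty, j=1, eps=1e-6):
    coefficients = np.full(feature_matrix.shape[1], 0.01)
    errors = (sentiment == +1) - predict_probability(feature_matrix, coefficients)
    analytic = feature_derivative_with_L2(errors, feature_matrix[:, j], coefficients[j], l2_penalty, j == 0)
    plus, minus = coefficients.copy(), coefficients.copy()
    plus[j] += eps
    minus[j] -= eps
    numeric = (compute_log_likelihood_with_L2(feature_matrix, sentiment, plus, l2_penalty)
               - compute_log_likelihood_with_L2(feature_matrix, sentiment, minus, l2_penalty)) / (2. * eps)
    return analytic, numeric

print check_derivative(feature_matrix_train[:200], sentiment_train[:200], l2_penalty=4.0)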
Explanation: Quiz question: In the code above, was the intercept term regularized?
No
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j],coefficients[j],l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
coefficients[j] = coefficients[j] + step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
Decreases
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
def add_coefficients_to_table(coefficients, column_name):
return pd.Series(coefficients, index = column_name)
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
coeff_L2_0_table = add_coefficients_to_table(coefficients_0_penalty, ['intercept'] + important_words)
coeff_L2_4_table = add_coefficients_to_table(coefficients_4_penalty, ['intercept'] + important_words)
coeff_L2_10_table = add_coefficients_to_table(coefficients_10_penalty, ['intercept'] + important_words)
coeff_L2_1e2_table = add_coefficients_to_table(coefficients_1e2_penalty, ['intercept'] + important_words)
coeff_L2_1e3_table = add_coefficients_to_table(coefficients_1e3_penalty, ['intercept'] + important_words)
coeff_L2_1e5_table = add_coefficients_to_table(coefficients_1e5_penalty, ['intercept'] + important_words)
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
positive_words = coeff_L2_0_table.sort_values(ascending=False)[0:5].index.tolist()
negative_words = coeff_L2_0_table.sort_values(ascending=True)[0:5].index.tolist()
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
End of explanation
print "positive_words: ", positive_words
print "negative_words: ", negative_words
Explanation: Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
l2_pen_vals = [0.0, 4.0, 10.0, 1.0e2, 1.0e3, 1.0e5]
Explanation: Plotting the Coefficient Path with Increase in L2 Penalty
Let us observe the effect of increasing L2 penalty on the 10 words just selected.
First, let's put the 6 L2 penalty values we considered in a list.
End of explanation
feature_words_lst = ['intercept'] + important_words
Explanation: Next, let's put all the words we considered as features for the classification model plus the intercept features
End of explanation
pos_word_coeff_dict = {}
for curr_word in positive_words:
# Finding the index of the word we are considering in the feature_words_lst
word_index = feature_words_lst.index(curr_word)
# Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.
pos_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],
coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],
coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]
neg_word_coeff_dict = {}
for curr_word in negative_words:
# Finding the index of the word we are considering in the feature_words_lst
word_index = feature_words_lst.index(curr_word)
# Filling in the list for this index with the coefficient values for the 6 L2 penalties we considered.
neg_word_coeff_dict[curr_word] = [coefficients_0_penalty[word_index], coefficients_4_penalty[word_index],
coefficients_10_penalty[word_index], coefficients_1e2_penalty[word_index],
coefficients_1e3_penalty[word_index], coefficients_1e5_penalty[word_index] ]
Explanation: Now, we will fill-in 2 dictionaries, one with the 5 positive words as the index for the dictionary and the other with the 5 negative words as the index for the dictionary. For each index (word), we fill in a list which has the coefficient value of the index (word) for the 6 different L2 penalties we considered.
End of explanation
plt.figure(figsize=(10,6))
for pos_word in positive_words:
plt.semilogx(l2_pen_vals, pos_word_coeff_dict[pos_word], linewidth =2, label = pos_word )
plt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = "black")
plt.axis([4.0, 1.0e5, -0.5, 1.5])
plt.title("Positive Words Coefficient Path", fontsize=18)
plt.xlabel("L2 Penalty ($\lambda$)", fontsize=18)
plt.ylabel("Coefficient Value", fontsize=18)
plt.legend(loc = "upper right", fontsize=18)
Explanation: Plotting coefficient path for positive words
End of explanation
plt.figure(figsize=(10,6))
for neg_word in negative_words:
    plt.semilogx(l2_pen_vals, neg_word_coeff_dict[neg_word], linewidth =2, label = neg_word )
plt.plot(l2_pen_vals, [0,0,0,0,0,0],linewidth =2, linestyle = '--', color = "black")
plt.axis([4.0, 1.0e5, -1.5, 0.5])
plt.title("Negative Words Coefficient Path", fontsize=18)
plt.xlabel("L2 Penalty ($\lambda$)", fontsize=18)
plt.ylabel("Coefficient Value", fontsize=18)
plt.legend(loc = "lower right", fontsize=18)
Explanation: Plotting coefficient path for negative words
End of explanation
# Compute the scores as a dot product between feature_matrix and coefficients.
scores_l2_pen_0_train = np.dot(feature_matrix_train, coefficients_0_penalty)
scores_l2_pen_4_train = np.dot(feature_matrix_train, coefficients_4_penalty)
scores_l2_pen_10_train = np.dot(feature_matrix_train, coefficients_10_penalty)
scores_l2_pen_1e2_train = np.dot(feature_matrix_train, coefficients_1e2_penalty)
scores_l2_pen_1e3_train = np.dot(feature_matrix_train, coefficients_1e3_penalty)
scores_l2_pen_1e5_train = np.dot(feature_matrix_train, coefficients_1e5_penalty)
scores_l2_pen_0_valid = np.dot(feature_matrix_valid, coefficients_0_penalty)
scores_l2_pen_4_valid = np.dot(feature_matrix_valid, coefficients_4_penalty)
scores_l2_pen_10_valid = np.dot(feature_matrix_valid, coefficients_10_penalty)
scores_l2_pen_1e2_valid = np.dot(feature_matrix_valid, coefficients_1e2_penalty)
scores_l2_pen_1e3_valid = np.dot(feature_matrix_valid, coefficients_1e3_penalty)
scores_l2_pen_1e5_valid = np.dot(feature_matrix_valid, coefficients_1e5_penalty)
Explanation: The following 2 questions relate to the 2 figures above.
Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
True
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
False
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Recall from lecture that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
      +1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
      -1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Step 1: First compute the scores using feature_matrix and coefficients using a dot product. Do this for the training data and the validation data.
End of explanation
def get_pred_from_score(scores_array):
# First, set predictions equal to scores array
predictions = scores_array
# Replace <= 0 scores with negative review classification (-1)
scores_array[scores_array<=0] = -1
# Replace > 0 scores with positive review classification (+1)
scores_array[scores_array>0] = 1
return predictions
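# Note (illustrative): the helper above modifies the scores array it receives in place.
# An equivalent vectorized form that leaves its input untouched:
print np.where(np.array([-2.5, 0.0, 3.1]) > 0, +1, -1)   # -> [-1 -1  1]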
Explanation: Step 2: Using the formula above, compute the class predictions from the scores.
First, writing a helper function that will return an array with the predictions.
End of explanation
pred_l2_pen_0_train = get_pred_from_score(scores_l2_pen_0_train)
pred_l2_pen_4_train = get_pred_from_score(scores_l2_pen_4_train)
pred_l2_pen_10_train = get_pred_from_score(scores_l2_pen_10_train)
pred_l2_pen_1e2_train = get_pred_from_score(scores_l2_pen_1e2_train)
pred_l2_pen_1e3_train = get_pred_from_score(scores_l2_pen_1e3_train)
pred_l2_pen_1e5_train = get_pred_from_score(scores_l2_pen_1e5_train)
pred_l2_pen_0_valid = get_pred_from_score(scores_l2_pen_0_valid)
pred_l2_pen_4_valid = get_pred_from_score(scores_l2_pen_4_valid)
pred_l2_pen_10_valid = get_pred_from_score(scores_l2_pen_10_valid)
pred_l2_pen_1e2_valid = get_pred_from_score(scores_l2_pen_1e2_valid)
pred_l2_pen_1e3_valid = get_pred_from_score(scores_l2_pen_1e3_valid)
pred_l2_pen_1e5_valid = get_pred_from_score(scores_l2_pen_1e5_valid)
Explanation: Now, getting the predictions for the training data and the validation data for the 6 L2 penalties we considered.
End of explanation
train_accuracy = {}
train_accuracy[0] = np.sum(pred_l2_pen_0_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[4] = np.sum(pred_l2_pen_4_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[10] = np.sum(pred_l2_pen_10_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e2] = np.sum(pred_l2_pen_1e2_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e3] = np.sum(pred_l2_pen_1e3_train==sentiment_train)/float(len(sentiment_train))
train_accuracy[1e5] = np.sum(pred_l2_pen_1e5_train==sentiment_train)/float(len(sentiment_train))
validation_accuracy = {}
validation_accuracy[0] = np.sum(pred_l2_pen_0_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[4] = np.sum(pred_l2_pen_4_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[10] = np.sum(pred_l2_pen_10_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e2] = np.sum(pred_l2_pen_1e2_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e3] = np.sum(pred_l2_pen_1e3_valid==sentiment_valid)/float(len(sentiment_valid))
validation_accuracy[1e5] = np.sum(pred_l2_pen_1e5_valid==sentiment_valid)/float(len(sentiment_valid))
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
Explanation: Step 3: Getting the accuracy for the training set data and the validation set data
End of explanation
accuracy_training_data = [(train_accuracy[0], 0), (train_accuracy[4], 4), (train_accuracy[10], 10),
(train_accuracy[1e2], 1e2), (train_accuracy[1e3], 1e3), (train_accuracy[1e5], 1e5)]
accuracy_validation_data = [(validation_accuracy[0], 0), (validation_accuracy[4], 4), (validation_accuracy[10], 10),
(validation_accuracy[1e2], 1e2), (validation_accuracy[1e3], 1e3), (validation_accuracy[1e5], 1e5)]
Explanation: Creating a list of tuples with the entries as (accuracy, l2_penalty) for the training set and the validation set.
End of explanation
max(accuracy_training_data)[1]
Explanation: Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?
End of explanation
max(accuracy_validation_data)[1]
Explanation: Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?
End of explanation
<END_TASK> |
188 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Text Classification with spaCy
This walkthrough is based on this spaCy tutorial.
Train a convolutional neural network text classifier on the
IMDB dataset, using the TextCategorizer component. The dataset will be loaded
automatically via Thinc's built-in dataset loader. The model is added to
spacy.pipeline, and predictions are available via doc.cats.
Set Up Environment
This notebook has been tested with the following package versions
Step1: Set Up Verta
Step2: Imports
Step5: Helper Functions
Step6: Train Model
Step7: Log for Deployment
Create Wrapper Class
Verta deployment expects a particular interface for its models.
They must expose a predict() function, so we'll create a thin wrapper class around our spaCy pipeline.
Step8: Create Deployment Artifacts
Verta deployment also needs a couple more details about the model.
What do its inputs and outputs look like?
Step9: What PyPI-installable packages (with version numbers) are required to deserialize and run the model?
Step10: Log Model
Step11: Deployment
Step12: Click the link above to view your Experiment Run in the Verta Web App, and deploy it.
Once it's ready, you can make predictions against the deployed model. | <ASSISTANT_TASK:>
Python Code:
# Python >3.5
!pip install verta
!pip install spacy==2.1.6
!python -m spacy download en
Explanation: Text Classification with spaCy
This walkthrough is based on this spaCy tutorial.
Train a convolutional neural network text classifier on the
IMDB dataset, using the TextCategorizer component. The dataset will be loaded
automatically via Thinc's built-in dataset loader. The model is added to
spacy.pipeline, and predictions are available via doc.cats.
Set Up Environment
This notebook has been tested with the following package versions:
(you may need to change pip to pip3, depending on your own Python environment)
End of explanation
HOST = 'app.verta.ai'
PROJECT_NAME = 'Film Review Classification'
EXPERIMENT_NAME = 'spaCy CNN'
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST, use_git=False)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
Explanation: Set Up Verta
End of explanation
from __future__ import print_function
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import random
import six
import numpy as np
import thinc.extra.datasets
import spacy
from spacy.util import minibatch, compounding
Explanation: Imports
End of explanation
def load_data(limit=0, split=0.8):
    """Load data from the IMDB dataset."""
# Partition off part of the dataset to train and test
train_data, _ = thinc.extra.datasets.imdb()
random.shuffle(train_data)
train_data = train_data[-limit:]
texts, labels = zip(*train_data)
cats = [{"POSITIVE": bool(y), "NEGATIVE": not bool(y)} for y in labels]
split = int(len(train_data) * split)
return (texts[:split], cats[:split]), (texts[split:], cats[split:])
def evaluate(tokenizer, textcat, texts, cats):
    """Evaluate with text data; calculates precision, recall and F score."""
docs = (tokenizer(text) for text in texts)
tp = 0.0 # True positives
fp = 1e-8 # False positives
fn = 1e-8 # False negatives
tn = 0.0 # True negatives
for i, doc in enumerate(textcat.pipe(docs)):
gold = cats[i]
for label, score in doc.cats.items():
if label not in gold:
continue
if label == "NEGATIVE":
continue
if score >= 0.5 and gold[label] >= 0.5:
tp += 1.0
elif score >= 0.5 and gold[label] < 0.5:
fp += 1.0
elif score < 0.5 and gold[label] < 0.5:
tn += 1
elif score < 0.5 and gold[label] >= 0.5:
fn += 1
precision = tp / (tp + fp)
recall = tp / (tp + fn)
if (precision + recall) == 0:
f_score = 0.0
else:
f_score = 2 * (precision * recall) / (precision + recall)
return {"textcat_p": precision, "textcat_r": recall, "textcat_f": f_score}
Explanation: Helper Functions
End of explanation
hyperparams = {
'model':'en',
'n_iter': 2, # epochs
'n_texts': 500, # num of training samples
'architecture': 'simple_cnn',
'num_samples': 1000,
'train_test_split': 0.8,
'dropout': 0.2
}
run.log_hyperparameters(hyperparams)
# using the basic en model
try:
nlp = spacy.load(hyperparams['model']) # load existing spaCy model
except OSError:
nlp = spacy.blank(hyperparams['model']) # create blank Language class
print("Created blank '{}' model".format(hyperparams['model']))
else:
print("Loaded model '{}'".format(nlp))
# add the text classifier to the pipeline if it doesn't exist
if "textcat" not in nlp.pipe_names:
textcat = nlp.create_pipe(
"textcat",
config={
"exclusive_classes": True,
"architecture": hyperparams['architecture'],
}
)
nlp.add_pipe(textcat, last=True)
# otherwise, get it, so we can add labels to it
else:
textcat = nlp.get_pipe("textcat")
# add label to text classifier
_= textcat.add_label("POSITIVE")
_= textcat.add_label("NEGATIVE")
# load the IMDB dataset
print("Loading IMDB data...")
(train_texts, train_cats), (dev_texts, dev_cats) = load_data(limit=hyperparams['num_samples'],
split=hyperparams['train_test_split'])
print(
"Using {} examples ({} training, {} evaluation)".format(
hyperparams['num_samples'], len(train_texts), len(dev_texts)
)
)
train_data = list(zip(train_texts, [{"cats": cats} for cats in train_cats]))
# sample train data
train_data[:1]
# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "textcat"]
print("other pipes:", other_pipes)
with nlp.disable_pipes(*other_pipes): # only train textcat
optimizer = nlp.begin_training()
print("Training the model...")
print("{:^5}\t{:^5}\t{:^5}\t{:^5}".format("LOSS", "P", "R", "F"))
batch_sizes = compounding(4.0, 32.0, 1.001)
for i in range(hyperparams['n_iter']):
losses = {}
# batch up the examples using spaCy's minibatch
random.shuffle(train_data)
batches = minibatch(train_data, size=batch_sizes)
for batch in batches:
texts, annotations = zip(*batch)
nlp.update(texts, annotations, sgd=optimizer, drop=hyperparams['dropout'], losses=losses)
with textcat.model.use_params(optimizer.averages):
# evaluate on the dev data split off in load_data()
scores = evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats)
print(
"{0:.3f}\t{1:.3f}\t{2:.3f}\t{3:.3f}".format( # print a simple table
losses["textcat"],
scores["textcat_p"],
scores["textcat_r"],
scores["textcat_f"],
)
)
run.log_observation('loss', losses['textcat'])
run.log_observation('precision', scores['textcat_p'])
run.log_observation('recall', scores['textcat_r'])
run.log_observation('f_score', scores['textcat_f'])
Explanation: Train Model
End of explanation
class TextClassifier:
def __init__(self, nlp):
self.nlp = nlp
def predict(self, input_list): # param must be a list/batch of inputs
predictions = []
for text in input_list:
scores = self.nlp(text).cats
if scores['POSITIVE'] > scores['NEGATIVE']:
predictions.append("POSITIVE")
else:
predictions.append("NEGATIVE")
return np.array(predictions) # response currently must be a NumPy array
input_list = [
"This movie was subpar at best.",
"Plot didn't make sense."
]
model = TextClassifier(nlp)
model.predict(input_list)
Explanation: Log for Deployment
Create Wrapper Class
Verta deployment expects a particular interface for its models.
They must expose a predict() function, so we'll create a thin wrapper class around our spaCy pipeline.
End of explanation
from verta.utils import ModelAPI # Verta-provided utility class
model_api = ModelAPI(
input_list, # example inputs
model.predict(input_list), # example outputs
)
Explanation: Create Deployment Artifacts
Verta deployment also needs a couple more details about the model.
What do its inputs and outputs look like?
End of explanation
requirements = ["numpy", "spacy", "thinc"]
# this could also have been a path to a requirements.txt file on disk
run.log_requirements(requirements)
Explanation: What PyPI-installable packages (with version numbers) are required to deserialize and run the model?
End of explanation
# test the trained model
test_text = 'The Lion King was very entertaining. The movie was visually spectacular.'
doc = nlp(test_text)
print(test_text)
print(doc.cats)
run.log_model(
model,
model_api=model_api,
)
Explanation: Log Model
End of explanation
run
Explanation: Deployment
End of explanation
from verta._demo_utils import DeployedModel
deployed_model = DeployedModel(HOST, run.id)
deployed_model.predict(["I would definitely watch this again!"])
Explanation: Click the link above to view your Experiment Run in the Verta Web App, and deploy it.
Once it's ready, you can make predictions against the deployed model.
End of explanation
<END_TASK> |
189 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
Say I have two dataframes: | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df1 = pd.DataFrame({'Timestamp': ['2019/04/02 11:00:01', '2019/04/02 11:00:15', '2019/04/02 11:00:29', '2019/04/02 11:00:30'],
'data': [111, 222, 333, 444]})
df2 = pd.DataFrame({'Timestamp': ['2019/04/02 11:00:14', '2019/04/02 11:00:15', '2019/04/02 11:00:16', '2019/04/02 11:00:30', '2019/04/02 11:00:31'],
'stuff': [101, 202, 303, 404, 505]})
df1['Timestamp'] = pd.to_datetime(df1['Timestamp'])
df2['Timestamp'] = pd.to_datetime(df2['Timestamp'])
def g(df1, df2):
return pd.merge_asof(df2, df1, on='Timestamp', direction='forward')
result = g(df1.copy(), df2.copy())
<END_TASK> |
190 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
2A.data - Classification, régression, anomalies - énoncé
Le jeu de données Wine Quality Data Set contient 5000 vins décrits par leurs caractéristiques chimiques et évalués par un expert. Peut-on s'approcher de l'expert à l'aide d'un modèle de machine learning.
Step1: Les données
On peut les récupérer sur github...data_2a.
Step2: Exercice 1
Step3: Exercice 5
Step4: Exercice 6 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.data - Classification, régression, anomalies - énoncé
The Wine Quality Data Set contains 5000 wines described by their chemical characteristics and rated by an expert. Can we get close to the expert's ratings with a machine learning model?
End of explanation
from ensae_teaching_cs.data import wines_quality
from pandas import read_csv
df = read_csv(wines_quality(local=True, filename=True))
df.head()
Explanation: The data
They can be retrieved from github...data_2a.
End of explanation
from sklearn.metrics import roc_curve, auc
# labels = pipe.steps[1][1].classes_
# y_score = pipe.predict_proba(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
# for i, cl in enumerate(labels):
# fpr[cl], tpr[cl], _ = roc_curve(y_test == cl, y_score[:, i])
# roc_auc[cl] = auc(fpr[cl], tpr[cl])
# fig, ax = plt.subplots(1, 1, figsize=(8,4))
# for k in roc_auc:
# ax.plot(fpr[k], tpr[k], label="c%d = %1.2f" % (k, roc_auc[k]))
# ax.legend();
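# A few extra hints (a minimal sketch, not the expected solution; the column names
# "quality" and "color" are assumptions about this dataset):
# df["quality"].hist(bins=20)                                      # exercise 1
# from sklearn.model_selection import train_test_split             # exercise 2
# train, test = train_test_split(df, test_size=0.25)
# from sklearn.preprocessing import OneHotEncoder                   # exercise 3
# from sklearn.compose import make_column_transformer
# from sklearn.pipeline import make_pipeline
# from sklearn.linear_model import LogisticRegression
# pipe = make_pipeline(
#     make_column_transformer((OneHotEncoder(), ["color"]), remainder="passthrough"),
#     LogisticRegression(max_iter=1000))
# pipe.fit(train.drop("quality", axis=1), train["quality"])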
Explanation: Exercise 1: plot the distribution of the ratings
The hist function is simple and effective.
Exercise 2: train / test split
The function is so widely used that you will find it quickly.
Exercise 3: the color variable is not numeric
Hint: OneHotEncoder.
Exercise 3: a first classifier
You will find this one on your own too. A few functions can help you evaluate the model: confusion_matrix, classification_report.
Much better.
Exercise 4: ROC curve
A few hints...
End of explanation
from sklearn.covariance import EllipticEnvelope
Explanation: Exercise 5: anomalies
An anomaly is an outlier. This amounts to saying that the probability of such an event occurring again is low. A well-known model is EllipticEnvelope. We assume that when the model flags an observation as an anomaly, a predictive model will have a harder time predicting it. We reuse the previous pipeline, changing only the last step.
End of explanation
from sklearn.ensemble import RandomForestRegressor
Explanation: Exercise 6: regression
The rating is numeric, so why not try a regression.
End of explanation
<END_TASK> |
191 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Why Violin Plots are Awesome for Feature Engineering
Using NLP to Identify Similar Products
At Wayfair, technology and data expertise enable data scientists to transform new web datasets into intelligent machine algorithms that re-imagine how traditional commerce works. In this post, we introduce how visual tools like Violin Plots amplify our data acumen to unlock deep insights. The savvy data scientist recognizes the value of a Violin Plot when engineering new model features. We share how this method is applied in an e-commerce example where fuzzy text matching systems are developed to identify similar products sold online.
Key article takeaways
Step1: What I like most about violin plots is that they show you the entire distribution of your data. If data inputs violate your assumptions (e.g. multimodal, full of null values, skewed by bad imputation or extreme outliers) you see the problems at a quick glance and in incredible detail. This is better than a few representative percentiles as in a box and whisker plot, or a table of summary statistics. They avoid the problem of oversaturation prevalent in scatter plots with lots of points, and reveal outliers more clearly than you would in a histogram without a lot of fine-tuning.
We’ll illustrate these advantages in a simple example where we use fuzzy string matching to engineer features for a binary classification problem.
An Example using NLP to Identify Similar Products
At Wayfair, we develop sophisticated algorithms to parse large product catalogs and identify similar products. Part of this project involves engineering features for a model which flags two products as the same or not. Let’s start from a dataset that provides several pairs of product names and a label indicating whether or not they refer to the same item.
Step2: Fuzzywuzzy Similarity Scores
For the purpose of this fuzzy text matching illustration, we’ll use an open-source Python library called fuzzywuzzy (developed by the fine folks at SeatGeek). This library contains several functions for measuring the similarity between two strings. Each function takes in two strings and returns a number between 0 and 100 representing the similarity between the strings. Functions differ in their conventions, however, and consequently the results often differ from function to function.
Step5: It’s rarely obvious which function is best for a given problem. Let’s consider five different fuzzy matching methods and compute similarity scores for each pair of strings. Using these scores, we’ll create some violin plots to determine which method is best for distinguishing between matches and not matches. (You could also consider combinations of scores though this comes at a higher computational cost.)
Step6: A few lines of code is all it takes to generate split violin plots using the Seaborn library. The purple distribution depicts a smoothed (sideways) histogram of fuzzy matching scores when Match is True, while the light-green shows the distribution of similarity scores when Match is False. When two distributions have little or no overlap along the y-axis, the fuzzy matching function will do a better job distinguishing between our binary classes.
Step7: Generally, these fuzzy matching scores do a good job in distinguishing between observations where the two names refer to the same product. For any method, a pair of names with a similarity score of 50 or more will probably refer to the same product.
Still, we can see that some fuzzy matching functions do a better job than others in distinguishing between True and False observations. The token-set-ratio plot seems to have the least overlap between the True and False distributions, followed by the plots for token-sort-ratio and WRatio. Of our five similarity scores, the scores from these methods should perform the best in any predictive model. In comparison, notice how much more the True and False distributions overlap for the partial_ratio and QRatio methods. Scores from these methods will be less helpful as features.
Conclusion
Step9: For this data, the correlation coefficients give a similar ranking as achieved using the violin plots. The token-set-ratio method gives the strongest correlation to the Match variable while the QRatio method gives the weakest correlation.
If our goal was only to identify the best fuzzywuzzy function to use, we apparently could have made our selection using correlation coefficients instead of violin plots. In general, however, violin plots are much more reliable and informative. Consider the following (pathological) example.
Step10: In these violin plots, the similarity scores on the left appear to be more helpful in separating between matches and not-matches. There is less overlap between the True and False observations and the observations are more tightly clustered into their respective groups.
However, notice that the relationship between the similarity scores and the True/False indicator is not at all linear or even monotone. As a result, correlation coefficients can fail to correctly guide our decision on which set of scores to use. Is this true? Let’s take a look. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from fuzzywuzzy import fuzz
import numpy as np
# some settings to be used throughout the notebook
pd.set_option('max_colwidth', 70)
wf_colors = ["#C7DEB1","#9763A4"]
# make some fake data for a demo split-violin plot
data1 = pd.DataFrame({'Variable': np.random.randn(100)*.2 + 1, 'Label':'Binary Case 1'})
data2 = pd.DataFrame({'Variable': np.random.randn(100)*.3, 'Label':'Binary Case 2'})
df = data1.append(data2)
# violin plots in seaborn require 2 categorical variables ('x' and 'hue'). We use 'Label' for hue.
df['Category'] = '' # placeholder for 'x' categorical variable
# make the plot
fig, ax = plt.subplots(1,1,figsize=(8, 6))
sns.violinplot(x='Category', y="Variable", hue="Label", data=df, split=True, ax=ax, palette=wf_colors)
ax.set_xlabel(' ')
ax.set_ylabel('Some Continuous Variable', fontsize=16)
ax.set_title('Example Split Violin Plot', fontsize=18)
plt.show()
Explanation: Why Violin Plots are Awesome for Feature Engineering
Using NLP to Identify Similar Products
At Wayfair, technology and data expertise enable data scientists to transform new web datasets into intelligent machine algorithms that re-imagine how traditional commerce works. In this post, we introduce how visual tools like Violin Plots amplify our data acumen to unlock deep insights. The savvy data scientist recognizes the value of a Violin Plot when engineering new model features. We share how this method is applied in an e-commerce example where fuzzy text matching systems are developed to identify similar products sold online.
Key article takeaways:
* Skillful usage of Violin Plots can improve feature engineering and selection
* A good Violin Plot communicates more information about data irregularities than standard summary statistics and correlation coefficients
Good data visualizations are helpful at every step of a data science project. When starting out, the right data visualizations can inform how one should frame their data science problem. Visualizations also can help guide decisions surrounding which data inputs to use, and are helpful when evaluating model accuracy and feature importance. When debugging an existing model, visualizations help diagnose data irregularities and bias in model predictions. Finally, when communicating with business stakeholders, the right visualization makes a clear point without any additional explanation.
A type of data visualization that is particularly helpful when working on binary classification problems is the split violin plot. In my experience, this is a type of plot that is not nearly as famous as it should be. In brief, a split violin plot takes a variable grouped by two categories and plots a smoothed histogram of the variable in each group on opposite sides of a shared axis. The code below makes a quick example plot to illustrate.
End of explanation
# read in data
data = pd.read_csv('productnames.csv')
df = data[['Product1', 'Product2', 'Match']]
# what does the data look like?
df.head()
Explanation: What I like most about violin plots is that they show you the entire distribution of your data. If data inputs violate your assumptions (e.g. multimodal, full of null values, skewed by bad imputation or extreme outliers) you see the problems at a quick glance and in incredible detail. This is better than a few representative percentiles as in a box and whisker plot, or a table of summary statistics. They avoid the problem of oversaturation prevalent in scatter plots with lots of points, and reveal outliers more clearly than you would in a histogram without a lot of fine-tuning.
We’ll illustrate these advantages in a simple example where we use fuzzy string matching to engineer features for a binary classification problem.
An Example using NLP to Identify Similar Products
At Wayfair, we develop sophisticated algorithms to parse large product catalogs and identify similar products. Part of this project involves engineering features for a model which flags two products as the same or not. Let’s start from a dataset that provides several pairs of product names and a label indicating whether or not they refer to the same item.
End of explanation
print('Qratio: ', fuzz.QRatio('brown leather sofa', '12ft leather dark brown sofa'))
print('Wratio: ', fuzz.WRatio('brown leather sofa', '12ft leather dark brown sofa'))
print('token_set_ratio: ', fuzz.token_set_ratio('brown leather sofa', '12ft leather dark brown sofa'))
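# Added for illustration: the other scorers compared later in this post are called the same way, e.g.:
# print('partial_ratio: ', fuzz.partial_ratio('brown leather sofa', '12ft leather dark brown sofa'))
# print('token_sort_ratio: ', fuzz.token_sort_ratio('brown leather sofa', '12ft leather dark brown sofa'))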
Explanation: Fuzzywuzzy Similarity Scores
For the purpose of this fuzzy text matching illustration, we’ll use an open-source Python library called fuzzywuzzy (developed by the fine folks at SeatGeek). This library contains several functions for measuring the similarity between two strings. Each function takes in two strings and returns a number between 0 and 100 representing the similarity between the strings. Functions differ in their conventions, however, and consequently the results often differ from function to function.
End of explanation
def get_scores(df, func, score_name):
    """Function for getting fuzzy similarity scores using a specified function."""
    def _fuzzyscore(row, func=func):
        """Fuzzy matching score on two columns of a pandas DataFrame. Called via df.apply()

        Args:
            row (df row instance): row of pandas DataFrame with columns 'Product1' and 'Product2'
            func (function): returns a numeric similarity score between 'Product1' and 'Product2';
                defaults to the function passed to get_scores
        """
        return func(row['Product1'], row['Product2'])
    # get the actual scores
    df[score_name] = df.apply(_fuzzyscore, axis=1)
#get scores for different fuzzy functions
get_scores(df, fuzz.QRatio, 'QRatio')
get_scores(df, fuzz.WRatio, 'WRatio')
get_scores(df, fuzz.partial_ratio, 'partial_ratio')
get_scores(df, fuzz.token_set_ratio, 'token_set_ratio')
get_scores(df, fuzz.token_sort_ratio, 'token_sort_ratio')
df.head()
Explanation: It’s rarely obvious which function is best for a given problem. Let’s consider five different fuzzy matching methods and compute similarity scores for each pair of strings. Using these scores, we’ll create some violin plots to determine which method is best for distinguishing between matches and not matches. (You could also consider combinations of scores though this comes at a higher computational cost.)
End of explanation
plot_df = pd.melt(df, id_vars=['Match'], value_vars=['QRatio','WRatio', 'partial_ratio','token_set_ratio', 'token_sort_ratio'])
plot_df.columns = ['Match', 'Function', 'Fuzzy Score']
fig, ax = plt.subplots(1,1, figsize=(14, 5))
sns.violinplot(x="Function", y="Fuzzy Score", hue="Match", data=plot_df, split=True, ax=ax, palette=wf_colors)
ax.set_ylabel('Similarity Score', fontsize=18)
ax.set_xlabel('')
ax.legend(loc='lower right', fontsize=13, ncol=2)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title('Fuzzywuzzy Methods: similarity scores for matches and not matches', fontsize=20)
plt.show()
# make sure you have a "plots" folder
fig.savefig('blog_pic1.png')
Explanation: A few lines of code is all it takes to generate split violin plots using the Seaborn library. The purple distribution depicts a smoothed (sideways) histogram of fuzzy matching scores when Match is True, while the light-green shows the distribution of similarity scores when Match is False. When two distributions have little or no overlap along the y-axis, the fuzzy matching function will do a better job distinguishing between our binary classes.
End of explanation
df[['QRatio','WRatio', 'partial_ratio','token_set_ratio', 'token_sort_ratio', 'Match']].corr()
Explanation: Generally, these fuzzy matching scores do a good job in distinguishing between observations where the two names refer to the same product. For any method, a pair of names with a similarity score of 50 or more will probably refer to the same product.
Still, we can see that some fuzzy matching functions do a better job than others in distinguishing between True and False observations. The token-set-ratio plot seems to have the least overlap between the True and False distributions, followed by the plots for token-sort-ratio and WRatio. Of our five similarity scores, the scores from these methods should perform the best in any predictive model. In comparison, notice how much more the True and False distributions overlap for the partial_ratio and QRatio methods. Scores from these methods will be less helpful as features.
Conclusion: Violin plots suggest that of our five similarity scores, token-set-ratio would be the best feature in a predictive model, especially compared to the partial-ratio or QRatio methods.
Why Violin Plots are Superior to More Conventional Analyses
For comparison, let’s look at the Pearson correlation coefficients between our fuzzy-matching scores and our indicator variable for whether the pair is a match or not.
End of explanation
def make_fake_data(low, high, n=300):
    """Stacks three draws from a uniform distribution w/ bounds given by 'low' and 'high'

    Args:
        low (list of ints): lower bounds for the three random draws
        high (list of ints): upper bounds for the three random draws
    """
    rand_array = np.hstack((np.random.uniform(low=low[0], high=high[0], size=n),
                            np.random.uniform(low=low[1], high=high[1], size=n),
                            np.random.uniform(low=low[2], high=high[2], size=n)
                            ))
    return rand_array
# make fake data
true1 = make_fake_data([3, 33, 63], [12, 44, 72])
false1 = make_fake_data([18, 48, 78], [27, 57, 84])
true2 = make_fake_data([0, 30, 60], [15, 45, 75])
false2 = make_fake_data([15, 45, 75], [30, 60, 90])
fake_match_df = pd.DataFrame({'score1': false1, 'score2': false2, 'Match': np.full_like(false1, 0, dtype=bool)})
true_match_df = pd.DataFrame({'score1': true1, 'score2':true2, 'Match': np.full_like(true1, 1, dtype=bool)})
df = true_match_df.append(fake_match_df)
plot_df = pd.melt(df, id_vars=['Match'], value_vars=['score1', 'score2'])
plot_df.columns = ['Match', 'Function', 'Fuzzy Score']
fig, ax = plt.subplots(1,1, figsize=(12, 5))
sns.violinplot(x='Function', y='Fuzzy Score', hue="Match", data=plot_df, split=True, ax=ax, bw=.1, palette=["#C7DEB1","#9763A4"])
ax.set_ylabel('Similarity Score', fontsize=18)
ax.set_xlabel('')
ax.legend(loc='upper right', fontsize=12, ncol=2)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title('Irregular Data: Why Violin Plots are Better than Correlation Coefficients', fontsize=20)
fig.savefig('blog_pic2.png')
Explanation: For this data, the correlation coefficients give a similar ranking as achieved using the violin plots. The token-set-ratio method gives the strongest correlation to the Match variable while the QRatio method gives the weakest correlation.
If our goal was only to identify the best fuzzywuzzy function to use, we apparently could have made our selection using correlation coefficients instead of violin plots. In general, however, violin plots are much more reliable and informative. Consider the following (pathological) example.
End of explanation
df.corr()
Explanation: In these violin plots, the similarity scores on the left appear to be more helpful in separating between matches and not-matches. There is less overlap between the True and False observations and the observations are more tightly clustered into their respective groups.
However, notice that the relationship between the similarity scores and the True/False indicator is not at all linear or even monotone. As a result, correlation coefficients can fail to correctly guide our decision on which set of scores to use. Is this true? Let’s take a look.
End of explanation
<END_TASK> |
192 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Text Data Explanation Benchmarking
Step1: Load Data and Model
Step2: Class Label Mapping
Step3: Define Score Function
Step4: Create Explainer Object
Step5: Run SHAP Explanation
Step6: Define Metrics (Sort Order & Perturbation Method)
Step7: Benchmark Explainer | <ASSISTANT_TASK:>
Python Code:
import copy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import shap.benchmark as benchmark
import shap
import scipy as sp
import nlp
import torch
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('max_colwidth', None)
Explanation: Text Data Explanation Benchmarking: Emotion Multiclass Classification
This notebook demonstrates how to use the benchmark utility to benchmark the performance of an explainer for text data. In this demo, we showcase explanation performance for partition explainer on an Emotion Multiclass Classification model. The metrics used to evaluate are "keep positive" and "keep negative". The masker used is Text Masker.
The new benchmark utility uses the new API with MaskedModel as a wrapper around the user-imported model and evaluates masked values of the inputs.
End of explanation
train, test = nlp.load_dataset("emotion", split = ["train", "test"])
data={'text':train['text'],
'emotion':train['label']}
data = pd.DataFrame(data)
tokenizer = AutoTokenizer.from_pretrained("nateraw/bert-base-uncased-emotion",use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("nateraw/bert-base-uncased-emotion")
Explanation: Load Data and Model
End of explanation
# set mapping between label and id
id2label = model.config.id2label
label2id = model.config.label2id
labels = sorted(label2id, key=label2id.get)
Explanation: Class Label Mapping
End of explanation
def f(x):
    # tokenize each input string to a fixed length of 128 tokens
    tv = torch.tensor([tokenizer.encode(v, padding='max_length', max_length=128, truncation=True) for v in x])
    # attention mask: 1 for real tokens, 0 for padding
    attention_mask = (tv != 0).type(torch.int64)
    outputs = model(tv, attention_mask=attention_mask)[0].detach().numpy()
    # softmax over the class logits, then logit-transform the class probabilities
    scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
    val = sp.special.logit(scores)
    return val
Explanation: Define Score Function
End of explanation
explainer = shap.Explainer(f,tokenizer,output_names=labels)
Explanation: Create Explainer Object
End of explanation
shap_values = explainer(data['text'][0:20])
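# Optional, not in the original notebook: a single explanation can be inspected
# interactively in a notebook with shap's text plot.
# shap.plots.text(shap_values[0])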
Explanation: Run SHAP Explanation
End of explanation
sort_order = 'positive'
perturbation = 'keep'
Explanation: Define Metrics (Sort Order & Perturbation Method)
End of explanation
sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation)
xs, ys, auc = sequential_perturbation.model_score(shap_values, data['text'][0:20])
sequential_perturbation.plot(xs, ys, auc)
sort_order = 'negative'
perturbation = 'keep'
sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation)
xs, ys, auc = sequential_perturbation.model_score(shap_values, data['text'][0:20])
sequential_perturbation.plot(xs, ys, auc)
Explanation: Benchmark Explainer
End of explanation
<END_TASK> |
193 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Principal Component Analysis with Iris Dataset
Step1: Load Iris dataset
The Iris Dataset here.
This data set consists of 3 different types of irises' (Setosa, Versicolour, and Virginica) petal and sepal length, stored in a 150x4 numpy.ndarray.
The rows being the samples and the columns being
Step2: PCA
Can we reduce the dimensionality of our dataset without losing much information? PCA will help us decide.
Step3: The P.C. #0 explained variance is one order of magnitude higher than P.C. #1 and #2, and two orders of magnitude higher than P.C. #3. We can use this knowledge to reduce our dataset from 4D to 3D.
We could have done everything in one line by setting the number of components we want (3), fitting the PCA and transforming it to 3D | <ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
from sklearn.decomposition import PCA
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
%matplotlib notebook
Explanation: Principal Component Analysis with Iris Dataset
End of explanation
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = pd.Series(iris.target, name='FlowerType')
X.head()
plt.figure(2, figsize=(8, 6))
plt.clf()
# Plot the training points
plt.scatter(X['sepal length (cm)'], X['sepal width (cm)'], s=35, c=y, cmap=plt.cm.brg)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.title('Sepal length vs. Sepal width')
plt.show()
Explanation: Load Iris dataset
The Iris Dataset here.
This data set consists of 3 different types of irises' (Setosa, Versicolour, and Virginica) petal and sepal length, stored in a 150x4 numpy.ndarray.
The rows being the samples and the columns being: Sepal Length, Sepal Width, Petal Length and Petal Width.
End of explanation
pca_iris = PCA(n_components=3).fit(iris.data)
pca_iris.explained_variance_ratio_
pca_iris.transform(iris.data)
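# Optional check (not in the original notebook): total variance retained by the
# three components kept above.
# print(sum(pca_iris.explained_variance_ratio_))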
Explanation: PCA
Can we reduce the dimensionality of our dataset without losing much information? PCA will help us decide.
End of explanation
iris_reduced = PCA(n_components=3).fit(iris.data)
iris_reduced.components_
iris_reduced = PCA(n_components=3).fit_transform(iris.data)
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
X_reduced = PCA(n_components=3).fit_transform(iris.data)
ax.scatter(iris_reduced[:, 0], iris_reduced[:, 1], iris_reduced[:, 2],
cmap=plt.cm.Paired, c=iris.target)
for k in range(3):
ax.scatter(iris_reduced[y==k, 0], iris_reduced[y==k, 1], iris_reduced[y==k, 2], label=iris.target_names[k])
ax.set_title("First three P.C.")
ax.set_xlabel("P.C. 1")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("P.C. 2")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("P.C. 3")
ax.w_zaxis.set_ticklabels([])
plt.legend(numpoints=1)
plt.show()
Explanation: The P.C. #0 explained variance is one order of magnitude higher than P.C. #1 and #2, and two orders of magnitude higher than P.C. #3. We can use this knowledge to reduce our dataset from 4D to 3D.
We could have done everything in one line by setting the number of components we want (3), fitting the PCA and transforming it to 3D:
End of explanation
<END_TASK> |
194 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Notebook 3.3
Step1: Newick tree files
We learned in notebook 3.2 that the file format to store phylogenetic trees is simply a text file containing a string of names within nested parentheses. When researchers publish phylogenetic results these are the types of tree files that they produce and publish. The files are sometimes saved as something like "treefile.newick" or "trees.nwk" or "birds.tree", etc. There are databases online where such files are saved including those that are specialized for tree data (like TreeBase) or general data repositories like Data Dryad.
Newick trees
The toytree.tree() function reads the newick file to create a Tree object in memory. As input it can take a newick string as text, or it can take a URL pointing to a text file online, or it can take a filename that is located on your computer.
Step2: An example using a URL from treebase | <ASSISTANT_TASK:>
Python Code:
import toytree
Explanation: Notebook 3.3: Newick Assignment
Complete the notebook, then download it as an HTML file (toolbar -> File -> Download as) and submit your assignment by emailing it to Natalie (natalie.niepoth@columbia.edu).
End of explanation
newick = "((a,b),(c, d));"
tre = toytree.tree(newick)
tre.draw();
Explanation: Newick tree files
We learned in notebook 3.2 that the file format to store phylogenetic trees is simply a text file containing a string of names within nested parentheses. When researchers publish phylogenetic results these are the types of tree files that they produce and publish. The files are sometimes saved as something like "treefile.newick" or "trees.nwk" or "birds.tree", etc. There are databases online where such files are saved including those that are specialized for tree data (like TreeBase) or general data repositories like Data Dryad.
Newick trees
The toytree.tree() function reads the newick file to create a Tree object in memory. As input it can take a newick string as text, or it can take a URL pointing to a text file online, or it can take a filename that is located on your computer.
End of explanation
URL = "https://treebase.org/treebase-web/search/downloadATree.html?id=11298&treeid=31264"
tre = toytree.tree(URL)
tre.draw(tip_labels_align=True, height=800, width=600);
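# Added for illustration (not in the original notebook): toytree.tree() also accepts
# a path to a newick file on disk; the filename below is hypothetical.
# tre_local = toytree.tree("./mytree.nwk")
# tre_local.draw();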
Explanation: An example using a URL from treebase
End of explanation
<END_TASK> |
195 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
<END_TASK> |
196 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Manuscript4 - Computational Model -- Group-level for Fig. 4
Master code for Ito et al., 2017¶
Takuya Ito (takuya.ito@rutgers.edu)
The model (see Stern et al., 2014)
$$ \frac{dx_{i}}{dt} \tau_{i} = -x_{i}(t) + s \hspace{3 pt} \phi \hspace{1 pt} \bigg{(} x_i(t) \bigg{)} + g \bigg{(} \sum_{j\neq i}^{N} W_{ij} \hspace{3 pt} \phi \hspace{1 pt} \bigg{(} x_{j}(t) \bigg{)} \bigg{)} + I_{i}(t)$$
where $x_i$ is the activity of region $i$, $\tau_{i}$ is the time constant for region $i$, $s$ is the recurrent (local) coupling, $g$ is the global coupling parameter, $\phi$ is the bounded transfer function (in this scenario is the hyperbolic tangent), $W_{xy}$ is the synaptic connectivity matrix, and $I$ is the task-stimulation (if any).
Step1: Simulate Tasks
4 task-related communites are 2-5
Simulations are run for 100 seconds
Sampling rate of 10 ms
global coupling paramter g = 1.0
local coupling parameter s = 1.0
Tasks last for 4 seconds @ every 6th second (i.e., 6s, 16s, 26s) for a total of 10 trials in 100 seconds
Stimulate 1/4th of the nodes per community (so if 50 nodes per community, stimulate 13)
2.1 Run information transfer mapping with simulated neural data
Due to computational cost, simulations were run on supercomputer and then copied over to lab server. Code is identical to code presented in demo tutorial
3.0 Run data analyses on simulated neural data
Step2: 3.0.1a visualize synaptic matrix (sample subject)
Step3: 3.0.1 Visualize actual estimated 'intrinsic FC's from Pearson FC and MultRegFC
Step6: 3.1 Run Task information transfer mapping classifying each of the different rules (4-way classification)
3.1.1 Define some basic functions
Step8: 3.2.1 Define some basic functions
Step9: 3.2.3 Run information transfer mapping analysis on subjects using MultReg FC
Step10: 3.2.3 Statistical testing on results and plot
Step11: 3.3 Statistical testing on results and plot using FWE-correction (permutation testing)
Step12: 4.0 Compute BGC using resting-state MultRegFC
Step13: Correct for FWE instead | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
% matplotlib inline
import scipy.stats as stats
import statsmodels.api as sm
import CompModel_v7 as cm
cm = reload(cm)
import multiprocessing as mp
import sklearn.preprocessing as preprocessing
import sklearn.svm as svm
import statsmodels.sandbox.stats.multicomp as mc
import multregressionconnectivity as mreg
import sys
sys.path.append('utils/')
import permutationTesting as pt
import os
os.environ['OMP_NUM_THREADS'] = str(1)
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
Explanation: Manuscript4 - Computational Model -- Group-level for Fig. 4
Master code for Ito et al., 2017
Takuya Ito (takuya.ito@rutgers.edu)
The model (see Stern et al., 2014)
$$ \frac{dx_{i}}{dt} \tau_{i} = -x_{i}(t) + s \hspace{3 pt} \phi \hspace{1 pt} \bigg{(} x_i(t) \bigg{)} + g \bigg{(} \sum_{j\neq i}^{N} W_{ij} \hspace{3 pt} \phi \hspace{1 pt} \bigg{(} x_{j}(t) \bigg{)} \bigg{)} + I_{i}(t)$$
where $x_i$ is the activity of region $i$, $\tau_{i}$ is the time constant for region $i$, $s$ is the recurrent (local) coupling, $g$ is the global coupling parameter, $\phi$ is the bounded transfer function (here, the hyperbolic tangent), $W_{ij}$ is the synaptic connectivity matrix, and $I_{i}(t)$ is the task stimulation (if any).
End of explanation
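# Illustrative only (editor's addition): a bare-bones forward-Euler integration of the
# rate equation above. The real simulations in this notebook are produced by the
# CompModel_v7 module; this sketch just makes the update rule explicit. It assumes
# W has a zero diagonal, so that W.dot(phi(x)) implements the sum over j != i.
def euler_sketch(W, I, tau=1.0, s=1.0, g=1.0, dt=0.01, T=1000):
    x = np.zeros((W.shape[0], T))
    for t in range(T - 1):
        phi_x = np.tanh(x[:, t])                                   # bounded transfer function
        dxdt = (-x[:, t] + s * phi_x + g * W.dot(phi_x) + I[:, t]) / tau
        x[:, t + 1] = x[:, t] + dt * dxdt                          # forward Euler step
    return x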
# Basic simulation parameters
# Simulation parameters sampled at 10ms
Tmax = 10000 # 100 seconds per block (10 trials per block, each trial lasts 4 seconds)
Tmaxrest = 60000 # 600 seconds of rest
nblocks = 20
# Network parameters
g = 1.0
s = 1.0
nsubjs = 30
ncommunities = 5
nodespernetwork = 50
totalnodes = ncommunities*nodespernetwork
localtasks = range(1,5)
flexhubtasks = range(5,9)
flexandlocalnets = range(9,13)
flexandlocalnet2 = range(13,17)
ntasks = len(localtasks) + len(flexhubtasks) + len(flexandlocalnets) + len(flexandlocalnet2)
Explanation: Simulate Tasks
4 task-related communities are 2-5
Simulations are run for 100 seconds
Sampling rate of 10 ms
global coupling parameter g = 1.0
local coupling parameter s = 1.0
Tasks last for 4 seconds, starting at the 6th second of every 10-second window (i.e., 6s, 16s, 26s), for a total of 10 trials in 100 seconds
Stimulate 1/4th of the nodes per community (so if 50 nodes per community, stimulate 13)
2.1 Run information transfer mapping with simulated neural data
Due to computational cost, simulations were run on a supercomputer and then copied over to the lab server. The code is identical to the code presented in the demo tutorial
3.0 Run data analyses on simulated neural data
End of explanation
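# Illustrative only (editor's addition): the task timing described above, written out
# as a stimulus matrix. Each 10 s trial window contains a 4 s boxcar starting at its
# 6th second (samples are 10 ms), and roughly 1/4 of the nodes of the stimulated
# community receive input. The actual stimuli are generated inside CompModel_v7.
def stim_sketch(community, nodespernetwork=50, totalnodes=250, Tmax=10000):
    stim = np.zeros((totalnodes, Tmax))
    nstim = int(np.ceil(nodespernetwork / 4.0))                    # e.g. 13 of 50 nodes
    nodes = np.arange(community * nodespernetwork, community * nodespernetwork + nstim)
    for trialstart in range(600, Tmax, 1000):                      # 6 s into every 10 s window
        stim[np.ix_(nodes, np.arange(trialstart, trialstart + 400))] = 1.0
    return stim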
nblocks = 20
## First four tasks are local tasks
localtasks = range(1,5)
localcommunity = 4 # local community to stimulate the local tasks
## Second four tasks are 'flexhub' tasks
flexhubtasks = range(5,9)
ntasks = len(flexhubtasks) + len(localtasks)
Tmax = 10000
Tmaxrest = 60000
# g = 1.0
# s = 1.0
autocorrfactor = 0
samplingrate = 1.0
TRLength=100
#### Set up subject networks ####
# Parameters for subject's networks
ncommunities = 5
innetwork_dsity = .35
outnetwork_dsity = .05
hubnetwork_dsity = .20
nodespernetwork = 50
totalnodes = nodespernetwork*ncommunities
##########
# Construct structural matrix
W = cm.generateStructuralNetwork(ncommunities=ncommunities, innetwork_dsity=innetwork_dsity,
outnetwork_dsity=outnetwork_dsity, hubnetwork_dsity=hubnetwork_dsity,
nodespernetwork=nodespernetwork, showplot=False)
# Construct synaptic matrix
G = cm.generateSynapticNetwork(W, showplot=False)
plt.figure()
# norm = MidpointNormalize(midpoint=0)
plt.imshow(G,origin='lower',interpolation='none')
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Synaptic Weight Matrix', y=1.04, fontsize=18)
plt.colorbar()
# plt.savefig('SingleSubj_SynapticWeightMatrix.pdf')
Explanation: 3.0.1a visualize synaptic matrix (sample subject)
End of explanation
fcmat_pearson = np.zeros((totalnodes,totalnodes,nsubjs))
fcmat_multreg = np.zeros((totalnodes,totalnodes,nsubjs))
for subj in range(nsubjs):
indir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript4_CompModelv7_resubmission/restfc/'
# Load in pearson FC matrix
filename1 = 'subj' + str(subj) + '_restfc_pearson.txt'
fcmat_pearson[:,:,subj] = np.loadtxt(indir + filename1, delimiter=',')
    # Load in multreg FC matrix
filename2 = 'subj' + str(subj) + '_restfc_multreg.txt'
fcmat_multreg[:,:,subj] = np.loadtxt(indir + filename2, delimiter=',')
plt.figure()
avg = np.mean(fcmat_pearson,axis=2)
np.fill_diagonal(avg,0)
plt.imshow(avg ,origin='lower',interpolation='none')#,vmin=0)
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Group Rest FC Matrix\nPearson FC', y=1.04, fontsize=18)
plt.colorbar()
#plt.savefig('Fig1a_CompModel5_GroupRestFC_Pearson.pdf')
plt.figure()
avg = np.mean(fcmat_multreg,axis=2)
np.fill_diagonal(avg,0)
# norm = MidpointNormalize(midpoint=0)
plt.imshow(avg ,origin='lower',interpolation='none')#,vmin=-.08,vmax=.08)
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Group Rest FC Matrix\nMultiple Regression FC', y=1.04, fontsize=18)
plt.colorbar()
# plt.savefig('Fig1b_CompModel5_GroupRestFC_MultReg.pdf')
plt.figure()
avg = np.mean(fcmat_multreg,axis=2)
np.fill_diagonal(avg,0)
norm = MidpointNormalize(midpoint=0)
plt.imshow(avg, origin='lower',interpolation='none', cmap='OrRd',vmin=0)
plt.xlabel('Regions')
plt.ylabel('Regions')
plt.title('Group Rest FC Matrix\nMultiple Regression FC', y=1.04, fontsize=18)
plt.colorbar()
Explanation: 3.0.1 Visualize actual estimated 'intrinsic FC's from Pearson FC and MultRegFC
End of explanation
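# For reference (editor's addition): the general idea behind multiple-regression FC,
# as opposed to Pearson FC. Each region's time series is regressed on all other
# regions' time series and the fitted betas form that region's row of the matrix.
# The matrices loaded above were computed with the multregressionconnectivity
# module, whose exact implementation may differ from this sketch.
def multreg_fc_sketch(ts):                                         # ts: regions x timepoints
    nregions = ts.shape[0]
    fc = np.zeros((nregions, nregions))
    for target in range(nregions):
        sources = np.setdiff1d(np.arange(nregions), [target])
        X = np.vstack((np.ones(ts.shape[1]), ts[sources, :])).T    # intercept + other regions
        betas = np.linalg.lstsq(X, ts[target, :])[0]
        fc[target, sources] = betas[1:]                            # drop the intercept
    return fc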
def setUpActFlowRSAMat(subj,net,fromnet,tasks,nblocks=20,fc='multreg'):
    """
    Retrieves actflow data from subject and puts it in an SVM ready format
    tasks input -- an array or list of task numbers corresponding to which set of tasks you want to analyze
    May want only local tasks or flexhub tasks
    """
nsamples = len(tasks)*nblocks
nfeatures = nodespernetwork # regions per network
svm_mat = np.zeros((nsamples,nfeatures))
labels = np.zeros((nsamples,))
indir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript4_CompModelv7_resubmission/actflow_predictions/'
indcount = 0
for task in tasks:
if fc=='multreg':
filename = 'subj'+str(subj)+'_task'+str(task)+'_net'+str(fromnet)+'tonet'+str(net)+'_multregFC.txt'
elif fc=='pearson':
filename = 'subj'+str(subj)+'_task'+str(task)+'_net'+str(fromnet)+'tonet'+str(net)+'_pearsonFC.txt'
actflowdat = np.loadtxt(indir+filename,delimiter=',')
svm_mat[indcount:(indcount+nblocks),:] = actflowdat.T
labels[indcount:(indcount+nblocks)] = task
indcount += nblocks
return svm_mat, labels
def setUpBetasRSAMat(subj,net,tasks,nblocks=20):
    """
    Retrieves task betas from subject and puts them in an SVM ready format
    tasks input -- an array or list of task numbers corresponding to which set of tasks you want to analyze
    May want only local tasks or flexhub tasks
    """
nfeatures = nodespernetwork # Number of regions for each network
nsamples = len(tasks)*nblocks
svm_mat = np.zeros((nsamples,nfeatures))
labels =np.zeros((nsamples,))
net_ind = np.arange(net*nodespernetwork,net*nodespernetwork+nodespernetwork)
indir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript4_CompModelv7_resubmission/task_betas/'
indcount = 0
for task in tasks:
filename = 'subj'+str(subj)+'_task'+str(task)+'_allblocks.txt'
betas = np.loadtxt(indir + filename, delimiter=',')
# Get relevant network data
svm_mat[indcount:(indcount+nblocks),:] = betas[net_ind,:].T # get all trials
labels[indcount:(indcount+nblocks)] = task
indcount += nblocks
return svm_mat, labels
Explanation: 3.1 Run Task information transfer mapping classifying each of the different rules (4-way classification)
3.1.1 Define some basic functions
End of explanation
def runActFlowRSA((subj,net,fromnet,tasks,nblocks,fc)):
    """
    Runs a leave-block-out CV style RSA (leaving 4 blocks out per CV)
    Trains on predicted ActFlow data
    Tests on real data (betas)
    """
actflow_mat, labels = setUpActFlowRSAMat(subj,net,fromnet,tasks,nblocks=nblocks,fc=fc)
real_mat, labels = setUpBetasRSAMat(subj,net,tasks,nblocks=nblocks)
# actflow_mat = preprocessing.scale(actflow_mat,axis=0)
# real_mat = preprocessing.scale(real_mat,axis=0)
ncvs = nblocks
indices = np.arange(actflow_mat.shape[0])
matched_rhos = []
mismatch_rhos = []
for cv in range(ncvs):
task_ind = {}
prototype = {}
# Construct prototypes of each task
for task in tasks:
# Get indices for this particular task
task_ind[task] = np.where(labels==task)[0]
# Decide which one is your 'comparison test trial' will be
test_ind = task_ind[task][cv]
# Find the indices for the prototypes
train_ind = np.setxor1d(test_ind,task_ind[task])
prototype[task] = np.mean(real_mat[train_ind,:],axis=0)
# Now compare each pair of tasks with the prototype
for task_a in tasks:
for task_b in tasks:
test_ind = task_ind[task_a][cv] # Compare task a
rho_tmp = stats.spearmanr(prototype[task_b].T,actflow_mat[test_ind,:].T)[0] # With task b
rho_tmp = np.arctanh(rho_tmp)
if task_a==task_b:
# Match!
matched_rhos.append(rho_tmp)
else:
mismatch_rhos.append(rho_tmp)
# Get averages
matched_rhos_avg = np.mean(matched_rhos)
mismatch_rhos_avg = np.mean(mismatch_rhos)
return matched_rhos_avg, mismatch_rhos_avg
Explanation: 3.2.1 Define some basic functions
End of explanation
# Empty variables for FlexHub task analysis
rho_mat_match_flexhub = np.zeros((ncommunities,ncommunities,nsubjs))
rho_mat_mismatch_flexhub = np.zeros((ncommunities,ncommunities,nsubjs))
for i in range(ncommunities):
for j in range(ncommunities):
if i==j: continue
fromnet = i
net = j
nblocks = nblocks
fc='multreg'
## First run on flexhub tasks
inputs = []
for subj in range(nsubjs): inputs.append((subj,net,fromnet,flexhubtasks,nblocks,fc))
# Run multiprocessing
pool = mp.Pool(processes=15)
results_flexhub = pool.map_async(runActFlowRSA, inputs).get()
pool.close()
pool.join()
## Get results
for subj in range(nsubjs):
match, mismatch = results_flexhub[subj]
rho_mat_match_flexhub[i,j,subj],rho_mat_mismatch_flexhub[i,j,subj] = match, mismatch
Explanation: 3.2.3 Run information transfer mapping analysis on subjects using MultReg FC
End of explanation
# Instantiate empty result matrices
tmat_flexhub = np.zeros((ncommunities,ncommunities))
pmat_flexhub = np.ones((ncommunities,ncommunities))
for i in range(ncommunities):
for j in range(ncommunities):
if i==j: continue
t, p = stats.ttest_rel(rho_mat_match_flexhub[i,j,:],rho_mat_mismatch_flexhub[i,j,:])
tmat_flexhub[i,j] = t
# One-sided p-value
if t > 0:
p = p/2.0
elif t < 0:
p = 1.0 - p/2.0
pmat_flexhub[i,j] = p
## FlexHub Tasks
# Run FDR correction on p-values (Don't get diagonal values)
qmat_flexhub = np.ones((ncommunities,ncommunities))
triu_ind = np.triu_indices(ncommunities,k=1)
tril_ind = np.tril_indices(ncommunities,k=-1)
all_ps = np.hstack((pmat_flexhub[triu_ind],pmat_flexhub[tril_ind]))
h, all_qs = mc.fdrcorrection0(all_ps)
# the first half of all qs belong to triu, second half belongs to tril
qmat_flexhub[triu_ind] = all_qs[:len(triu_ind[0])]
qmat_flexhub[tril_ind] = all_qs[len(tril_ind[0]):]
binary_mat_flexhub = qmat_flexhub < .05
rho_diff_mat_flexhub = np.mean(rho_mat_match_flexhub,axis=2) - np.mean(rho_mat_mismatch_flexhub,axis=2)
plt.figure()
threshold_acc = np.multiply(binary_mat_flexhub,tmat_flexhub)
norm = MidpointNormalize(midpoint=0)
plt.imshow(threshold_acc,norm=norm,origin='lower',interpolation='None',cmap='bwr')
plt.title('Network-to-Network Information Transfer\n(FDR-corrected)\nFlexHub Tasks -- MultReg FC',fontsize=16, y=1.02)
plt.colorbar()
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
# plt.savefig('SFig_CompModel_Network2Network_RSA_MultRegFC_HubNetStim_.pdf')
Explanation: 3.2.3 Statistical testing on results and plot
End of explanation
# Instantiate empty result matrices
tfwe_flexhub = np.zeros((ncommunities,ncommunities))
pfwe_flexhub = np.ones((ncommunities,ncommunities))
ite_flexhub = rho_mat_match_flexhub - rho_mat_mismatch_flexhub
indices = np.ones((ncommunities,ncommunities))
np.fill_diagonal(indices,0)
flatten_ind = np.where(indices==1)
## FlexHub Tasks
t, p = pt.permutationFWE(ite_flexhub[flatten_ind[0],flatten_ind[1],:], permutations=1000, nproc=15)
p = 1.0 - p
np.fill_diagonal(pfwe_flexhub,1.0)
tfwe_flexhub[flatten_ind[0],flatten_ind[1]] = t
pfwe_flexhub[flatten_ind[0],flatten_ind[1]] = p
binary_mat_flexhub = pfwe_flexhub < 0.05
# Print statistics to place in text of paper
# Compute average t-value of hub-network transfers
sig_ind = pfwe_flexhub<0.05
nonsig_ind = pfwe_flexhub>0.05
print 'Average significant T-value:', np.mean(tfwe_flexhub[sig_ind])
print 'Maximum significant P-value:', np.max(pfwe_flexhub[sig_ind])
print 'Average significant ITE:', np.mean(ite_flexhub[sig_ind])
print '\n'
print 'Average non-significant T-value:', np.mean(tfwe_flexhub[nonsig_ind])
print 'Average non-significant P-value:', np.mean(pfwe_flexhub[nonsig_ind])
print 'Average non-significant ITE:', np.mean(ite_flexhub[nonsig_ind])
ite_mat_flexhub = np.mean(rho_mat_match_flexhub,axis=2) - np.mean(rho_mat_mismatch_flexhub,axis=2)
plt.figure()
threshold_acc = np.multiply(binary_mat_flexhub,tmat_flexhub)
norm = MidpointNormalize(midpoint=0)
plt.imshow(threshold_acc,norm=norm,origin='lower',interpolation='None',cmap='bwr')
plt.title('Network-to-Network Information Transfer\n(FWE-corrected)\nFlexHub Tasks -- MultReg FC',fontsize=16, y=1.02)
plt.colorbar()
plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'])
plt.ylabel('Network ActFlow FROM',fontsize=15)
plt.xlabel('Network ActFlow TO',fontsize=15)
plt.tight_layout()
# plt.savefig('SFig_CompModel_Network2Network_RSA_MultRegFC_HubNetStim_.pdf')
Explanation: 3.3 Statistical testing on results and plot using FWE-correction (permutation testing)
End of explanation
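# For intuition (editor's addition): a bare-bones max-statistic permutation FWE
# correction, the general approach behind pt.permutationFWE used above (whose
# implementation may differ in detail). Signs of the paired differences are flipped
# at random; the maximum t-value across contrasts on each permutation builds the
# null distribution against which the observed t-values are compared.
def permutation_fwe_sketch(diffs, nperms=1000):                    # diffs: contrasts x subjects
    t_real = stats.ttest_1samp(diffs, 0.0, axis=1)[0]
    max_null = np.zeros(nperms)
    for p in range(nperms):
        flips = np.random.choice([-1, 1], size=diffs.shape[1])
        max_null[p] = np.max(stats.ttest_1samp(diffs * flips, 0.0, axis=1)[0])
    p_fwe = np.array([np.mean(max_null >= t) for t in t_real])
    return t_real, p_fwe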
outofnet_intrinsicFC = np.zeros((ncommunities,nsubjs))
indices = np.arange(nodespernetwork*ncommunities)
for subj in range(nsubjs):
for net in range(0,ncommunities):
# if net == hubnet: continue
net_ind = np.arange(nodespernetwork*net,nodespernetwork*net + nodespernetwork)
net_ind.shape = (len(net_ind),1)
outofnet_ind = np.setxor1d(net_ind,indices)
outofnet_ind.shape = (len(outofnet_ind),1)
outofnet_intrinsicFC[net,subj] = np.mean(fcmat_multreg[net_ind, outofnet_ind.T, subj])
fcmean = np.mean(outofnet_intrinsicFC,axis=1)
fcerr = np.std(outofnet_intrinsicFC,axis=1)/np.sqrt(nsubjs)
fig = plt.bar(range(len(fcmean)), fcmean, yerr=fcerr)
# fig = plt.ylim([.09,0.10])
fig = plt.xticks(np.arange(.4,5.4,1.0),['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'],fontsize=14)
fig = plt.ylabel('Multiple Regression FC', fontsize=16)
fig = plt.xlabel('Networks', fontsize=16)
fig = plt.title("Out-of-Network (BGC) Intrinsic FC", fontsize=18, y=1.02)
fig = plt.tight_layout()
# pp2 = PdfPages('Fig1_CompModel_OutNetIntrinsicFC.pdf')
# pp2.savefig(fig)
# pp2.close()
pvals = []
tvals = []
hubnet = 0
for net in range(ncommunities):
if hubnet == net: continue
t, p = stats.ttest_rel(outofnet_intrinsicFC[hubnet,:],outofnet_intrinsicFC[net,:])
tvals.append(t)
pvals.append(p)
qvals = mc.fdrcorrection0(pvals)[1]
for net in range(ncommunities):
if net == hubnet:
print 'Average out-of-network GBC of network', net, ':', round(np.mean(outofnet_intrinsicFC[net,:]),5)
else:
print 'Average out-of-network GBC of network', net, ':', round(np.mean(outofnet_intrinsicFC[net,:]),5), '\t t =', round(tvals[net-1],3), '\t p =', round(pvals[net-1],3), '\t q =', round(qvals[net-1],3)
print 'Average t-value for hub network greater than local networks:', np.mean(tvals)
Explanation: 4.0 Compute BGC using resting-state MultRegFC
End of explanation
contrast = np.zeros((ncommunities-1,outofnet_intrinsicFC.shape[1]))
hubnet = 0
i = 0
for net in range(ncommunities):
if hubnet == net: continue
t, p = stats.ttest_rel(outofnet_intrinsicFC[hubnet,:],outofnet_intrinsicFC[net,:])
contrast[i,:] = outofnet_intrinsicFC[hubnet,:] - outofnet_intrinsicFC[net,:]
i += 1
t, p_fwe = pt.permutationFWE(contrast, permutations=1000, nproc=15)
p_fwe = 1.0 - p_fwe
qvals = mc.fdrcorrection0(pvals)[1]
for net in range(ncommunities):
if net == hubnet:
print 'Average out-of-network GBC of network', net, ':', round(np.mean(outofnet_intrinsicFC[net,:]),5)
else:
print 'Average out-of-network GBC of network', net, ':', round(np.mean(outofnet_intrinsicFC[net,:]),5), '\t t =', round(t[net-1],3), '\t p =', round(p_fwe[net-1],3)
Explanation: Correct for FWE instead
End of explanation
<END_TASK> |
197 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
A simple python application
Step1: 1.- Introduction to Python dictionaries
In this practice we will extensively use Python Dictionaries, in this preliminary section we will learn to build, modify and use them.
Python dictionaries are hash tables (also known as associative memories, associative arrays), a data structure indexed by keys, which can be any immutable (hashable) type. It is best to think of a dictionary as an unordered set of (key, value) pairs, with the requirement that the keys are unique (within one dictionary). Let's see a simple example of dictionary creation and manipulation.
Step2: Note
Step3: EXERCISE
Step4: EXERCISE
Step5: 2.- Downloading a webpage
The urllib library allows to download any web content. Other tools for building crawling bots for massive downloading are also available (Scrapy).
We will use the urllib library to download a webpage.
EXERCISE
Step6: BeautifulSoup is a powerfull library to postprocess the HTML code, let's see one example, we will extract the text and remove the HTML markup.
Step7: 3.- The Google Translate webpage syntax
Let's explore the behaviour of the Google Translate webpage. Execute the following code, and open the resulting URL in a web browser (click on it, or copy-paste it to a web browser, if it does not automatically open). You should be able to read the translation text in that webpage. Try to identify the syntax of the query, and manually modify the destiny language, as well as the text to translate.
Step8: EXERCISE
Step9: 4.- Downloading the html code with the translation
We will construct a query link and download the page containing the desired translation.
EXERCISE
Step10: 5.- Postprocessing the downloaded webpage
We will analyze the html content to obtain the desired translation.
EXERCISE
Step11: EXERCISE
Step12: 6.- Adding more languages
We will increase the number of destiny languages. Modify the dictionary such that the following new languages are included
Step13: EXERCISE
Step14: 7.- Translation to all available languages
We will use the goslate library to get a full dictionary of languages.
EXERCISE
Step15: We will now call the translation function to translate a text to every one of the languages.
EXERCISE | <ASSISTANT_TASK:>
Python Code:
import goslate # pip install goslate
from bs4 import BeautifulSoup # pip install beautifulsoup4
import urllib2 # part of the Python 2 standard library (no pip install needed)
Explanation: A simple python application: universal translator
In this lab we will develop a universal text translator by using the Google translation webpage.
We will use the following libraries, please verify that they are available in your system:
* goslate: un-official API access to Google translate
* BeautifulSoup: Processing of html pages.
* urllib2: Downloading of webpages.
DISCLAIMER: This code is intended only for academic purposes, any professional or commercial use of the Google translate page should be conducted under the terms and conditions of the Google Translate API (https://cloud.google.com/translate/docs). The authors of this code are not responsible for any unauthorized use.
End of explanation
inventary_dict = {'milk': 23, 'coockies': 12, 'chocolate': 26, 'yogourt': 5}
print "This is the original dictionary:"
print inventary_dict
print " "
print "This is the value associated to 'milk':"
print inventary_dict['milk']
print " "
print "We add a new element to the dictionary:"
inventary_dict.update({'sugar': 103})
print inventary_dict
print " "
print "We increment the value of one of the elements:"
inventary_dict['coockies'] += 10
print inventary_dict
print " "
Explanation: 1.- Introduction to Python dictionaries
In this practice we will extensively use Python Dictionaries, in this preliminary section we will learn to build, modify and use them.
Python dictionaries are hash tables (also known as associative memories, associative arrays), a data structure indexed by keys, which can be any immutable (hashable) type. It is best to think of a dictionary as an unordered set of (key, value) pairs, with the requirement that the keys are unique (within one dictionary). Let's see a simple example of dictionary creation and manipulation.
End of explanation
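# A small illustrative aside (editor's addition, not part of the original exercise):
# keys must be unique, so assigning to an existing key overwrites its previous value.
sample_dict = {'milk': 1}
sample_dict['milk'] = 2   # same key again: the old value is replaced, not duplicated
print sample_dict         # {'milk': 2}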
keys = inventary_dict.keys()
print "These are the keys of the dictionary:"
print keys
print " "
values = inventary_dict.values()
print "These are the values of the dictionary:"
print values
print " "
print "The size of this dictionary is %d, and it stores the following elements:" % len(inventary_dict.keys())
for key in keys:
print key + ": " + str(inventary_dict[key])
Explanation: Note: Observe that the order in a dictionary is not relevant.
Dictionaries have two fundamental methods: keys and values
End of explanation
languages_dict = <COMPLETAR>
print "Vamos a traducir de %s a %s." % (languages_dict['es'], languages_dict['it'])
Explanation: EXERCISE : Define the following 'languages_dict' dictionary, that stores the languages our translator will work on. Note that both keys and values are strings:
<pre><code>
en: Inglés
zh: Chino
de: Alemán
it: Italiano
es: Español
</code></pre>
End of explanation
def view_codes(mydict):
<COMPLETAR>
view_codes(languages_dict)
Explanation: EXERCISE : Define a function 'view_codes' that prints all language codes in the dictionary, such that view_codes(languages_dict) produces:
<pre><code>
en: Inglés
zh: Chino
de: Alemán
it: Italiano
es: Español
</code></pre>
Note: dictionaries are not ordered structures, so the results may be presented in a different order.
End of explanation
agent = {'User-Agent':"Mozilla/4.0"}
url1 = "https://www.u-tad.com/conocenos/conoce-u-tad/"
request = urllib2.Request(url1, headers=agent)
page = urllib2.urlopen(request).read()
n_caracteres_descargados = <COMPLETAR>
print "La página descargada tiene %d caracteres." % n_caracteres_descargados
print "Estos son los primeros 1000 caracteres:"
print "=" * 100
print <COMPLETAR>
print "=" * 100
Explanation: 2.- Downloading a webpage
The urllib library allows you to download any web content. Other tools, such as Scrapy, are also available for building crawling bots for large-scale downloading.
We will use the urllib library to download a webpage.
EXERCISE: Complete the code below to compute the number of downloaded characters and print the first 1000.
End of explanation
bs = BeautifulSoup(page, "html5lib")
for script in bs(["script", "style"]):
script.extract()
text_utad = bs.get_text()
text_utad = ' '.join(text_utad.split())
print text_utad
Explanation: BeautifulSoup is a powerful library for postprocessing HTML code. Let's see one example: we will extract the text and remove the HTML markup.
End of explanation
url = "https://translate.google.com/m?hl=de&sl=auto&q=adiós+amigos"
print url
Explanation: 3.- The Google Translate webpage syntax
Let's explore the behaviour of the Google Translate webpage. Execute the following code, and open the resulting URL in a web browser (click on it, or copy-paste it to a web browser, if it does not automatically open). You should be able to read the translation text in that webpage. Try to identify the syntax of the query, and manually modify the destiny language, as well as the text to translate.
End of explanation
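# Optional aside (editor's addition): the query parameters can also be inspected
# programmatically. In this exercise hl is the destination language, sl the source
# language ('auto' = detect) and q the text to translate.
from urlparse import urlparse, parse_qs
print parse_qs(urlparse(url).query)   # e.g. {'hl': ['de'], 'sl': ['auto'], 'q': [...]}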
destiny_language = 'it'
my_text = "Hola a todos mis amigos"
def url_translate(destiny_language, text):
url = <COMPLETAR> % (destiny_language, "auto", text.replace(<COMPLETAR>))
return url
url = url_translate(destiny_language, my_text)
print url
Explanation: EXERCISE: Define a function that takes as argument the destiny language and the text to be translated and returns the url. Check the result by clicking on the printed link.
End of explanation
def get_html(lang, text):
agent = {'User-Agent':"Mozilla/4.0"}
url = <COMPLETAR>
request = urllib2.Request(url, headers=agent)
html = urllib2.urlopen(request).read()
return html
html = get_html(destiny_language, my_text)
n_caracteres_descargados = <COMPLETAR>
print "La página descargada tiene %d caracteres." % n_caracteres_descargados
print "=" * 100
print html
print "=" * 100
Explanation: 4.- Downloading the html code with the translation
We will construct a query link and download the page containing the desired translation.
EXERCISE : Write a function 'get_html' that takes as input the destiny language and the text to be translated and returns the html code of the page.
End of explanation
def translate(lang, text):
html = <COMPLETAR>
bs = BeautifulSoup(html, "html5lib")
translation =bs.findAll('div')[2].text
return translation
key = 'en'
print u"Traducción al " + unicode(languages_dict[key],'utf-8') + ":"
print translate(key, my_text)
Explanation: 5.- Postprocessing the downloaded webpage
We will analyze the html content to obtain the desired translation.
EXERCISE : Complete the function 'translate' that takes as input the destiny language and the text to be translated and returns the translation.
End of explanation
for <COMPLETAR>:
print u"Traducción al " + unicode(languages_dict[key],'utf-8') + ":"
print <COMPLETAR>
print " "
Explanation: EXERCISE : Use the function in the previous section to translate the text to all languages in the dict.
End of explanation
languages_dict.update(<COMPLETAR>)
languages_dict.<COMPLETAR>
languages_dict.<COMPLETAR>
languages_dict.<COMPLETAR>
languages_dict.<COMPLETAR>
languages_dict.<COMPLETAR>
languages_dict.<COMPLETAR>
view_codes(languages_dict)
Explanation: 6.- Adding more languages
We will increase the number of destiny languages. Modify the dictionary such that the following new languages are included:
<pre><code>
ru: Ruso
fr: Francés
hi: Hindi
ja: Japonés
eu: Vasco
gl: Gallego
ca: Catalán
</code></pre>
End of explanation
<COMPLETAR>
Explanation: EXERCISE : Translate the text to all languages in the dict.
End of explanation
gs = goslate.Goslate()
all_languages_dict = gs.get_languages()
view_codes(<COMPLETAR>)
Explanation: 7.- Translation to all available languages
We will use the goslate library to get a full dictionary of languages.
EXERCISE : Add the code to print the full list of codes and languages names.
The answer should be:
<pre><code>
gu: Gujarati
zh-TW: Chinese (Traditional)
gd: Scots Gaelic
ga: Irish
gl: Galician
lb: Luxembourgish
la: Latin
lo: Lao
(...)
</code></pre>
End of explanation
my_text = 'Ya hemos completado el curso introductorio'
for key in <COMPLETAR>:
print u"\nTraducción al " + <COMPLETAR> + ":"
print <COMPLETAR>
Explanation: We will now call the translation function to translate a text to every one of the languages.
EXERCISE : Write the code necessary to translate the sentence "Ya hemos completado el curso introductorio" .
The answer should be:
<pre><code>
Traducción al Gujarati:
અમે પહેલાથી જ પ્રારંભિક અભ્યાસક્રમ પૂર્ણ કર્યા
Traducción al Chinese (Traditional):
我們已經完成了入門課程
Traducción al Scots Gaelic:
Tha sinn air crěoch a chur air mar-thŕ a 'chiad chůrsa
Traducción al Irish:
Táimid tar éis i gcrích cheana féin ar an gcúrsa tosaigh
(...)
</code></pre>
End of explanation
<END_TASK> |
198 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Logistic Regression Training Mean Square Error
Step1: The class <code>plot_error_surfaces</code> is just to help you visualize the data space and the parameter space during training and has nothing to do with PyTorch.
Step2: Set the random seed
Step3: <!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Get Some Data</h2>
Create the <code>Data</code> class
Step4: Make <code>Data</code> object
Step5: <!--Empty Space for separating topics-->
<h2 id="Model_Cost">Create the Model and Total Loss Function (Cost)</h2>
Create a custom module for logistic regression
Step6: Create a logistic regression object and print the parameters
Step7: Create a <code>plot_error_surfaces</code> object to visualize the data space and the parameter space during training
Step8: Create <code>DataLoader</code> object, cost or criterion function and <code>optimizer</code>
Step9: <!--Empty Space for separating topics-->
<h2 id="BGD">Train the Model via Batch Gradient Descent</h2>
Train the model
Step10: Get the actual class of each sample and calculate the accuracy on the test data. | <ASSISTANT_TASK:>
Python Code:
# Import the libraries we need for this lab
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import torch
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
Explanation: <a href="http://cocl.us/pytorch_link_top">
<img src="https://cocl.us/Pytorch_top" width="750" alt="IBM 10TB Storage" />
</a>
<img src="https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Logistic Regression Training Mean Square Error</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will see what happens when you use the root mean square error cost or total loss function using random initialization for a parameter value.</p>
<ul>
<li><a href="#Makeup_Data">Make Some Data</a></li>
<li><a href="#Model_Cost">Create the Model and Cost Function the PyTorch way</a></li>
<li><a href="#BGD">Train the Model: Batch Gradient Descent</a></li>
</ul>
<p>Estimated Time Needed: <strong>30 min</strong></p>
<hr>
<h2>Preparation</h2>
We'll need the following libraries:
End of explanation
# Create class for plotting and the function for plotting
class plot_error_surfaces(object):
    # Constructor
def __init__(self, w_range, b_range, X, Y, n_samples = 30, go = True):
W = np.linspace(-w_range, w_range, n_samples)
B = np.linspace(-b_range, b_range, n_samples)
w, b = np.meshgrid(W, B)
Z = np.zeros((30, 30))
count1 = 0
self.y = Y.numpy()
self.x = X.numpy()
for w1, b1 in zip(w, b):
count2 = 0
for w2, b2 in zip(w1, b1):
Z[count1, count2] = np.mean((self.y - (1 / (1 + np.exp(-1 * (w2 * self.x + b2))))) ** 2)
count2 += 1
count1 += 1
self.Z = Z
self.w = w
self.b = b
self.W = []
self.B = []
self.LOSS = []
self.n = 0
if go == True:
plt.figure()
plt.figure(figsize = (7.5, 5))
plt.axes(projection = '3d').plot_surface(self.w, self.b, self.Z, rstride = 1, cstride = 1, cmap = 'viridis', edgecolor = 'none')
plt.title('Loss Surface')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
plt.figure()
plt.title('Loss Surface Contour')
plt.xlabel('w')
plt.ylabel('b')
plt.contour(self.w, self.b, self.Z)
plt.show()
# Setter
def set_para_loss(self, model, loss):
self.n = self.n + 1
self.W.append(list(model.parameters())[0].item())
self.B.append(list(model.parameters())[1].item())
self.LOSS.append(loss)
# Plot diagram
def final_plot(self):
ax = plt.axes(projection = '3d')
ax.plot_wireframe(self.w, self.b, self.Z)
ax.scatter(self.W, self.B, self.LOSS, c = 'r', marker = 'x', s = 200, alpha = 1)
plt.figure()
plt.contour(self.w, self.b, self.Z)
plt.scatter(self.W, self.B, c = 'r', marker = 'x')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
# Plot diagram
def plot_ps(self):
plt.subplot(121)
plt.ylim
plt.plot(self.x, self.y, 'ro', label = "training points")
plt.plot(self.x, self.W[-1] * self.x + self.B[-1], label = "estimated line")
plt.plot(self.x, 1 / (1 + np.exp(-1 * (self.W[-1] * self.x + self.B[-1]))), label = 'sigmoid')
plt.xlabel('x')
plt.ylabel('y')
plt.ylim((-0.1, 2))
plt.title('Data Space Iteration: ' + str(self.n))
plt.show()
plt.subplot(122)
plt.contour(self.w, self.b, self.Z)
plt.scatter(self.W, self.B, c = 'r', marker = 'x')
plt.title('Loss Surface Contour Iteration' + str(self.n))
plt.xlabel('w')
plt.ylabel('b')
# Plot the diagram
def PlotStuff(X, Y, model, epoch, leg = True):
plt.plot(X.numpy(), model(X).detach().numpy(), label = 'epoch ' + str(epoch))
plt.plot(X.numpy(), Y.numpy(), 'r')
if leg == True:
plt.legend()
else:
pass
Explanation: The class <code>plot_error_surfaces</code> is just to help you visualize the data space and the parameter space during training and has nothing to do with PyTorch.
End of explanation
# Set random seed
torch.manual_seed(0)
Explanation: Set the random seed:
End of explanation
# Create the data class
class Data(Dataset):
# Constructor
def __init__(self):
self.x = torch.arange(-1, 1, 0.1).view(-1, 1)
self.y = torch.zeros(self.x.shape[0], 1)
self.y[self.x[:, 0] > 0.2] = 1
self.len = self.x.shape[0]
# Getter
def __getitem__(self, index):
return self.x[index], self.y[index]
    # Get length
def __len__(self):
return self.len
Explanation: <!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Get Some Data</h2>
Create the <code>Data</code> class
End of explanation
# Create Data object
data_set = Data()
Explanation: Make <code>Data</code> object
End of explanation
# Create logistic_regression class
class logistic_regression(nn.Module):
    # Constructor
def __init__(self,n_inputs):
super(logistic_regression, self).__init__()
self.linear = nn.Linear(n_inputs, 1)
# Prediction
def forward(self, x):
yhat = torch.sigmoid(self.linear(x))
return yhat
Explanation: <!--Empty Space for separating topics-->
<h2 id="Model_Cost">Create the Model and Total Loss Function (Cost)</h2>
Create a custom module for logistic regression:
End of explanation
# Create model object and print parameter
model = logistic_regression(1)
print("The parameters: ", model.state_dict())
Explanation: Create a logistic regression object and print the parameters:
End of explanation
# Create plot_error_surfaces object
get_surface = plot_error_surfaces(15, 13, data_set[:][0], data_set[:][1], 30)
Explanation: Create a <code>plot_error_surfaces</code> object to visualize the data space and the parameter space during training:
End of explanation
# Create DataLoader, Cost Function, Optimizer
trainloader = DataLoader(dataset = data_set, batch_size = 3)
criterion_rms = nn.MSELoss()
learning_rate = 2
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate)
Explanation: Create <code>DataLoader</code> object, cost or criterion function and <code>optimizer</code>:
End of explanation
# Train the model
def train_model(epochs):
for epoch in range(epochs):
for x, y in trainloader:
yhat = model(x)
loss = criterion_rms(yhat, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
get_surface.set_para_loss(model, loss.tolist())
if epoch % 20 == 0:
get_surface.plot_ps()
train_model(100)
Explanation: <!--Empty Space for separating topics-->
<h2 id="BGD">Train the Model via Batch Gradient Descent</h2>
Train the model
End of explanation
# Make the Prediction
yhat = model(data_set.x)
label = yhat > 0.5
print("The accuracy: ", torch.mean((label == data_set.y.type(torch.ByteTensor)).type(torch.float)))
Explanation: Get the actual class of each sample and calculate the accuracy on the test data.
End of explanation
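# An illustrative aside (editor's addition, not part of the original lab): logistic
# regression is usually trained with binary cross-entropy rather than MSE; the lab
# deliberately uses nn.MSELoss to show its pitfalls. Comparing both criteria on the
# trained model's predictions makes the difference between the two losses visible.
criterion_bce = nn.BCELoss()
yhat_all = model(data_set.x)
print("MSE loss on the dataset: ", criterion_rms(yhat_all, data_set.y).item())
print("BCE loss on the dataset: ", criterion_bce(yhat_all, data_set.y).item())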
<END_TASK> |
199 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Python Libraries For Better Code Insights
Snoop - Never Use print Again
Step1: Snooping on execution
Step2: Snooping on referenced functions
Step3: pp - pretty print
Step4: Shortcut
Step5: How to use in Jupyter
Load extension with %load_ext snoop in a notebook cell, then use the cell magic %%snoop at the top of a notebook cell to trace that cell | <ASSISTANT_TASK:>
Python Code:
ROMAN = [
(1000, "M"),
( 900, "CM"),
( 500, "D"),
( 400, "CD"),
( 100, "C"),
( 90, "XC"),
( 50, "L"),
( 40, "XL"),
( 10, "X"),
( 9, "IX"),
( 5, "V"),
( 4, "IV"),
( 1, "I"),
]
def to_roman(number: int):
result = ""
for (arabic, roman) in ROMAN:
(factor, number) = divmod(number, arabic)
result += roman * factor
return result
print(to_roman(2021))
print(to_roman(8))
Explanation: Python Libraries For Better Code Insights
Snoop - Never Use print Again
End of explanation
import snoop
@snoop
def to_roman2(number: int):
result = ""
for (arabic, roman) in ROMAN:
(factor, number) = divmod(number, arabic)
result += roman * factor
return result
print(to_roman2(2021))
from statistics import stdev
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(f"numbers={numbers}: stdev={stdev(numbers)}")
Explanation: Snooping on execution
End of explanation
def mystddev(max: int) -> float:
my_numbers = list(range(max))
with snoop(depth=2):
return stdev(my_numbers)
print(mystddev(5))
from statistics import median
print(median(numbers) + 2 * stdev(numbers))
Explanation: Snooping on referenced functions
End of explanation
from snoop import pp
pp(pp(median(numbers)) + pp(2 * pp(stdev(numbers))))
Explanation: pp - pretty print
End of explanation
# print(median(numbers) + 2 * stdev(numbers))
pp.deep(lambda: median(numbers) + 2 * stdev(numbers))
Explanation: Shortcut: pp.deep + parameters-less lambda
End of explanation
users = {
'user1': { 'is_admin': True, 'email': 'one@exmple.com'},
'user2': { 'is_admin': True, 'phone': '281-555-5555' },
'user3': { 'is_admin': False, 'email': 'three@example.com' },
}
def email_user(*user_names) -> None:
global users
for user in user_names:
print("Emailing %s at %s", (user, users[user]['email']))
email_user('user1', 'user2')
Explanation: How to use in Jupyter
Load extension with %load_ext snoop in a notebook cell, then use the cell magic %%snoop at the top of a notebook cell to trace that cell:
better-exceptions - Better and Prettier Stack Traces
End of explanation
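# Editor's note (illustrative only): inside Jupyter the same tracing is available
# through the IPython magics mentioned above, e.g.
#
#     %load_ext snoop      # once per notebook
#     %%snoop              # first line of any cell you want traced
#
# better-exceptions is typically enabled outside Jupyter by setting the
# BETTER_EXCEPTIONS=1 environment variable before running a script; uncaught
# tracebacks (like the KeyError raised above for 'user2', which has no 'email'
# key) are then rendered with the values of the variables involved.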
<END_TASK> |