text_prompt (stringlengths 168 to 30.3k) | code_prompt (stringlengths 67 to 124k) |
---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can write some C code. The %%microblaze magic provides an environment where we can write the code; it takes a single argument, the Microblaze we wish to target. This first example simply adds two numbers together and returns the result.
Step2: The functions we defined in the magic are now available for us to interact with in Python as any other function.
Step3: Data Motion
Step4: Arrays can be passed in two different ways. If a type other than void is provided, the data will be copied to the Microblaze and, if non-const, copied back as well. Any iterable and modifiable object can be used as the argument in this case.
Step5: Finally, we can pass a void pointer, which allows the Microblaze to directly access the memory of the host processing system for transferring large quantities of data. In Python these blocks of memory should be allocated using the pynq.allocate function, and it is the programmer's responsibility to make sure that the Python and C code agree on the types used.
Step6: Debug printing
Step7: Long running processes
Step8: We can now start the counter going.
Step9: And interrogate its current value
Step10: There are some limitations to using pyprintf inside a persistent function: the output will not be displayed until a subsequent function is called, and if the buffer fills in the meantime the process can deadlock.
Step11: We can now create instances using our create function and call the add method on the returned object. The underlying value of the typedef instance can be retrieved from the .val attribute.
| <ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay('base.bit')
%%microblaze base.PMODA
int add(int a, int b) {
return a + b;
}
add(4,6)
%%microblaze base.PMODA
float arg_passing(float a, char b, unsigned int c) {
return a + b + c;
}
arg_passing(1, 2, 3)
%%microblaze base.PMODA
int culm_sum(int* val, int len) {
int sum = 0;
for (int i = 0; i < len; ++i) {
sum += val[i];
val[i] = sum;
}
return sum;
}
numbers = [i for i in range(10)]
culm_sum(numbers, len(numbers))
print(numbers)
%%microblaze base.PMODA
long long big_sum(void* data, int len) {
int* int_data = (int*)data;
long long sum = 0;
for (int i = 0; i < len; ++i) {
sum += int_data[i];
}
return sum;
}
from pynq import allocate
buffer = allocate(shape=(1024 * 1024), dtype='i4')
buffer[:] = range(1024*1024)
big_sum(buffer, len(buffer))
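# Hedged sketch (not part of the original notebook): sanity-check the Microblaze
# result against a host-side sum. This assumes the pynq.allocate buffer behaves
# like a numpy array (it is a numpy-backed buffer); the cast to int64 avoids overflow.
import numpy as np
host_sum = int(np.asarray(buffer, dtype='int64').sum())
print(host_sum == big_sum(buffer, len(buffer)))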
%%microblaze base.PMODA
#include <pyprintf.h>
int debug_sum(int a, int b) {
int sum = a + b;
pyprintf("Adding %d and %d to get %d\n", a, b, sum);
return sum;
}
debug_sum(1,2)
%%microblaze base.PMODA
#include <yield.h>
static int counter = 0;
void start_counter() {
while (1) {
++counter;
yield();
}
}
int counter_value() {
return counter;
}
start_counter()
counter_value()
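# Hedged sketch: poll the persistent counter a few times to confirm it keeps
# incrementing in the background; only time.sleep and the counter_value()
# function defined above are used.
import time
for _ in range(3):
    time.sleep(1)
    print(counter_value())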
%%microblaze base.PMODA
typedef unsigned int mod_int;
mod_int mod_int_create(int val) { return val % 53; }
mod_int mod_int_add(mod_int lhs, int rhs) { return (lhs + rhs) % 53; }
a = mod_int_create(63)
b = a.add(4)
print(b)
print(b.val)
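# Hedged sketch: assuming the value returned by add() is wrapped as a mod_int in the
# same way as the one from create(), calls can be chained. Starting from 63 % 53 = 10,
# adding 4 and then 50 should print 11.
c = b.add(50)
print(c.val)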
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Key Phrase Extraction - Keybert
Step2: Sentiment Analysis - Huggingface
Step3: Zero Shot Classification - Huggingface
Step4: Few Shot Classification - Huggingface
Step5: Named Entity Extraction - Huggingface
Step6: Question Answering
| <ASSISTANT_TASK:>
Python Code:
!pip install transformers
!pip install torch
!pip install keybert
from keybert import KeyBERT
kw_model = KeyBERT()
document='''
My electricity isn't working, and I've not had any power for five hours, can you send someone to fix it please.
'''
kw_model.extract_keywords(document, keyphrase_ngram_range=(1, 3), stop_words='english')
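# Hedged sketch: extract_keywords returns (phrase, score) tuples, and KeyBERT's
# optional top_n argument (assumed available, as in recent KeyBERT releases) limits
# how many are returned.
for phrase, score in kw_model.extract_keywords(document,
                                               keyphrase_ngram_range=(1, 3),
                                               stop_words='english',
                                               top_n=3):
    print(phrase, round(score, 3))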
from transformers import pipeline
sentiment_pipeline = pipeline("text-classification", model="bhadresh-savani/distilbert-base-uncased-emotion")
data = ["The product is amazing, I really love it", "I was really frustrated they didn't get back to me"]
sentiment_pipeline(data)
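# Hedged sketch: each result is a dict with 'label' and 'score' keys, so the
# predictions can be paired back with the input sentences.
for text, result in zip(data, sentiment_pipeline(data)):
    print(text, '->', result['label'], round(result['score'], 3))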
from transformers import pipeline
classification_pipeline = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
candidate_labels = ["renewable", "politics", "emissions", "temperature","emergency","advertisement"]
sentence = ["The smoke from the car exhaust was unbearable"]
classification_pipeline(sentence, candidate_labels)
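# Hedged sketch: with a single string the zero-shot pipeline returns a dict whose
# 'labels' and 'scores' are sorted from most to least likely, so the top label can
# be read off directly.
result = classification_pipeline("The smoke from the car exhaust was unbearable",
                                 candidate_labels)
print(result['labels'][0], round(result['scores'][0], 3))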
from transformers import pipeline
examples='''
sentence: "My car needs a service"
intent: repair
###
sentence: "My car is dirty"
intent: valet
###
sentence: "I want to sell my car"
intent: sale
###
sentence: "My cars engine is making a funny noise"
intent:'''
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
generator(examples, do_sample=True, max_new_tokens=3, temperature=0.1, end_sequence="###")
from transformers import pipeline
classification_pipeline = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
sentence = ["Pete wanted to go to London to present NLP stuff for ECI"]
classification_pipeline(sentence)
from transformers import pipeline
classification_pipeline = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
document="A reusable launch system (RLS, or reusable launch vehicle RLV) is a launch system which is capable of launching a payload into space more than once."
question = ["Whas it an RLS"]
classification_pipeline(question=question, context=document)
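# Hedged sketch: with a single question string the QA pipeline returns a dict with
# 'answer', 'score', 'start' and 'end' keys. The question below is a hypothetical
# illustration, not taken from the original notebook.
result = classification_pipeline(question="What is an RLS capable of?", context=document)
print(result['answer'], round(result['score'], 3))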
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare dataset
Step2: Degree-days work on daily data, so we want to resample our dataset to the daily mean.
Step3: Compute degree days
Step4: Plot weekly degree days
| <ASSISTANT_TASK:>
Python Code:
import opengrid as og
import pandas as pd
plt = og.plot_style()
# A dataset with hourly weather variables for Brussels (2016) is included in OpenGrid.
# Note that the data contains also the last week of 2015.
# This is required in order for the degree-days computation to work for the first days of January.
dfw = og.datasets.get('weather_2016_hour')
# We only need temperature
temp = dfw.temperature
temp = temp.resample('D').mean()
fig = temp.plot()
# set base temperatures
heating_base_temperatures = range(8, 18, 2) # 8, 10, 12, 14, 16
cooling_base_temperatures = range(16, 26, 2) # 16, 18, ...
# compute degree days for each of the base temperatures and return them in a dataframe called DD
DD = og.library.weather.compute_degree_days(
ts=temp,
heating_base_temperatures=heating_base_temperatures,
cooling_base_temperatures=cooling_base_temperatures
)
# restrict the dataframe to 2016
DD = DD.loc['2016']
DD_weekly = DD.resample('W').sum()
fig = DD_weekly.filter(like='HDD').plot()
fig.set_title('Weekly heating degree-days with different base temperatures')
plt.show()
fig = DD_weekly.filter(like='CDD').plot()
fig.set_title('Weekly cooling degree-days with different base temperatures')
plt.show()
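# Hedged sketch: the same daily degree-day frame can be aggregated to monthly sums,
# which is often more convenient for comparing against monthly energy bills.
DD_monthly = DD.resample('MS').sum()
fig = DD_monthly.filter(like='HDD').plot(kind='bar')
fig.set_title('Monthly heating degree-days with different base temperatures')
plt.show()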
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the arm library-- see the Chapter 3.1 notebook if you need help.
Step2: Display a regression line as a function of one input variable, Pg 42
Step3: alternatively
Step4: Displaying two fitted regression lines, Pg 43
Step5: alternative sequence of commands
Step6: model with interaction
Step7: Displaying uncertainty in the fitted regression, Pg 43
Step8: Replicating Figure 3.10, Pg 44
Step9: alternatively
Step10: Displaying using one plot for each input variable, Pg 44
Step11: Replicating Figure 3.11, Pg 45
| <ASSISTANT_TASK:>
Python Code:
%%R
# I had to import foreign to get access to read.dta
library("foreign")
kidiq <- read.dta("../../ARM_Data/child.iq/kidiq.dta")
# I won't attach kidiq-- i generally don't attach to avoid confusion(s)
#attach(kidiq)
%%R
library("arm")
%%R
fit.2 <- lm(kidiq$kid_score ~ kidiq$mom_iq)
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score")
curve(coef(fit.2)[1] + coef(fit.2)[2]*x, add=TRUE)
%%R
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score")
curve(cbind(1,x) %*% coef(fit.2), add=TRUE)
%%R
# fit, no interactions
fit.3 <- lm(kidiq$kid_score ~ kidiq$mom_hs + kidiq$mom_iq)
# define colors
colors <- ifelse(kidiq$mom_hs==1, "black", "gray")
# plot points
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
col=colors, pch=20)
# plot fits, using matrix notation
curve(cbind(1, 1, x) %*% coef(fit.3), add=TRUE, col="black")
curve(cbind(1, 0, x) %*% coef(fit.3), add=TRUE, col="gray")
%%R
# set axes, not points
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
type="n")
# add points, with appropriate colors
points(kidiq$mom_iq[kidiq$mom_hs==1], kidiq$kid_score[kidiq$mom_hs==1],
pch=20, col="black")
points(kidiq$mom_iq[kidiq$mom_hs==0], kidiq$kid_score[kidiq$mom_hs==0],
pch=20, col="gray")
# add fits, using matrix notation
curve(cbind(1, 1, x) %*% coef(fit.3), add=TRUE, col="black")
curve(cbind(1, 0, x) %*% coef(fit.3), add=TRUE, col="gray")
%%R
# fit with interactions
fit.4 <- lm(kidiq$kid_score ~ kidiq$mom_hs + kidiq$mom_iq + kidiq$mom_hs:kidiq$mom_iq)
# setup colors
colors <- ifelse(kidiq$mom_hs==1, "black", "gray")
# plot points using colors
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
col=colors, pch=20)
# add fits using matrix notation
curve(cbind(1, 1, x, 1*x) %*% coef(fit.4), add=TRUE, col="black")
curve(cbind(1, 0, x, 0*x) %*% coef(fit.4), add=TRUE, col="gray")
%%R
fit.2 <- lm(kidiq$kid_score ~ kidiq$mom_iq)
display(fit.2)
%%R
# generate samples of fit parameters, reflecting uncertainty
# 100 sets of parameters generated
fit.2.sim <- sim(fit.2)
# plot points
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
pch=20)
# plot lines with sampled parameters, use all 100
for (i in 1:100) {
# $ access gave errors, use coef()
curve(coef(fit.2.sim)[i,1] + coef(fit.2.sim)[i,2]*x, add=TRUE,col="gray")
}
# add best fit in red
curve(coef(fit.2)[1] + coef(fit.2)[2]*x, add=TRUE, col="red")
%%R
# plot points
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
pch=20)
# define function to plot lines from samples of parameters, reflecting
# uncertainty
Oneline <- function(beta) {
curve(beta[1]+beta[2]*x, add=TRUE, col="gray")
}
# apply the function
# again, change $ access to coef()
apply(coef(fit.2.sim), 1, Oneline)
curve(coef(fit.2)[1] + coef(fit.2)[2]*x, add=TRUE, col="red")
%%R
# fit, no interactions
fit.3 <- lm(kidiq$kid_score ~ kidiq$mom_hs + kidiq$mom_iq)
# get estimate coefficients
beta.hat <- coef(fit.3)
# sample coefficients to reflect uncertainty
beta.sim <- coef(sim(fit.3))
%%R
kidscore.jitter <- jitter(kidiq$kid_score)
%%R
jitter.binary <- function(a, jitt=.05){
ifelse (a==0, runif (length(a), 0, jitt), runif (length(a), 1-jitt, 1))
}
jitter.mom_hs <- jitter.binary(kidiq$mom_hs)
%%R -w 780 -h 480 -u px
par(mfrow=c(1,2))
# PLOT 1
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
pch=20, xaxt="n", yaxt="n")
axis(1, c(80,100,120,140))
axis(2, c(20,60,100,140))
# plot samples to reflect uncertainty-- use all 100
for (i in 1:100) {
curve(cbind (1, mean(kidiq$mom_hs), x) %*% beta.sim[i,], lwd=.5, col="gray",
add=TRUE)
}
# add best fit line
curve(cbind (1, mean(kidiq$mom_hs), x) %*% beta.hat, col="black", add=TRUE)
# PLOT 2
plot(jitter.mom_hs, kidscore.jitter,
xlab="Mother completed high school", ylab="Child test score",
pch=20, xaxt="n", yaxt="n")
axis(1, seq(0,1))
axis(2, c(0,50,100,150))
# plot samples to reflect uncertainty-- use all 100
for (i in 1:100) {
curve(cbind (1, x, mean(kidiq$mom_iq)) %*% beta.sim[i,], lwd=.5, col="gray",
add=TRUE)
}
# add best fit line
curve(cbind (1, x, mean(kidiq$mom_iq)) %*% beta.hat, col="black", add=TRUE)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Learning Embeddings with SeqToSeq
Step2: We need to define the "alphabet" for our SeqToSeq model, the list of all tokens that can appear in sequences. (It's also possible for input and output sequences to have different alphabets, but since we're training it as an autoencoder, they're identical in this case.) Make a list of every character that appears in any training sequence.
Step3: Create the model and define the optimization method to use. In this case, learning works much better if we gradually decrease the learning rate. We use an ExponentialDecay to multiply the learning rate by 0.9 after each epoch.
Step4: Let's train it! The input to fit_sequences() is a generator that produces input/output pairs. On a good GPU, this should take a few hours or less.
Step5: Let's see how well it works as an autoencoder. We'll run the first 500 molecules from the validation set through it, and see how many of them are exactly reproduced.
Step6: Now we'll trying using the encoder as a way to generate molecular fingerprints. We compute the embedding vectors for all molecules in the training and validation datasets, and create new datasets that have those as their feature vectors. The amount of data is small enough that we can just store everything in memory.
Step7: For classification, we'll use a simple fully connected network with one hidden layer.
Step8: Find out how well it worked. Compute the ROC AUC for the training and validation datasets.
| <ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
import deepchem
deepchem.__version__
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_muv(split='stratified')
train_dataset, valid_dataset, test_dataset = datasets
train_smiles = train_dataset.ids
valid_smiles = valid_dataset.ids
tokens = set()
for s in train_smiles:
tokens = tokens.union(set(c for c in s))
tokens = sorted(list(tokens))
from deepchem.models.optimizers import Adam, ExponentialDecay
max_length = max(len(s) for s in train_smiles)
batch_size = 100
batches_per_epoch = len(train_smiles)/batch_size
model = dc.models.SeqToSeq(tokens,
tokens,
max_length,
encoder_layers=2,
decoder_layers=2,
embedding_dimension=256,
model_dir='fingerprint',
batch_size=batch_size,
learning_rate=ExponentialDecay(0.001, 0.9, batches_per_epoch))
def generate_sequences(epochs):
for i in range(epochs):
for s in train_smiles:
yield (s, s)
model.fit_sequences(generate_sequences(40))
predicted = model.predict_from_sequences(valid_smiles[:500])
count = 0
for s,p in zip(valid_smiles[:500], predicted):
if ''.join(p) == s:
count += 1
print('reproduced', count, 'of 500 validation SMILES strings')
import numpy as np
train_embeddings = model.predict_embeddings(train_smiles)
train_embeddings_dataset = dc.data.NumpyDataset(train_embeddings,
train_dataset.y,
train_dataset.w.astype(np.float32),
train_dataset.ids)
valid_embeddings = model.predict_embeddings(valid_smiles)
valid_embeddings_dataset = dc.data.NumpyDataset(valid_embeddings,
valid_dataset.y,
valid_dataset.w.astype(np.float32),
valid_dataset.ids)
classifier = dc.models.MultitaskClassifier(n_tasks=len(tasks),
n_features=256,
layer_sizes=[512])
classifier.fit(train_embeddings_dataset, nb_epoch=10)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean, mode="classification")
train_score = classifier.evaluate(train_embeddings_dataset, [metric], transformers)
valid_score = classifier.evaluate(valid_embeddings_dataset, [metric], transformers)
print('Training set ROC AUC:', train_score)
print('Validation set ROC AUC:', valid_score)
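# Hedged sketch: the encoder embeddings can also be used for a quick similarity
# search; here we find the training molecule whose embedding is closest (by
# Euclidean distance) to the first validation molecule.
query = valid_embeddings[0]
distances = np.linalg.norm(train_embeddings - query, axis=1)
print('query SMILES:  ', valid_smiles[0])
print('nearest train: ', train_smiles[int(np.argmin(distances))])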
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Most objects knows how to show() themselves
Step2: Most things are list-like
Step3: Since a directory is a list of files, it makes sense that the Python slice syntax [5:10] works here as well
Step4: And a text file is really just a list of lines, so
Step5: NB
Step6: Other useful things to do with directories/lists of files
Step7: And calling .images on a directory returns a list of images, for which we can, of course, render thumbnails
Step8: Other such "list of files by type" attributes are .fits, .tables, and .dirs
Step9: And the show_all() method will call show() on every file object in the list. This is useful if you want to render a bunch of objects with the same parameters
Step10: Accessing a single file by name
Step11: If you want to get at one specific file, using dd(name_or_pattern)[0] becomes a hassle. Filelists therefore support a direct [name_or_pattern] operation which always returns a single file object. If name_or_pattern matches multiple files, only the first one is returned (but radiopadre will show you a transient warning message).
Step12: Working with text files
Step13: "Watching" text files
Step14: Running shell commands
Step15: Working with FITS files
Step16: With multiple FITS files, it's possible to load all of them into JS9, and use the "<" and ">" keys to switch between images. Use the "JS9 all" button to do this
Step17: There's a shortcut for doing this directly -- just call .js9() on a list of FITS files (note that "collective" functions such as .thumbs() and .js9() will only work on homogeneous filelists, i.e. lists of FITS files. Don't try calling them on a list containing a mix of files -- it won't work!)
Step18: The .header attribute of a FITS file object returns the FITS header, in the same kind of object (list-of-lines) as a text file. So all the tricks we did on text files above still apply
Step19: If you want to read in data from the FITS file, the .fitsobj attribute returns a PrimaryHDU object, just like astropy.io.fits.open(filename) would
Step20: Working with CASA tables
Step21: With optional arguments to .show(), you can render just a subset of rows (given as start_row, nrows), and a subset of columns, taking a slice through an array column. The below tells radiopadre to render the first 10 rows, taking the column TIME in its entirety, and taking a [32
Step22: If you want to render all columns with a common slice, use the optional _ argument (we saw this above). The given slice will be applied to all columns as much as possible (or at least to those that match the shape)
Step23: The .table attribute returns a casacore table object with which you can do all the normal casacore table operations
Step24: But if you want to quickly read data from a table, radiopadre provides some fancier methods. For example, subtables of the table are available as a .SUBTABLE_NAME attribute. This gives another table object, with all the functions above available
Step25: Accessing table columns
Step26: ...or else apply a numpy-style array index with []
Step27: Another useful feature is creating a masked array from a combination of a column and FLAG/FLAG_ROW. Append _F to the column name to get a masked array
Step28: So combining the above, here's how to compute the UVW in wavelengths of all baselines to antenna 1, and make a uv-coverage plot of that subset of baselines
Step29: The ls() function
Step30: You can also use the "R" switch for a recursive directory listing
Step31: Or give a filename to get an object representing that one file
Step32: Om the same principle, give a subdirectory name to get a directory object
Step33: One thing to note is that ls() (i.e. with no patterns) doesn't necessarily list all files. The files included by default are governed by radiopadre settings. Below we'll see how to change those.
Step34: Using "with" to change settings temporarily
Step35: Here's how we can render FITS images with different settings, without changing the global settings. Whatever we set in with only applies in the body of the with statement. In this case it is particularly useful, as it will also apply to the JS9 displays by default
| <ASSISTANT_TASK:>
Python Code:
from radiopadre import ls, settings
dd = ls() # calls radiopadre.ls() to get a directory listing, assigns this to dd
dd # standard notebook feature: the result of the last expression on the cell is rendered in HTML
dd.show()
print "Calling .show() on an object renders it in HTML anyway, same as if it was the last statement in the cell"
images_subdir = dd[0]
demo_ms = dd[1]
fits_image = dd[2]
log_file = dd[-1] # last file in directory... consistent with Python list syntax
images_subdir.show()
demo_ms.show(_=(32,0)) # _ selects channels/correlations... more detail later
fits_image.show()
log_file.show()
# be prepared for a lot of output below... scroll through it
images_subdir[5:10]
sub_ms = demo_ms[5:10] # gives us a table containing rows 5 through 9 of the MS
sub_ms.show(_=(32,0)) # _ selects channels/correlations... more detail later
log_file[-10:] # extract last ten lines and show them
png_files = dd("*.png") # on directories, () works like a shell pattern
png_files
log_file("Gain plots") # on text files, () works like grep
demo_ms("ANTENNA1==1").show(_=(32,0)) # on tables, () does a TaQL query
png_files.thumbs() # for PNG images, these are nice and clickable!
images_subdir.images.thumbs()
dd.fits.show()
dd.tables.show()
dd.dirs.show()
dd.fits.thumbs(vmin=-1e-4, vmax=0.01) # and FITS files also know how to make themselves a thumbnail
# note that thumbs() takes optional arguments just like show()
# note the difference: dd.fits selects all files of type FITS, dd("*fits") selects all files matching "*fits".
# In our case this happens to be one and the same thing, but it doesn't have to be
dd("*fits").show_all(vmin=0, vmax=1e-2, colormap='hot')
# show_all() passes all its arguments to the show() method of each file.
dirties = dd("j0839-5417_2-MFS-dirty.fits")
print "This is a list:", type(dirties), len(dirties) # this is a list even though we only specified one file
print "This is a single file:", type(dirties[0]) # so we have to use [0] to get at the FITS file itself
# Note that the summary attribute returns a short summary of any radiopadre object (as text or HTML).
# You can show() or print it
print "This is a summary of the list:",dirties.summary
print "And now in HTML:"
dirties.summary.show()
print "This is a summary of the file:",dirties[0].summary
print "And now in HTML:"
dirties[0].summary.show()
dirty_image = dd["*fits"] # matches 2 files. if you re-execute this with Ctrl+Enter, you'll see a warning
print type(dirty_image)
dirty_image = dd["*dirty*fits"] # this will match just the one file
dirty_image.show()
log_file
log_file.head(5) # same as log_file.show(head=5). Number is optional -- default is 10
log_file.tail(5) # same as log_file.show(tail=5)
log_file.full() # same as log_file.show(full=True). Use the scrollbar next to the cell output.
log_file("Gain") # same as log_file.grep("Gain") or log_file.show(grep="Gain")
# and of course all objects are just "lists of lines", so the normal list slicing syntax works
log_file("Gain")[10:20].show()
log_file("Gain")[-1]
log_file.watch(head=0, tail=10)
dd.sh("df -h")
dd.sh("df -h")("/boot")
dirty_image.summary.show()
dirty_image.js9()
dd("*fits")
# If you're wondering how to tell JS9 to start with specific scale settings, use the "with settings" trick
# shown here. It will be explained below...
with settings.fits(vmin=-1e-4, vmax=0.01):
dd("*fits").js9()
dirty_image.header
dirty_image.header("CDELT*")
dirty_image.header.full()
dirty_image.fitsobj
demo_ms
demo_ms.show(0,10,TIME=(),DATA=(slice(32,34),None))
demo_ms.show(0, 10, _=(32,0)) # selects channel 32, correlation 0 from all 2D array columns. Doesn't apply to
# other types of columns
print type(demo_ms.table)
demo_ms.ANTENNA
## and .subtables gives you a list of all the subtables
for subtable in demo_ms.subtables:
subtable.show()
data = demo_ms.DATA(0,5)
print data.shape
data
demo_ms.DATA[0:10,:,0] # read rows 0~9, correlation 0
demo_ms.DATA_F[0,:,0]
pylab.plot(demo_ms.DATA[0,:,0],'+b')
pylab.plot(demo_ms.DATA_F[0,:,0],'xr')
# of course all of these things work together
demo_ms("ANTENNA1==1 && ANTENNA2==3").DATA_F[:20,32:64,:].shape
demo_ms.UVW()
import numpy as np
freqs = demo_ms.SPECTRAL_WINDOW.CHAN_FREQ(0, 1) # read frequencies for spw 0
print freqs
subset = demo_ms("ANTENNA1 == 1")
uvw_lambda = subset.UVW()[np.newaxis,:,:]*3e+8/freqs[0,:,np.newaxis,np.newaxis]
print uvw_lambda.shape
import pylab
pylab.plot(uvw_lambda[:,:,0].flatten(), uvw_lambda[:,:,1].flatten(), '.')
ls("*txt -rt") # give *txt files in reverse order of modification time
logs = ls("*txt -rt") # of course this just returns a list-of-files object
logs
ls("*png -R")
image = ls("1525170187-1_meqtrees-gjones_plots-chan.png")
image
images_dir = ls("images")
images_dir
settings # same as settings.show(), if it's the last expression in the cell
# and the various sections will also render themselves
settings.files
# changing settings is as easy as
settings.files.include = "*png"
# the new settings apply from that point onwards, so you probably want to do this at the top of a notebook
ls()
# from now on, only "*png" files will be listed. Unless you override this by an explicit pattern to ls(),
# e.g. in this case "*" overrides settings.files.include:
ls("*")
settings.fits
with settings.fits(vmin=1e-6, vmax=1, colormap='hot', scale='log'):
ls("*fits").show() # this shows a list of FITS files
ls("*fits").show_all() # and this calls show() on every FITS file
# observe that the global settings haven't changed:
settings.fits
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate fake dataset
Step2: Hyperparameters
Step3: Visualize training sequences
Step4: The model definition
Step5: <div style="text-align
Step6: prepare training dataset
Step7: Instantiate the model
Step8: Initialize Tensorflow session
Step9: The training loop
Step10: <a name="benchmark"></a>
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import utils_datagen
import utils_display
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version: " + tf.__version__)
DATA_SEQ_LEN = 1024*128
data = np.concatenate([utils_datagen.create_time_series(waveform, DATA_SEQ_LEN) for waveform in utils_datagen.Waveforms])
utils_display.picture_this_1(data, DATA_SEQ_LEN)
NB_EPOCHS = 10 # number of times the data is repeated during training
RNN_CELLSIZE = 32 # size of the RNN cells
SEQLEN = 16 # unrolled sequence length
BATCHSIZE = 32 # mini-batch size
utils_display.picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences
# three simplistic predictive models: can you beat them?
def simplistic_models(X):
# "random" model
Yrnd = tf.random_uniform([tf.shape(X)[0]], -2.0, 2.0) # tf.shape(X)[0] is the batch size
# "same as last" model
Ysal = X[:,-1]
# "trend from last two" model
Ytfl = X[:,-1] + (X[:,-1] - X[:,-2])
return Yrnd, Ysal, Ytfl
# linear model (RMSE: 0.36, with shuffling: 0.17)
def linear_model(X):
Yout = tf.layers.dense(X, 1) # output shape [BATCHSIZE, 1]
return Yout
# 2-layer dense model (RMSE: 0.38, with shuffling: 0.15-0.18)
def DNN_model(X):
Y = tf.layers.dense(X, SEQLEN//2, activation=tf.nn.relu)
Yout = tf.layers.dense(Y, 1, activation=None) # output shape [BATCHSIZE, 1]
return Yout
# convolutional (RMSE: 0.31, with shuffling: 0.16)
def CNN_model(X):
X = tf.expand_dims(X, axis=2) # [BATCHSIZE, SEQLEN, 1] is necessary for conv model
Y = tf.layers.conv1d(X, filters=8, kernel_size=4, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.conv1d(Y, filters=16, kernel_size=3, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.conv1d(Y, filters=8, kernel_size=1, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//2, 8]
Y = tf.layers.conv1d(Y, filters=8, kernel_size=3, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN//2, 8]
Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//4, 8]
# mis-using a conv layer as linear regression :-)
Yout = tf.layers.conv1d(Y, filters=1, kernel_size=SEQLEN//4, activation=None, padding="valid") # output shape [BATCHSIZE, 1, 1]
Yout = tf.squeeze(Yout, axis=-1) # output shape [BATCHSIZE, 1]
return Yout
# RNN model (RMSE: 0.38, with shuffling 0.14, the same with loss on last 8)
def RNN_model(X, n=1):
# 2-layer RNN
X = tf.expand_dims(X, axis=2) # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
cell1 = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)
cell2 = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)
cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2], state_is_tuple=False)
Yn, H = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32) # Yn [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
# regression head
batchsize = tf.shape(X)[0]
Yn = tf.reshape(Yn, [batchsize*SEQLEN, RNN_CELLSIZE])
Yr = tf.layers.dense(Yn, 1) # Yr [BATCHSIZE*SEQLEN, 1]
Yr = tf.reshape(Yr, [batchsize, SEQLEN, 1]) # Yr [BATCHSIZE, SEQLEN, 1]
    # In this RNN model, you can compute the loss on the last predicted item or the last n predicted items
# Last n is slightly better.
Yout = Yr[:,-n:SEQLEN,:] # last item(s) in sequence: output shape [BATCHSIZE, n, 1]
Yout = tf.squeeze(Yout, axis=-1)
return Yout
def RNN_model_N(X): return RNN_model(X, n=SEQLEN//2)
def model_fn(features, labels, model):
X = features # shape [BATCHSIZE, SEQLEN]
Y = model(X)
last_label = labels[:, -1] # last item in sequence: the target value to predict
last_labels = labels[:, -tf.shape(Y)[1]:SEQLEN] # last p items in sequence (as many as in Y), useful for RNN_model(X, n>1)
loss = tf.losses.mean_squared_error(Y, last_labels) # loss computed on last label(s)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)
Yrnd, Ysal, Ytfl = simplistic_models(X)
eval_metrics = {"RMSE": tf.sqrt(loss),
                    # compare against three simplistic predictive models: can you beat them?
"RMSErnd": tf.sqrt(tf.losses.mean_squared_error(Yrnd, last_label)),
"RMSEsal": tf.sqrt(tf.losses.mean_squared_error(Ysal, last_label)),
"RMSEtfl": tf.sqrt(tf.losses.mean_squared_error(Ytfl, last_label))}
Yout = Y[:,-1]
return Yout, loss, eval_metrics, train_op
# training to predict the same sequence shifted by one (next value)
labeldata = np.roll(data, -1)
# slice data into sequences
traindata = np.reshape(data, [-1, SEQLEN])
labeldata = np.reshape(labeldata, [-1, SEQLEN])
# also make an evaluation dataset by randomly subsampling our fake data
EVAL_SEQUENCES = DATA_SEQ_LEN*4//SEQLEN//4
joined_data = np.stack([traindata, labeldata], axis=1) # new shape is [N_sequences, 2(train/eval), SEQLEN]
joined_evaldata = joined_data[np.random.choice(joined_data.shape[0], EVAL_SEQUENCES, replace=False)]
evaldata = joined_evaldata[:,0,:]
evallabels = joined_evaldata[:,1,:]
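# Quick sanity check (illustrative addition, not in the original notebook):
# shapes of the training and evaluation splits built above.
print(traindata.shape, labeldata.shape, evaldata.shape, evallabels.shape)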
def datasets(nb_epochs):
# Dataset API for batching, shuffling, repeating
dataset = tf.data.Dataset.from_tensor_slices((traindata, labeldata))
    dataset = dataset.repeat(nb_epochs)  # use the nb_epochs argument passed in (was the unused global NB_EPOCHS)
dataset = dataset.shuffle(DATA_SEQ_LEN*4//SEQLEN) # important ! Number of sequences in shuffle buffer: all of them
dataset = dataset.batch(BATCHSIZE)
# Dataset API for batching
evaldataset = tf.data.Dataset.from_tensor_slices((evaldata, evallabels))
evaldataset = evaldataset.repeat()
evaldataset = evaldataset.batch(EVAL_SEQUENCES) # just one batch with everything
# Some boilerplate code...
# this creates a Tensorflow iterator of the correct type and shape
# compatible with both our training and eval datasets
tf_iter = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
# it can be initialized to iterate through the training dataset
dataset_init_op = tf_iter.make_initializer(dataset)
# or it can be initialized to iterate through the eval dataset
evaldataset_init_op = tf_iter.make_initializer(evaldataset)
# Returns the tensorflow nodes needed by our model_fn.
features, labels = tf_iter.get_next()
# When these nodes will be executed (sess.run) in the training or eval loop,
# they will output the next batch of data.
# Note: when you do not need to swap the dataset (like here between train/eval) just use
# features, labels = dataset.make_one_shot_iterator().get_next()
# TODO: easier with tf.estimator.inputs.numpy_input_fn ???
return features, labels, dataset_init_op, evaldataset_init_op
tf.reset_default_graph() # restart model graph from scratch
# instantiate the dataset
features, labels, dataset_init_op, evaldataset_init_op = datasets(NB_EPOCHS)
# instantiate the model
Yout, loss, eval_metrics, train_op = model_fn(features, labels, RNN_model_N)
# variable initialization
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
count = 0
losses = []
indices = []
sess.run(dataset_init_op)
while True:
try: loss_, _ = sess.run([loss, train_op])
except tf.errors.OutOfRangeError: break
# print progress
if count%300 == 0:
epoch = count // (DATA_SEQ_LEN*4//BATCHSIZE//SEQLEN)
print("epoch " + str(epoch) + ", batch " + str(count) + ", loss=" + str(loss_))
if count%10 == 0:
losses.append(np.mean(loss_))
indices.append(count)
count += 1
# final evaluation
sess.run(evaldataset_init_op)
eval_metrics_, Yout_ = sess.run([eval_metrics, Yout])
print("Final accuracy on eval dataset:")
print(str(eval_metrics_))
plt.ylim(ymax=np.amax(losses[1:])) # ignore first value(s) for scaling
plt.plot(indices, losses)
plt.show()
# execute multiple times to see different sample sequences
utils_display.picture_this_3(Yout_, evaldata, evallabels, SEQLEN)
NB_EPOCHS = 10
sess.close()
models = [linear_model, DNN_model, CNN_model, RNN_model, RNN_model_N]
for model in models:
# reset tensorflow graph: start from scratch
tf.reset_default_graph()
# instantiate the dataset
features, labels, dataset_init_op, evaldataset_init_op = datasets(NB_EPOCHS)
# instantiate model
Yout, loss, eval_metrics, train_op = model_fn(features, labels, model)
init = tf.global_variables_initializer()
with tf.Session() as sess:
# training loop
sess.run([init, dataset_init_op])
while True:
try: sess.run(train_op)
except tf.errors.OutOfRangeError: break
# evaluation
sess.run(evaldataset_init_op)
eval_metrics_ = sess.run(eval_metrics)
print(str(model))
print(str(eval_metrics_))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: array.T returns the transpose of an array.
Step2: Stacking and Splitting Arrays
Step3: Similarly, two arrays having the same number of rows can be horizontally stacked using np.hstack((a, b)).
Step4: Apply User Defined Functions
Step5: This also has the advantage that you can vectorize the function once, and then apply it as many times as needed.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
# Reshape a 1-D array to a 3 x 4 array
some_array = np.arange(0, 12).reshape(3, 4)
print(some_array)
# Can reshape it further
some_array.reshape(2, 6)
# If you specify -1 as a dimension, the dimensions are automatically calculated
# -1 means "whatever dimension is needed"
some_array.reshape(4, -1)
# Transposing an array
some_array.T
# Creating two arrays
array_1 = np.arange(12).reshape(3, 4)
array_2 = np.arange(20).reshape(5, 4)
print(array_1)
print("\n")
print(array_2)
# vstack
# Note that np.vstack(a, b) throws an error - you need to pass the arrays as a list
np.vstack((array_1, array_2))
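# hstack (illustrative addition): horizontal stacking requires the arrays to have the
# same number of rows. array_1 (3x4) and array_2 (5x4) differ in rows, so as a sketch
# we stack array_1 with itself instead.
np.hstack((array_1, array_1))
# Splitting is the inverse operation, e.g. splitting array_1 into two 3x2 halves:
np.hsplit(array_1, 2)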
# Basic mathematical operations
a = np.arange(1, 20)
# sin, cos, exp, log
print(np.sin(a))
print(np.cos(a))
print(np.exp(a))
print(np.log(a))
print(a)
# The non-numpy way, not recommended
a_list = [x/(x+1) for x in a]
print(a_list)
# The numpy way: vectorize the function, then apply it
f = np.vectorize(lambda x: x/(x+1))
f(a)
# Apply function on a 2-d array: Applied to each element
b = np.linspace(1, 100, 10)
f(b)
# np.linalg documentation
help(np.linalg)
# Creating arrays
a = np.arange(1, 10).reshape(3, 3)
b = np.arange(1, 13).reshape(3, 4)
print(a)
print(b)
# Inverse
# (note: this particular matrix [[1..9]] is singular -- its determinant is 0 -- so the
# result of inv() here is dominated by floating-point error and is not a meaningful inverse)
np.linalg.inv(a)
# Determinant
np.linalg.det(a)
# Eigenvalues and eigenvectors
np.linalg.eig(a)
# Multiply matrices
np.dot(a, b)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Anytime you see a statement that starts with import, you'll recognize that the programmer is pulling in some sort of external functionality not previously available to Python by default. In this case, the random package provides some basic functionality for computing random numbers.
Step2: Quite a bit of other functionality--still built-in to the default Python environment!--requires explicit import statements to unlock. Here are just a couple of examples
Step3: If you are so inclined, you can see the full Python default module index here
Step4: Dot-notation works by
Step5: We can tweak it
Step6: You can put whatever you want after the as, and anytime you call methods from that module, you'll use the name you gave it.
Step7: Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
Step8: Now just call the array method using our list from before!
Step9: To reference an element in the array, just use the same notation we did for lists
Step10: You can also separate dimensions by commas
Step11: Remember, with indexing matrices
Step12: Now, let's see the same operation, this time with NumPy arrays.
Step13: No loops needed, far fewer lines of code, and a simple intuitive operation.
Step14: Works exactly as you'd expect, but no [explicit] loop needed.
Step15: If you recall from algebra, matrix multiplication $A \times B$ involves multipliying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator
| <ASSISTANT_TASK:>
Python Code:
import random
x = [3, 7, 2, 9, 4]
print("Maximum: {}".format(max(x)))
print("Minimum: {}".format(min(x)))
import random # For generating random numbers, as we've seen.
import os # For interacting with the filesystem of your computer.
import re # For regular expressions. Unrelated: https://xkcd.com/1171/
import datetime # Helps immensely with determining the date and formatting it.
import math # Gives some basic math functions: trig, factorial, exponential, logarithms, etc.
import xml # Abandon all hope, ye who enter.
import random
random.randint(0, 1)
import random
random.randint(0, 1)
import random as r
r.randint(0, 1)
matrix = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9] ]
print(matrix)
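# To illustrate the point about loops (a sketch, not from the original notebook):
# multiplying two nested-list "matrices" by hand needs three nested loops.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
product = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
for i in range(3):
    for j in range(3):
        for k in range(3):
            product[i][j] += matrix[i][k] * identity[k][j]
print(product)  # same as matrix, since we multiplied by the identity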
import numpy
arr = numpy.array(matrix)
print(arr)
arr[0]
arr[2][2]
arr[2, 2]
vector = [4.0, 15.0, 6.0, 2.0]
# To normalize this to unit length, we need to divide each element by the vector's magnitude.
# To learn its magnitude, we need to loop through the whole vector.
# So. We need two loops!
magnitude = 0.0
for element in vector:
magnitude += element ** 2
magnitude = (magnitude ** 0.5) # square root
print("Original magnitude: {:.2f}".format(magnitude))
new_magnitude = 0.0
for index, element in enumerate(vector):
vector[index] = element / magnitude
new_magnitude += vector[index] ** 2
new_magnitude = (new_magnitude ** 0.5)
print("Normalized magnitude: {:.2f}".format(new_magnitude))
import numpy as np # This tends to be the "standard" convention when importing NumPy.
import numpy.linalg as nla
vector = [4.0, 15.0, 6.0, 2.0]
np_vector = np.array(vector) # Convert to NumPy array.
magnitude = nla.norm(np_vector) # Computing the magnitude: one-liner.
print("Original magnitude: {:.2f}".format(magnitude))
np_vector /= magnitude # Vectorized division!!! No loop needed!
new_magnitude = nla.norm(np_vector)
print("Normalized magnitude: {:.2f}".format(new_magnitude))
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = x + y
print(z)
A = np.array([ [1, 2], [3, 4] ])
B = np.array([ [5, 6], [7, 8] ])
A @ B
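# Cross-check (illustrative): for 2-D arrays the @ operator matches np.dot / np.matmul.
print(np.array_equal(A @ B, np.dot(A, B)))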
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This model does a poor job of fitting to our data. If I fit a non-parametric model, like the Nelson-Aalen model, to this data, the Exponential's lack of fit is very obvious.
Step2: It should be clear that the single parameter model is just averaging the hazards over the entire time period. In reality though, the true hazard rate exhibits some complex non-linear behaviour.
Step3: We can see a much better fit in this model. A quantitative measure of fit is to compare the log-likelihood between exponential model and the piecewise exponential model (higher is better). The log-likelihood went from -772 to -647, respectively. We could keep going and add more and more breakpoints, but that would end up overfitting to the data.
Step4: The best fit of the model to the data is
Step5: From the output, we see that the value of 76.55 is the suggested asymptote, that is
Step6: Our new asymptote is at $t\approx 100, \text{c.i.}=(87, 112)$. The model appears to fit the early times better than the previous models as well, however our $\alpha$ parameter has more uncertainty now. Continuing to add parameters isn't advisable, as we will overfit to the data.
Step7: 3-parameter Weibull distribution
Step8: Inverse Gaussian distribution
Step9: Gompertz
Step10: APGW
Step11: Bounded lifetimes using the beta distribution
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from lifelines.datasets import load_waltons
waltons = load_waltons()
T, E = waltons['T'], waltons['E']
from lifelines import ExponentialFitter
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
epf = ExponentialFitter().fit(T, E)
epf.plot_hazard(ax=ax[0])
epf.plot_cumulative_hazard(ax=ax[1])
ax[0].set_title("hazard"); ax[1].set_title("cumulative_hazard")
epf.print_summary(3)
from lifelines import NelsonAalenFitter
ax = epf.plot(figsize=(8,5))
naf = NelsonAalenFitter().fit(T, E)
ax = naf.plot(ax=ax)
plt.legend()
from lifelines import PiecewiseExponentialFitter
# looking at the above plot, I think there may be breaks at t=40 and t=60.
pf = PiecewiseExponentialFitter(breakpoints=[40, 60]).fit(T, E)
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
ax = pf.plot(ax=axs[1])
pf.plot_hazard(ax=axs[0])
ax = naf.plot(ax=ax, ci_show=False)
axs[0].set_title("hazard"); axs[1].set_title("cumulative_hazard")
pf.print_summary(3)
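# The quantitative comparison mentioned above: log-likelihood of each fit (higher is
# better). This assumes the fitted lifelines objects expose a `log_likelihood_`
# attribute, as recent lifelines versions do.
print("Exponential log-likelihood:", epf.log_likelihood_)
print("Piecewise exponential log-likelihood:", pf.log_likelihood_)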
from lifelines.fitters import ParametricUnivariateFitter
import autograd.numpy as np
class InverseTimeHazardFitter(ParametricUnivariateFitter):
# we tell the model what we want the names of the unknown parameters to be
_fitted_parameter_names = ['alpha_']
# this is the only function we need to define. It always takes two arguments:
# params: an iterable that unpacks the parameters you'll need in the order of _fitted_parameter_names
# times: a vector of times that will be passed in.
def _cumulative_hazard(self, params, times):
alpha = params[0]
return alpha /(80 - times)
itf = InverseTimeHazardFitter()
itf.fit(T, E)
itf.print_summary()
ax = itf.plot(figsize=(8,5))
ax = naf.plot(ax=ax, ci_show=False)
plt.legend()
class TwoParamInverseTimeHazardFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['alpha_', 'beta_']
# Sequence of (min, max) pairs for each element in x. None is used to specify no bound
_bounds = [(0, None), (75.0001, None)]
def _cumulative_hazard(self, params, times):
alpha, beta = params
return alpha / (beta - times)
two_f = TwoParamInverseTimeHazardFitter()
two_f.fit(T, E)
two_f.print_summary()
ax = itf.plot(ci_show=False, figsize=(8,5))
ax = naf.plot(ax=ax, ci_show=False)
two_f.plot(ax=ax)
plt.legend()
from lifelines.fitters import ParametricUnivariateFitter
class ThreeParamInverseTimeHazardFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['alpha_', 'beta_', 'gamma_']
_bounds = [(0, None), (75.0001, None), (0, None)]
# this is the only function we need to define. It always takes two arguments:
# params: an iterable that unpacks the parameters you'll need in the order of _fitted_parameter_names
# times: a numpy vector of times that will be passed in by the optimizer
def _cumulative_hazard(self, params, times):
a, b, c = params
return a / (b - times) ** c
three_f = ThreeParamInverseTimeHazardFitter()
three_f.fit(T, E)
three_f.print_summary()
ax = itf.plot(ci_show=False, figsize=(8,5))
ax = naf.plot(ax=ax, ci_show=False)
ax = two_f.plot(ax=ax, ci_show=False)
ax = three_f.plot(ax=ax)
plt.legend()
fig, axs = plt.subplots(3, figsize=(7, 8), sharex=True)
new_timeline = np.arange(0, 85)
three_f = ThreeParamInverseTimeHazardFitter().fit(T, E, timeline=new_timeline)
three_f.plot_hazard(label='hazard', ax=axs[0]).legend()
three_f.plot_cumulative_hazard(label='cumulative hazard', ax=axs[1]).legend()
three_f.plot_survival_function(label='survival function', ax=axs[2]).legend()
fig.subplots_adjust(hspace=0)
# Hide x labels and tick labels for all but bottom plot.
for ax in axs:
ax.label_outer()
import autograd.numpy as np
from autograd.scipy.stats import norm
# I'm shifting this to exaggerate the effect
T_ = T + 10
class ThreeParameterWeibullFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ["lambda_", "rho_", "theta_"]
_bounds = [(0, None), (0, None), (0, T.min()-0.001)]
def _cumulative_hazard(self, params, times):
lambda_, rho_, theta_ = params
return ((times - theta_) / lambda_) ** rho_
tpw = ThreeParameterWeibullFitter()
tpw.fit(T_, E)
tpw.print_summary()
ax = tpw.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T_, E).plot(ax=ax, ci_show=False)
from autograd.scipy.stats import norm
class InverseGaussianFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['lambda_', 'mu_']
def _cumulative_density(self, params, times):
mu_, lambda_ = params
v = norm.cdf(np.sqrt(lambda_ / times) * (times / mu_ - 1), loc=0, scale=1) + \
np.exp(2 * lambda_ / mu_) * norm.cdf(-np.sqrt(lambda_ / times) * (times / mu_ + 1), loc=0, scale=1)
return v
def _cumulative_hazard(self, params, times):
return -np.log(1-np.clip(self._cumulative_density(params, times), 1e-15, 1-1e-15))
igf = InverseGaussianFitter()
igf.fit(T, E)
igf.print_summary()
ax = igf.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T, E).plot(ax=ax, ci_show=False)
class GompertzFitter(ParametricUnivariateFitter):
# this parameterization is slightly different than wikipedia.
_fitted_parameter_names = ['nu_', 'b_']
def _cumulative_hazard(self, params, times):
nu_, b_ = params
return nu_ * (np.expm1(times * b_))
T, E = waltons['T'], waltons['E']
ggf = GompertzFitter()
ggf.fit(T, E)
ggf.print_summary()
ax = ggf.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T, E).plot(ax=ax, ci_show=False)
class APGWFitter(ParametricUnivariateFitter):
# this parameterization is slightly different than wikipedia.
_fitted_parameter_names = ['kappa_', 'gamma_', 'phi_']
def _cumulative_hazard(self, params, t):
kappa_, phi_, gamma_ = params
return (kappa_ + 1) / kappa_ * ((1 + ((phi_ * t) ** gamma_) /(kappa_ + 1)) ** kappa_ -1)
apg = APGWFitter()
apg.fit(T, E)
apg.print_summary(2)
ax = apg.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T, E).plot(ax=ax, ci_show=False)
n = 100
T = 5 * np.random.random(n)**2
T_censor = 10 * np.random.random(n)**2
E = T < T_censor
T_obs = np.minimum(T, T_censor)
from autograd_gamma import betainc
class BetaFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['alpha_', 'beta_', "m_"]
_bounds = [(0, None), (0, None), (T.max(), None)]
def _cumulative_density(self, params, times):
alpha_, beta_, m_ = params
return betainc(alpha_, beta_, times / m_)
def _cumulative_hazard(self, params, times):
return -np.log(1-self._cumulative_density(params, times))
beta_fitter = BetaFitter().fit(T_obs, E)
beta_fitter.plot()
beta_fitter.print_summary()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
assert True # leave this to grade the import statements
Image(url='http://ecx.images-amazon.com/images/I/31ESVCFE0SL.jpg',embed=True,width=600,height=600)
assert True # leave this to grade the image display
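# The quark table below uses the %%html cell magic; equivalently, the same markup can
# be passed to IPython's HTML display object, as the task description suggests. A tiny
# illustrative example:
from IPython.display import HTML
HTML("<p>Example of the <code>HTML</code> display object.</p>")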
%%html
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass (MeV/$c^2$)</th>
</tr>
<tr>
<td>up</td>
<td>u</td>
<td>$\bar{u}$</td>
<td>$+\frac{2}{3}$</td>
<td>1.5–3.3</td>
</tr>
<tr>
<td>down</td>
<td>d</td>
<td>$\bar{d}$</td>
<td>$-\frac{1}{3}$</td>
<td>3.5–6.0</td>
</tr>
<tr>
<td>charm</td>
<td>c</td>
<td>$\bar{c}$</td>
<td>$+\frac{2}{3}$</td>
<td>1,160–1,340</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>$-\frac{1}{3}$</td>
<td>70-130</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>$+\frac{2}{3}$</td>
<td>169,000-173,300</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>$-\frac{1}{3}$</td>
<td>4,130-4,370</td>
</tr>
</table>
assert True # leave this here to grade the quark table
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Displacement operation
Step2: Optical quantum states in the fock basis
Step4: Displace and measure - the generalized Q function
Step6: Iterative Maximum Likelihood Estimation
Step7: Reconstruction of the quantum state density matrix from (ideal) generalized $Q$ function measurements
Step8: Let us plot the Husimi $Q$ function - Fig 1(d) of Ref~[1]
Step9: We can also look at the density matrix of the states using Hinton plots
Step10: Discussion
Step12: Let us construct an iMLE function that we can reuse
Step13: Visualizing the state reconstructed from noisy data
Step14: More iterations
Step15: QuTiP details
| <ASSISTANT_TASK:>
Python Code:
# imports
import numpy as np
from qutip import Qobj, rand_dm, fidelity, displace, qdiags, qeye, expect
from qutip.states import coherent, coherent_dm, thermal_dm, fock_dm
from qutip.random_objects import rand_dm
from qutip.visualization import plot_wigner, hinton, plot_wigner_fock_distribution
from qutip.wigner import qfunc
import qutip
import matplotlib.pyplot as plt
from matplotlib import animation, colors
from IPython.display import clear_output
hilbert_size = 32
psi = coherent(hilbert_size, 0)
d = displace(hilbert_size, 2+2j)
fig, ax = plt.subplots(1, 4, figsize=(19, 4))
plot_wigner_fock_distribution(psi, fig=fig, axes=[ax[0], ax[1]])
plot_wigner_fock_distribution(d*psi, fig=fig, axes=[ax[2], ax[3]])
ax[0].set_title(r"Initial state, $\psi_{vac} = |0 \rangle$")
ax[2].set_title(r"Displaced state, $D(\alpha=2+2i )\psi_{vac}$")
plt.show()
alpha_range = 2
alphas = np.array([alpha_range, -alpha_range - 1j*alpha_range,
-alpha_range + 1j*alpha_range])
psi = sum([coherent(hilbert_size, a) for a in alphas])
psi = psi.unit()
rho = psi*psi.dag()
fig, ax = plot_wigner_fock_distribution(rho, figsize=(9, 4))
ax[0].set_title("Superposition of three coherent states")
plt.show()
def measure_q(beta, rho):
    """Measures the generalized q function values for the state density matrix.

    Parameters
    ----------
    beta: np.complex
        A complex displacement.
    rho:
        The density matrix as a QuTiP Qobj (`qutip.Qobj`)

    Returns
    -------
    population: ndarray
        A 1D array for the probabilities for populations.
    """
hilbertsize = rho.shape[0]
# Apply a displacement to the state and then measure the diagonals.
D = displace(hilbertsize, -beta)
rho_disp = D*rho*D.dag()
# measure all the elements in the diagonal
populations = np.real(np.diagonal(rho_disp.full()))
return populations
betas = [1.7, -2, 2.5j, -2.1 - 2.1j, -2 + 2j]
generalized_Q = [measure_q(b, rho) for b in betas]
fig, ax = plt.subplots(1, 3, figsize=(15, 4))
indices = np.arange(hilbert_size)
plot_wigner(rho, fig, ax[0])
ax[0].scatter(np.real(betas), np.imag(betas), marker="x")
ax[0].set_title(r"Measurement $\beta$ values")
for i in range(len(betas)):
ax[1].bar(indices, generalized_Q[i],
label = r"$beta = {:.2f}$".format(betas[i]))
ax[1].set_title("Population measurement statistics")
ax[1].set_xlabel("n")
ax[1].set_ylabel("Photon number probability")
hinton(rho, ax=ax[2])
ax[2].set_xlabel("Hinton plot of density matrix")
ax[1].legend()
plt.show()
def construct_R(hilbert_size, betas, ops=None):
    """Calculates the set of operators R in a displace and measure method.

    Parameters
    ----------
    hilbert_size (int):
        The hilbert space size
    beta: list_like
        A list of the displacements that were applied to the state before
        measurement.
    op: list of :class:`qutip.Qobj`
        Measurement operators such as photon occupation |n><n| that should
        be used to construct the R operator for various displacements.
        default: |n><n| constructed using `qutip.fock_dm(hilbert_size, n)`
        for all n.

    Returns
    -------
    R (list of `qutip.Qobj`) :
        The list of iterative operators applied for state reconstruction.
    """
    if ops is None:
ops = []
for n in range(hilbert_size):
ops.append(fock_dm(hilbert_size, n))
R_ops = []
for beta in betas:
D = displace(hilbert_size, -beta)
displaced_operators = []
for operator in ops:
displaced_D = D.dag()*operator*D
displaced_operators.append(displaced_D)
R_ops.append(displaced_operators)
return R_ops
r_ops = construct_R(hilbert_size, betas)
expectation_values = [expect(r_ops[i], rho) for i in range(len(betas))]
# test if the expectation values calculated from the R operators match
# the previous calculations with the measure_q function
generalized_Q = [measure_q(b, rho) for b in betas]
np.allclose(expectation_values, generalized_Q)
r_ops = construct_R(hilbert_size, betas)
data = [expect(r_ops[i], rho) for i in range(len(betas))]
max_iter = 200
rho_reconstructed = qeye(hilbert_size)/hilbert_size # initial dm
rho_t = []
rho_t.append(rho_reconstructed)
fidelities = [fidelity(rho_reconstructed, rho)]
for iterations in range(max_iter):
R = 0*qeye(hilbert_size)
for i in range(len(betas)):
# for all the n photons
for n in range(hilbert_size):
r = r_ops[i][n]
R += (data[i][n]/(expect(r, rho_reconstructed) + 1e-20))*r
rho_reconstructed = R*rho_reconstructed*R
# Trace renorm
rho_reconstructed = rho_reconstructed/rho_reconstructed.tr()
rho_t.append(rho_reconstructed)
# Compute fidelity
f = fidelity(rho_reconstructed, rho)
fidelities.append(f)
print(r"Iteration {}; Fidelity: {}".format(iterations, f))
clear_output(wait=True)
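# Optional convergence check (illustrative addition): plot the reconstruction fidelity
# accumulated in `fidelities` against the iteration number.
plt.figure(figsize=(5, 3))
plt.plot(fidelities)
plt.xlabel("iteration")
plt.ylabel("fidelity to target state")
plt.show()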
xvec = np.linspace(-7.5, 7.5, 100)
yvec = np.linspace(-7.5, 7.5, 100)
q_state = qfunc(rho, xvec, yvec)
q_reconstruction = qfunc(rho_reconstructed, xvec, yvec)
fig, ax = plt.subplots(1, 2, figsize=(8, 3))
norm = colors.TwoSlopeNorm(vmin=-1e-9, vcenter=0, vmax=np.max(q_state))
ax[0].pcolor(xvec, yvec, q_state, norm=norm, cmap="RdBu_r", shading='auto')
im = ax[1].pcolor(xvec, yvec, q_reconstruction, norm=norm, cmap="RdBu_r", shading='auto')
ax[0].scatter(np.real(betas), np.imag(betas), marker="x", s=20)
ax[0].set_title(r"Target state ($Q$ function)")
ax[1].set_title("Reconstructed state ($Q$ function)")
ax[0].set_xlabel(r"Re($\beta$)", fontsize=13)
ax[0].set_ylabel(r"Im($\beta$)", fontsize=13)
ax[1].set_xlabel(r"Re($\beta$)", fontsize=13)
plt.colorbar(im, ax=[ax[0], ax[1]])
plt.show()
fig, ax = hinton(Qobj(rho[:16, :16]))
ax.set_title("Target state")
plt.show()
hinton(Qobj(rho_t[-1][:16, :16]))
ax.set_title("Reconstructed state")
plt.show()
generalized_Q_noisy = generalized_Q + np.abs(np.random.normal(loc=0, scale=0.05, size = [len(betas), hilbert_size]))
plt.figure(figsize=(5,3))
for i in range(1):
plt.bar(indices, generalized_Q_noisy[i],
label = "noisy")
plt.bar(indices, generalized_Q[i], fill=False,
label = "ideal")
plt.xlabel("n")
plt.ylabel("p(n)")
plt.legend()
plt.show()
def imle(data, r_ops, initial_rho=None, max_iter=200):
    """Implements the iterative maximum likelihood estimation algorithm.

    Args:
        data (array): An array representing measured data for a set of operators.
        r_ops (list of `qutip.Qobj`): The list of iterative operators applied
                                      for state reconstruction computed using the
                                      set of measurement operators.
        initial_rho (`qutip.Qobj`): Initial density matrix estimate
                                    default: maximally mixed state (I/n).
        max_iter (int): The number of iterations to run.
    """
if initial_rho is not None:
rho_reconstructed = initial_rho
else:
rho_reconstructed = qeye(hilbert_size)/hilbert_size
rho_t = []
rho_t.append(rho_reconstructed)
for iterations in range(max_iter):
R = 0*qeye(hilbert_size)
for i in range(len(r_ops)):
# for all the n photons
for n in range(hilbert_size):
r = r_ops[i][n]
R += (data[i][n]/(expect(r, rho_reconstructed) + 1e-20))*r
rho_reconstructed = R*rho_reconstructed*R
# Trace renorm
rho_reconstructed = rho_reconstructed/rho_reconstructed.tr()
rho_t.append(rho_reconstructed)
# Compute fidelity
f = fidelity(rho_reconstructed, rho)
print(r"Iteration {}; Fidelity: {}".format(iterations, f))
clear_output(wait=True)
return rho_t
rho_t_noisy = imle(generalized_Q_noisy, r_ops)
q_reconstruction_noisy = qfunc(rho_t_noisy[-1], xvec, yvec)
fig, ax = plt.subplots(1, 2, figsize=(8, 3))
norm = colors.TwoSlopeNorm(vmin=-1e-9, vcenter=0, vmax=np.max(q_state))
ax[0].pcolor(xvec, yvec, q_state, norm=norm, cmap="RdBu_r", shading='auto')
im = ax[1].pcolor(xvec, yvec, q_reconstruction_noisy, norm=norm, cmap="RdBu_r", shading='auto')
ax[0].scatter(np.real(betas), np.imag(betas), marker="x", s=20)
ax[0].set_title(r"Target state ($Q$ function)")
ax[1].set_title("Reconstructed state ($Q$ function)")
ax[0].set_xlabel(r"Re($\beta$)", fontsize=13)
ax[0].set_ylabel(r"Im($\beta$)", fontsize=13)
ax[1].set_xlabel(r"Re($\beta$)", fontsize=13)
plt.colorbar(im, ax=[ax[0], ax[1]])
plt.show()
rho_t_noisy = imle(generalized_Q_noisy, r_ops, max_iter=1000)
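# Illustrative check: fidelity of the longer noisy-data reconstruction against the target state.
print("Fidelity after 1000 iterations (noisy data):", fidelity(rho_t_noisy[-1], rho))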
qutip.about()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: values from Okanoya paper below (KOUMURA_OKANOYA_NOTE_ERROR_RATES) are taken from this table
| <ASSISTANT_TASK:>
Python Code:
TRAIN_DUR_IND_MAP = {
k:v for k, v in zip(
sorted(curve_df['train_set_dur'].unique()),
sorted(curve_df['train_set_dur_ind'].unique())
)
}
SAVE_FIG = True
sns.set("paper")
KOUMURA_OKANOYA_NOTE_ERROR_RATES = {
120. : 0.84,
480. : 0.46,
}
KOUMURA_OKANOYA_X = np.asarray([TRAIN_DUR_IND_MAP[k] for k in KOUMURA_OKANOYA_NOTE_ERROR_RATES.keys()])
KOUMURA_OKANOYA_Y = np.asarray(list(KOUMURA_OKANOYA_NOTE_ERROR_RATES.values()))
# max width in inches is 7.5
# https://journals.plos.org/ploscompbiol/s/figures
FIGSIZE = (7.5, 3.75)
DPI = 300
fig = plt.figure(constrained_layout=True, figsize=FIGSIZE, dpi=DPI)
gs = fig.add_gridspec(nrows=4, ncols=2, hspace=0.005)
ax_arr = []
ax_arr.append(fig.add_subplot(gs[0, 0]))
ax_arr.append(fig.add_subplot(gs[:2, 1]))
ax_arr.append(fig.add_subplot(gs[1:, 0]))
ax_arr.append(fig.add_subplot(gs[2:, 1]))
ax_arr = np.array(ax_arr).reshape(2, 2)
ax_arr[0,0].get_shared_x_axes().join(*ax_arr[:, 0].tolist())
ax_arr[0,0].get_shared_x_axes().join(*ax_arr[:, 1].tolist())
for col in range(2):
ax_arr[0,col].spines['bottom'].set_visible(False)
ax_arr[1, col].spines['top'].set_visible(False)
ax_arr[1, col].xaxis.tick_bottom()
metric_list = ['avg_error', 'avg_segment_error_rate']
ylabels = ['Frame error (%)', 'Segment error rate\n(edits per segment)']
for col, (metric, ylabel) in enumerate(zip(metric_list, ylabels)):
for row in range(2):
# ax_ind = row * 2 + col
ax = ax_arr[row, col]
if row == 1 and col == 1:
legend = 'full'
else:
legend = False
sns.lineplot(x='train_set_dur_ind', y=metric, hue='bird', data=curve_df, ci='sd', linewidth=2, ax=ax, legend=legend)
sns.lineplot(x='train_set_dur_ind', y=metric,
linestyle='dashed', color='k', linewidth=4,
data=curve_df, ci=None, label='mean', ax=ax, legend=legend)
if metric == 'avg_segment_error_rate' and row == 0:
scatter = ax.scatter(KOUMURA_OKANOYA_X, KOUMURA_OKANOYA_Y, s=20)
ax.legend(handles=(scatter,), labels=('Koumura\nOkanoya 2016',), loc='upper left')
ax.set_ylabel('')
if row == 0:
ax.set_xticklabels([])
ax.set_xlabel('')
elif row == 1:
ax.set_xlabel('Training set duration (s)', fontsize=10)
ax.set_xticks(list(TRAIN_DUR_IND_MAP.values()))
ax.set_xticklabels(sorted(curve_df['train_set_dur'].unique().astype(int)), rotation=45)
# zoom-in / limit the view to different portions of the data
ax_arr[0, 0].set_ylim(12, 100)
ax_arr[1, 0].set_ylim(0, 8)
ax_arr[0, 1].set_ylim(0.35, 0.95)
ax_arr[1, 1].set_ylim(0.0, 0.12)
bigax_col0 = fig.add_subplot(gs[:, 0], frameon=False)
bigax_col1 = fig.add_subplot(gs[:, 1], frameon=False)
labelpads = (2, 10)
panel_labels = ['A', 'B']
for ylabel, labelpad, panel_label, ax in zip(ylabels,
labelpads,
panel_labels,
[bigax_col0, bigax_col1]):
ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
ax.grid(False)
ax.set_ylabel(ylabel, fontsize=10, labelpad=labelpad)
ax.text(-0.2, 1., panel_label, transform=ax.transAxes,
fontsize=12, fontweight='bold', va='top', ha='right')
# get handles from lower right axes legend, then remove and re-create outside
handles, _ = ax_arr[1, 1].get_legend_handles_labels()
ax_arr[1, 1].get_legend().remove()
bigax_col1.legend(handles=handles, bbox_to_anchor=(1.35, 1))
for row in range(2):
for col in range(2):
ax_arr[row, col].spines['left'].set_color('black')
ax_arr[row, col].spines['left'].set_linewidth(0.5)
if row == 1:
ax_arr[row, col].spines['bottom'].set_color('black')
ax_arr[row, col].spines['bottom'].set_linewidth(0.5)
for ax_ in ax_arr.ravel():
ax_.tick_params(axis='both', which='major', labelsize=8)
fig.set_constrained_layout_pads(hspace=-0.05, wspace=0.0)
if SAVE_FIG:
plt.savefig(
REPO_ROOT.joinpath('doc/article/figures/fig4/fig4-learning-curves.png')
)
plt.savefig(
REPO_ROOT.joinpath('doc/article/figures/fig4/fig4-learning-curves.svg')
)
plt.savefig(
REPO_ROOT.joinpath('doc/article/figures/fig4/fig4-learning-curves.tiff')
)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Create the dataset
Step3: Review the dataset
Step4: Using BQML
Step5: Get training statistics and examine training info
Step6: We can also examine the training statistics collected by Big Query. To view training results we use the ML.TRAINING_INFO function.
Step7: Compare to simple benchmark
Step8: So, the naive strategy of just guessing the majority class would have accuracy of 0.5509 on the eval dataset, just below our BQML model.
Step9: Just as before we can examine the evaluation metrics for our regression model and examine the training statistics in Big Query
| <ASSISTANT_TASK:>
Python Code:
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env REGION = "us-central1"
from google.cloud import bigquery
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # Will fail if dataset already exists.
print("Dataset created")
except:
print("Dataset already exists")
def create_features_table():
error = None
try:
        bq.query(
            """
            CREATE TABLE stock_market.eps_percent_change_sp500
            AS
            SELECT *
            FROM `stock_market.eps_percent_change_sp500`
            """
        ).to_dataframe()
except Exception as e:
error = str(e)
if error is None:
print("Table created")
elif "Already Exists" in error:
print("Table already exists.")
else:
print(error)
raise Exception("Table was not created.")
create_dataset()
create_features_table()
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
stock_market.eps_percent_change_sp500
LIMIT
10
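%%bigquery --project $PROJECT
#standardSQL
-- Quick sanity check (illustrative addition): how many rows does the features table contain?
SELECT
  COUNT(*) AS num_rows
FROM
  stock_market.eps_percent_change_sp500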
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
stock_market.direction_model OPTIONS(model_type = "logistic_reg",
input_label_cols = ["direction"]) AS
-- query to fetch training data
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 70
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.direction_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.direction_model`)
ORDER BY iteration
%%bigquery --project $PROJECT
#standardSQL
WITH
eval_data AS (
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85)
SELECT
direction,
(COUNT(direction)* 100 / (
SELECT
COUNT(*)
FROM
eval_data)) AS percentage
FROM
eval_data
GROUP BY
direction
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
stock_market.price_model OPTIONS(model_type = "linear_reg",
input_label_cols = ["normalized_change"]) AS
-- query to fetch training data
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 70
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.price_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.price_model`)
ORDER BY iteration
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub dataset
Step2: Exploring some features
Step3: Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
Step4: We can see that over half of the loan grades are assigned values B or C. Each loan is assigned one of these grades, along with a more finely discretized feature called sub_grade (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found here.
Step5: This feature describes whether the loanee is mortgaging, renting, or owns a home. We can see that a small percentage of the loanees own a home.
Step6: Now, let us explore the distribution of the column safe_loans. This gives us a sense of how many safe and risky loans are present in the dataset.
Step7: You should have
Step8: What remains now is a subset of features and the target that we will use for the rest of this notebook.
Step9: Now, write some code below to compute the percentage of safe and risky loans in the dataset, and validate these numbers against what was shown using .show earlier in the assignment
Step10: One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed=1 so everyone gets the same results.
Step11: Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
Step12: Note
Step13: Use decision tree to build a classifier
Step14: Visualizing a learned model
Step15: In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point.
Step16: Making predictions
Step17: Explore label predictions
Step18: Quiz Question
Step19: Quiz Question
Step20: Tricky predictions!
Step21: Quiz Question
Step22: Let's visualize the small tree here to do the traversing for this data point.
Step23: Note
Step24: Evaluating accuracy of the decision tree model
Step25: Checkpoint
Step26: Quiz Question
Step27: Now, let us evaluate big_model on the training set and validation set.
Step28: Checkpoint
Step29: False positives are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives
Step30: False negatives are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives
Step31: Quiz Question
| <ASSISTANT_TASK:>
Python Code:
import graphlab
graphlab.canvas.set_target('ipynb')
loans = graphlab.SFrame('lending-club-data.gl/')
loans.column_names()
loans['grade'].show()
loans['sub_grade'].show()
loans['home_ownership'].show()
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
loans['safe_loans'].show(view = 'Categorical')
features = ['grade', # grade of the loan
'sub_grade', # sub-grade of the loan
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'term', # the term of the loan
'last_delinq_none', # has borrower had a delinquincy
'last_major_derog_none', # has borrower had 90 day or worse rating
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to day
]
target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky)
# Extract the feature columns and target column
loans = loans[features + [target]]
safe_loans_raw = loans[loans[target] == +1]
risky_loans_raw = loans[loans[target] == -1]
print "Number of safe loans : %s" % len(safe_loans_raw)
print "Number of risky loans : %s" % len(risky_loans_raw)
print "Percentage of safe loans :", (1. * len(safe_loans_raw) / (len(safe_loans_raw) + len(risky_loans_raw))) * 100
print "Percentage of risky loans :", (1. * len(risky_loans_raw) / (len(safe_loans_raw) + len(risky_loans_raw))) * 100
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
# Append the risky_loans with the downsampled version of safe_loans
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
train_data, validation_data = loans_data.random_split(.8, seed=1)
decision_tree_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features)
small_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 2)
small_model.show(view="Tree")
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
predictions = decision_tree_model.predict(sample_validation_data)
print predictions
sample_validation_data[sample_validation_data[target] == predictions]
percentage_predictions = decision_tree_model.predict(sample_validation_data, output_type='probability')
for idx, val in enumerate(percentage_predictions):
print percentage_predictions[idx], sample_validation_data[idx][target]
small_model.predict(sample_validation_data, output_type='probability')
sample_validation_data[1]
small_model.show(view="Tree")
small_model.predict(sample_validation_data[1])
print small_model.evaluate(train_data)['accuracy']
print decision_tree_model.evaluate(train_data)['accuracy']
print small_model.evaluate(validation_data)['accuracy']
print decision_tree_model.evaluate(validation_data)['accuracy']
big_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 10)
print big_model.evaluate(train_data)['accuracy']
print big_model.evaluate(validation_data)['accuracy']
predictions = decision_tree_model.predict(validation_data)
false_positive = 0
false_negative = 0
for idx, row in enumerate(validation_data):
if predictions[idx] == 1 and row[target] == -1:
false_positive += 1
if predictions[idx] == -1 and row[target] == 1:
false_negative += 1
print false_positive, false_negative
print false_negative
false_negative * 10000 + false_positive * 20000
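# Cross-check (illustrative addition): the accuracy implied by these error counts should
# agree with decision_tree_model.evaluate(validation_data)['accuracy'] computed earlier.
implied_accuracy = 1.0 - float(false_positive + false_negative) / len(validation_data)
print implied_accuracy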
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Fit the best model
Step3: A better way. Use a model_selection tool
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from sklearn import __version__ as sklearn_version
print('Sklearn version:', sklearn_version)
from sklearn import datasets
all_data = datasets.california_housing.fetch_california_housing()
print(all_data.DESCR)
# Randomize, separate train & test and normalize
from sklearn.utils import shuffle
X, y = shuffle(all_data.data, all_data.target, random_state=0)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
# Normalize the data
from sklearn.preprocessing import Normalizer
normal = Normalizer()
X_train = normal.fit_transform(X_train)
X_test = normal.transform(X_test)
# Create a basic decision tree
from sklearn import tree
from sklearn.metrics import mean_absolute_error
clf = tree.DecisionTreeRegressor()
clf.fit(X_train, y_train)
mean_absolute_error(y_test, clf.predict(X_test))
# Define a function to evaluate the error over models with different max_depth
def acc(md):
'''
Calculate error of a tree with a specific max_depth
Parameters:
    md: max depth of the tree
Returns:
Mean absolute error of the fitted tree
'''
clf = tree.DecisionTreeRegressor(max_depth=md)
clf.fit(X_train, y_train)
return mean_absolute_error(y_test, clf.predict(X_test))
# Evaluate from max_depth=1 to max_depth=30
index = []
accuracy = []
for i in range(1,30):
accuracy_step = acc(i)
index += [i]
accuracy += [accuracy_step]
print('Max depth - Error:', i, accuracy_step)
# Plot the error vs max_depth
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(index,accuracy)
clf = tree.DecisionTreeRegressor(max_depth=9)
clf.fit(X_train, y_train)
mean_absolute_error(y_test, clf.predict(X_test))
# Plot the sctterplot
plt.scatter(y_test, clf.predict(X_test))
import numpy as np
from time import time
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
# Define estimator. No parameters
clf = tree.DecisionTreeRegressor()
# specify parameters and distributions to sample from
param_dist = {"max_depth": randint(3, 20),
"min_samples_leaf": randint(5, 50)}
# Define randomized search
n_iter_search = 30
random_search = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=n_iter_search)
# Run the randomized search
start = time()
random_search.fit(X_train, y_train)
print("RandomizedSearchCV took %.2f seconds for %d candidates parameter settings." % ((time() - start), n_iter_search))
# Utility function to report best scores
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidate = np.argmax(results['rank_test_score'] == i)
print("Model with rank: ", i)
print("Mean validation score: ", results['mean_test_score'][candidate])
print("Parameters: ", results['params'][candidate], "\n")
report(random_search.cv_results_)
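# The fitted search also exposes the winning configuration directly -- a short sketch,
# equivalent to the rank-1 entry printed by report() above.
print("Best parameters:", random_search.best_params_)
print("Best CV score:", random_search.best_score_)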
# Build the tree with the optimal parametrization
clf = tree.DecisionTreeRegressor(max_depth=15, min_samples_leaf=28)
clf.fit(X_train, y_train)
print(mean_absolute_error(y_test, clf.predict(X_test)))
plt.scatter(y_test, clf.predict(X_test))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running
Step2: async-apply
Step3: We can see that we created a new task and it's pending. Note that the API is async, meaning it won't wait until the task finishes.
Step4: result
Step5: revoke
Step6: rate-limit
Step7: timeout
Step8: Worker API
Step9: workers
Step10: pool/shutdown
Step11: pool/restart
Step12: pool/grow
Step13: pool/shrink
Step14: pool/autoscale
Step15: queue/add-consumer
Step16: queue/cancel-consumer
Step17: Queue API
| <ASSISTANT_TASK:>
Python Code:
from celery import Celery
from time import sleep
celery = Celery()
celery.config_from_object({
'BROKER_URL': 'amqp://localhost',
'CELERY_RESULT_BACKEND': 'amqp://',
'CELERYD_POOL_RESTARTS': True, # Required for /worker/pool/restart API
})
@celery.task
def add(x, y):
return x + y
@celery.task
def sub(x, y):
sleep(30) # Simulate work
return x - y
# Done once for the whole docs
import requests, json
api_root = 'http://localhost:5555/api'
task_api = '{}/task'.format(api_root)
args = {'args': [1, 2]}
url = '{}/async-apply/tasks.add'.format(task_api)
print(url)
resp = requests.post(url, data=json.dumps(args))
reply = resp.json()
reply
args = {'args': [1, 2]}
url = '{}/apply/tasks.add'.format(task_api)
print(url)
resp = requests.post(url, data=json.dumps(args))
reply = resp.json()
reply
url = '{}/result/{}'.format(task_api, reply['task-id'])
print(url)
resp = requests.get(url)
resp.json()
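# The task may still be running when /result is first called; a minimal polling sketch.
# Assumption: the result payload includes a 'state' field (e.g. PENDING/SUCCESS/FAILURE).
state = None
while state not in ('SUCCESS', 'FAILURE'):
    sleep(1)
    resp = requests.get('{}/result/{}'.format(task_api, reply['task-id']))
    state = resp.json().get('state')
print(resp.json())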
# Run a task
args = {'args': [1, 2]}
resp = requests.post('{}/async-apply/tasks.sub'.format(task_api), data=json.dumps(args))
reply = resp.json()
# Now revoke it
url = '{}/revoke/{}'.format(task_api, reply['task-id'])
print(url)
resp = requests.post(url, data='terminate=True')
resp.json()
worker = 'miki-manjaro' # You'll need to get the worker name from the worker API (see below)
url = '{}/rate-limit/{}'.format(task_api, worker)
print(url)
resp = requests.post(url, params={'taskname': 'tasks.add', 'ratelimit': '10'})
resp.json()
url = '{}/timeout/{}'.format(task_api, worker)
print(url)
resp = requests.post(url, params={'taskname': 'tasks.add', 'hard': '3.14', 'soft': '3'}) # You can omit soft or hard
resp.json()
# Once for the documentation
worker_api = '{}/worker'.format(api_root)
url = '{}/workers'.format(api_root) # Only one not under /worker
print(url)
resp = requests.get(url)
workers = resp.json()
workers
worker = list(workers.keys())[0]  # dict keys aren't indexable in Python 3
url = '{}/shutdown/{}'.format(worker_api, worker)
print(url)
resp = requests.post(url)
resp.json()
pool_api = '{}/pool'.format(worker_api)
url = '{}/restart/{}'.format(pool_api, worker)
print(url)
resp = requests.post(url)
resp.json()
url = '{}/grow/{}'.format(pool_api, worker)
print(url)
resp = requests.post(url, params={'n': '10'})
resp.json()
url = '{}/shrink/{}'.format(pool_api, worker)
print(url)
resp = requests.post(url, params={'n': '3'})
resp.json()
url = '{}/autoscale/{}'.format(pool_api, worker)
print(url)
resp = requests.post(url, params={'min': '3', 'max': '10'})
resp.json()
queue_api = '{}/queue'.format(worker_api)
url = '{}/add-consumer/{}'.format(queue_api, worker)
print(url)
resp = requests.post(url, params={'queue': 'jokes'})
resp.json()
url = '{}/cancel-consumer/{}'.format(queue_api, worker)
print(url)
resp = requests.post(url, params={'queue': 'jokes'})
resp.json()
url = '{}/queues/length'.format(api_root)
print(url)
resp = requests.get(url)
resp.json()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Downsampling
Step3: Weighted classes and output bias
Step4: We'll take all of the fraud examples from this dataset, and a subset of non-fraud. Then we'll shuffle and combine and look at the number of examples we have for each class.
Step5: Now let's try with weighted classes and add a bias initializer to our output layer. First, calculate the class weights.
Step6: Reframing
Step7: First, let's look at the cluster prediction results for an "average" example from our dataset.
Step8: Here, it's fairly obvious that this datapoint should be put in cluster 1, given the short distance from that cluster.
Step9: Let's compare this with a cluster prediction for an outlier baby weight.
Step10: Here there's a high distance from each cluster, which we can use to conclude that this might be an anomaly.
| <ASSISTANT_TASK:>
Python Code:
import itertools
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import xgboost as xgb
from tensorflow import keras
from tensorflow.keras import Sequential
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from sklearn.utils import shuffle
from google.cloud import bigquery
# Download the data and preview
!gsutil cp gs://ml-design-patterns/fraud_data_kaggle.csv .
fraud_data = pd.read_csv('fraud_data_kaggle.csv')
fraud_data.head()
# Drop a few columns we won't use for this demo
fraud_data = fraud_data.drop(columns=['nameOrig', 'nameDest', 'isFlaggedFraud'])
fraud_data = pd.get_dummies(fraud_data)
# Split into separate dataframes
fraud = fraud_data[fraud_data['isFraud'] == 1]
not_fraud = fraud_data[fraud_data['isFraud'] == 0]
# Take a random sample of non-fraud data
# The .005 frac will give us around an 80/20 split of not-fraud/fraud samples
# You can experiment with this value
not_fraud_sample = not_fraud.sample(random_state=2, frac=.005)
# Put the data back together and shuffle
fraud_data = pd.concat([not_fraud_sample,fraud])
fraud_data = shuffle(fraud_data, random_state=2)
# Look at our data balance after downsampling
fraud_data['isFraud'].value_counts()
train_test_split = int(len(fraud_data) * .8)
train_data = fraud_data[:train_test_split]
test_data = fraud_data[train_test_split:]
train_labels = train_data.pop('isFraud')
test_labels = test_data.pop('isFraud')
model = xgb.XGBRegressor(
objective='reg:linear'
)
model.fit(train_data.values, train_labels)
# Get some test predictions
y_pred = model.predict(test_data.values)
# To build a confusion matrix using the scikit utility, we'll need the values as ints
y_pred_formatted = []
for i in y_pred:
y_pred_formatted.append(int(round(i)))
cm = confusion_matrix(test_labels.values, y_pred_formatted)
print(cm)
# This is from the sklearn docs
# https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = np.round(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], 3)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# With downsampling, our model's accuracy on fraud is almost as good as non-fraud examples
# You can compare this by training a model on the full dataset if you'd like (it'll take a long time to train given the size)
classes = ['not fraud', 'fraud']
plot_confusion_matrix(cm, classes, normalize=True)
# To access BigQuery, you'll need to authenticate to your Cloud account
from google.colab import auth
auth.authenticate_user()
%%bigquery fraud_df --project sara-cloud-ml
SELECT
*
FROM
`bigquery-public-data.ml_datasets.ulb_fraud_detection`
WHERE Class = 1
# This query will take a a minute to run
%%bigquery nonfraud_df --project sara-cloud-ml
SELECT
*
FROM
`bigquery-public-data.ml_datasets.ulb_fraud_detection`
WHERE Class = 0
AND RAND() < 0.05
bq_fraud_data = pd.concat([fraud_df, nonfraud_df])
bq_fraud_data.sort_values(by=['Time'])
# bq_fraud_data = shuffle(bq_fraud_data, random_state=22)
# Scale time and amount values
time_scaler = MinMaxScaler()
amt_scaler = MinMaxScaler()
bq_fraud_data['Time'] = time_scaler.fit_transform(bq_fraud_data['Time'].values.reshape(-1,1))
bq_fraud_data['Amount'] = amt_scaler.fit_transform(bq_fraud_data['Amount'].values.reshape(-1,1))
# See data balance
bq_fraud_data['Class'].value_counts()
train_test_split = int(len(bq_fraud_data) * .8)
train_data = bq_fraud_data[:train_test_split]
test_data = bq_fraud_data[train_test_split:]
train_labels = train_data.pop('Class')
test_labels = test_data.pop('Class')
# Create a tf dataset
train_dataset = tf.data.Dataset.from_tensor_slices((train_data.values, train_labels))
train_dataset = train_dataset.shuffle(len(train_data)).batch(1024)
test_dataset = tf.data.Dataset.from_tensor_slices((test_data.values, test_labels))
test_dataset = test_dataset.shuffle(len(test_data)).batch(1)
# Get number of examples for each class from the training set
num_minority = train_labels.value_counts()[1]
num_majority = train_labels.value_counts()[0]
minority_class_weight = 1 / (num_minority / len(train_data)) / 2
majority_class_weight = 1 / (num_majority / len(train_data)) / 2
# Pass the weights to Keras in a dict
# The key is the index of each class
keras_class_weights = {0: majority_class_weight, 1: minority_class_weight}
print(keras_class_weights)
# Calculate output bias
output_bias = math.log(num_minority / num_majority)
print(output_bias)
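# Optional cross-check with scikit-learn's 'balanced' heuristic -- a sketch; 'balanced'
# computes n_samples / (n_classes * class_count), the same formula as the manual
# calculation above.
from sklearn.utils.class_weight import compute_class_weight
sk_weights = compute_class_weight('balanced', classes=np.array([0, 1]), y=train_labels.values)
print({0: sk_weights[0], 1: sk_weights[1]})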
fraud_model = keras.Sequential([
keras.layers.Dense(16, input_shape=(len(train_data.iloc[0]),), activation='relu'),
keras.layers.Dropout(0.25),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dense(1, activation='sigmoid', bias_initializer=tf.keras.initializers.Constant(output_bias))
])
metrics = [
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='roc_auc'),
]
fraud_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=metrics)
fraud_model.fit(train_dataset, validation_data=test_dataset, epochs=10, class_weight=keras_class_weights)
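# Quick look at the weighted model on the held-out split -- a sketch; metrics_names gives
# the order of the values returned by evaluate().
results = fraud_model.evaluate(test_dataset, verbose=0)
for name, value in zip(fraud_model.metrics_names, results):
    print(name, ':', value)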
# This will take about a minute to run
%%bigquery --project sara-cloud-ml
CREATE OR REPLACE MODEL
`sara-cloud-ml.natality.baby_weight_clusters` OPTIONS(model_type='kmeans',
num_clusters=4) AS
SELECT
weight_pounds,
mother_age,
gestation_weeks
FROM
`bigquery-public-data.samples.natality`
LIMIT 10000
%%bigquery average_pred --project sara-cloud-ml
SELECT
*
FROM
ML.PREDICT (MODEL `sara-cloud-ml.natality.baby_weight_clusters`,
(
SELECT
7.0 as weight_pounds,
28 as mother_age,
40 as gestation_weeks
)
)
average_pred
# Print the resulting cluster distances
average_pred['NEAREST_CENTROIDS_DISTANCE'].iloc[0]
%%bigquery outlier_pred --project sara-cloud-ml
SELECT
*
FROM
ML.PREDICT (MODEL `sara-cloud-ml.natality.baby_weight_clusters`,
(
SELECT
3.0 as weight_pounds,
20 as mother_age,
27 as gestation_weeks
)
)
outlier_pred
outlier_pred['NEAREST_CENTROIDS_DISTANCE'].iloc[0]
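# Turning the distances into a simple anomaly score -- a sketch, assuming each row's
# NEAREST_CENTROIDS_DISTANCE is a list of {'CENTROID_ID', 'DISTANCE'} records as shown above.
def min_centroid_distance(row):
    return min(d['DISTANCE'] for d in row)

print('average example:', min_centroid_distance(average_pred['NEAREST_CENTROIDS_DISTANCE'].iloc[0]))
print('outlier example:', min_centroid_distance(outlier_pred['NEAREST_CENTROIDS_DISTANCE'].iloc[0]))
# A practical rule: flag a birth record as a potential anomaly when its nearest-centroid
# distance exceeds a threshold chosen from the training distribution (e.g. a high percentile).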
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) What genres are most represented in the search results?
Step2: ANSWER
Step3: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Step4: ANSWER
Step5: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Step6: Will the world explode if a musician swears?
Step7: QUESTION
Step8: 7) Since we're talking about Lils, what about Biggies?
Step9: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
| <ASSISTANT_TASK:>
Python Code:
# Re-create the search request this notebook relies on (the original call isn't shown here);
# limit=50 matches the "top 50 Lils" analysis used later.
import requests
response = requests.get("https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50")
data = response.json()
data.keys()
artist_data = data['artists']
artist_data.keys()
lil_names = artist_data['items']
#lil_names = list of dictionaries = list of artist name, popularity, type, genres etc
for names in lil_names:
if not names['genres']:
print(names['name'], names['popularity'], "there are no genres listed")
else:
print(names['name'], names['popularity'], names["genres"])
#Join all the lists of genres in the dictionary and then count the number of elements in it
#ANSWER:
all_genres = []
artists = lil_names  # the pasted class answer refers to the artist list as `artists`
for artist in artists:
print("All genres we've heard of:", all_genres)
#The conditional: None
print("Current artist has:", artist['genres'])
all_genres = all_genres + artist['genres']
#genre_list = ", ".join(artist['genres'])
#print(artist['name'], ":", genre_list)
all_genres.count('dirty south rap')
all_genres.count('crunk')
#This is bad because dirty south rap shows up four times. We need a unique list of genres
for genre in all_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times")
#To remove duplicates. You need to turn a list into a set.
unique_genres = set(all_genres)
for genre in unique_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times")
#There is a library that comes with Python called Collections
#Inside of it is a magic thing called Counter.
import collections # will import the whole collections
#you can also type
from collections import Counter
#all_genres is a list of genre strings with duplicates
#Counter will count all the genres for us
counts = collections.Counter(all_genres)
counts.most_common(4) #will give you the four most common genres
#HOW TO AUTOMATE GETTING ALL THE RESULTS
response = requests.get("https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=10")
small_data = response.json()
small_data['artists']
print(len(small_data['artists']['items'])) #We only get 10 artists
print(data['artists']['total'])
import math
page_count = math.ceil(4502/50)
#math.ceil rounds up
#math.ceil(page_count)
page_count
#First page artists 1-50:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50
#Second page artists 51-100:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=50
#Third page artists 101-150:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=100
#Fourth page artists 151-200:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=150
for page in [0, 1, 2, 3, 4]:
    offset = page * 50  # page 0 -> offset 0, page 1 -> offset 50, and so on
print("We are on page", page, "with an offset of", offset)
for page in range(91):
#Get a page
offset = page * 50
print("We are on page", page, "with an offset of", offset)
#Make the request with a changed offset ?offset [offset]
#data = response.json()
#add all our new artists to our list of existing artists
#all_artists = all_artists + data['artists]['items]
print("Successfully retrieved", len(all_artists), "artists")
for popularity in lil_names:
print(popularity['name'], popularity['popularity'], popularity['followers'])
print("Lil Yachty, Lil Uzi Vert, Lil Jon have the highest popularity ratings besides Lil Wayne, and they do not have the largest number of followers.")
for kim in lil_names:
print(kim['name'], kim['id'])
response = requests.get("https://api.spotify.com/v1/artists/5tth2a3v0sWwV1C7bApBdX/")
kim_data = response.json()
#print(kim_data)
kim_followers = kim_data['followers']
total_kim_followers = kim_followers['total']
#print(total_kim_followers)
for artists in lil_names:
if artists["followers"]["total"] > total_kim_followers:
print(artists['name'], artists['popularity'])
#ANSWER:
for artist in artists:
#print("Looking at", artist['name'])
if artist['name'] == "Lil' Kim":
print("Found Lil Kim")
print(artist['popularity'])
else:
pass
#print("Not Lil Kim")
lil_kim_popularity = 62
more_popular_than_lil_kim = []
for artist in artists:
if artist['popularity'] > lil_kim_popularity:
print(artist['name'], "is more popular with a score of", artist['popularity'])
more_popular_than_lil_kim.append(artist['name'])
else:
print(artist['name'], "is less popular with a score of", artist['popularity'])
print("#### More popular than Lil Kim ####"):
print(artist_name)
more_popular_string = ", ".join(more_popular_than_lil_kim)
print("Artists mroe popular than Lil Kim are:", more_popular_string)
#Let's pick Lil Wayne and Lil Mama because I don't know who most of these people are
wayne_id = "55Aa2cqylxrFIXC767Z865"
response = requests.get("https://api.spotify.com/v1/artists/" + wayne_id + "/top-tracks?country=US")
wayne_data = response.json()
top_wayne_tracks = wayne_data['tracks']
for track in top_wayne_tracks:
print(track["name"])
mama_id = "5qK5bOC6wLtuLhG5KvU17c"
response = requests.get("https://api.spotify.com/v1/artists/" + mama_id + "/top-tracks?country=US")
mama_data = response.json()
top_mama_tracks = mama_data['tracks']
for track in top_mama_tracks:
print(track["name"])
wayne_explicit_count = 0
wayne_exp_popularity_count = 0
wayne_ok_count = 0
wayne_ok_popularity_count = 0
wayne_explicit_len = 0
wayne_ok_len = 0
for track in top_wayne_tracks:
print(track['name'], track['explicit'], track['popularity'], track["duration_ms"])
    if track['explicit']:
wayne_explicit_count = wayne_explicit_count + 1
wayne_exp_popularity_count = wayne_exp_popularity_count + int(track['popularity'])
wayne_avg_pop = wayne_exp_popularity_count / wayne_explicit_count
wayne_explicit_len = wayne_explicit_len + int(track["duration_ms"])
if not track['explicit']:
wayne_ok_count = wayne_ok_count + 1
wayne_ok_popularity_count = wayne_ok_popularity_count + track['popularity']
wayne_ok_avg_pop = wayne_ok_popularity_count / wayne_ok_count
wayne_ok_len = wayne_ok_len + track["duration_ms"]
if wayne_explicit_count > 0:
print("The average popularity for Lil Wayne's explicit songs is", wayne_avg_pop)
#1 minute is 60000 milliseconds, who knew?
wayne_explicit_mins = int(wayne_explicit_len) / 60000
print("Lil Wayne has", wayne_explicit_mins, "minutes of explicit songs")
if wayne_ok_count > 0:
print("The average popularity for Lil Wayne's non-explicit songs is", wayne_ok_avg_pop)
wayne_ok_mins = int(wayne_ok_len) / 60000
print("Lil Wayne has", wayne_ok_mins, "minutes of explicit songs")
mama_exp_count = 0
mama_exp_pop_count = 0
mama_ok_count = 0
mama_ok_pop_count = 0
mama_exp_len = 0
mama_ok_len = 0
for track in top_mama_tracks:
print(track['name'], track['explicit'], track['popularity'], track["duration_ms"])
    if track['explicit']:
mama_exp_count = mama_exp_count + 1
mama_exp_pop_count = mama_exp_pop_count + int(track['popularity'])
mama_avg_pop = int(mama_exp_pop_count) / int(mama_exp_count)
mama_exp_len = mama_exp_len + int(track["duration_ms"])
if not track['explicit']:
mama_ok_count = mama_ok_count + 1
mama_ok_pop_count = mama_ok_pop_count + int(track['popularity'])
mama_ok_avg_pop = int(mama_ok_pop_count) / int(mama_ok_count)
mama_ok_len = mama_ok_len + int(track["duration_ms"])
if mama_exp_count > 0:
#1 minute is 60000 milliseconds, who knew?
print("The average popularity for Lil Mama's xplicit songs is", mama_avg_pop)
mama_exp_mins = int(mama_exp_len) / 60000
print("Lil Mama has", mama_exp_mins, "minutes of explicit songs")
if mama_ok_count > 0:
print("The average popularity for Lil Mama's non-explicit songs is", mama_ok_avg_pop)
mama_ok_mins = int(mama_ok_len) / 60000
print("Lil Mama has", mama_ok_mins, "minutes of non-explicit songs")
#We need to bypass the limit. And find out
response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist')
biggie_data = response.json()
biggie_artists = biggie_data['artists']
biggie_names = biggie_artists['items']
biggie_count= 0
for name in biggie_names:
print(name['name'])
biggie_count = biggie_count + 1
print("There are a total number of", biggie_count, "biggie artists")
response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist')
lil_data = response.json()
lil_x_artists = lil_data['artists']
lil_x_names = lil_x_artists['items']
lil_x_count= 0
for name in lil_x_names:
print(name['name'])
    lil_x_count = lil_x_count + 1
print("There are a total number of", lil_x_count, "lil artists")
response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
b_data = response.json()
b_artists = b_data['artists']
b_names = b_artists['items']
b_pop_count = 0
b_number = 0
for names in b_names:
print(names['name'], names['popularity'])
b_number = b_number + 1
b_pop_count = b_pop_count + int(names['popularity'])
avg_b_pop = b_pop_count / int(b_number)
print("The Biggies' average popularity is", avg_b_pop)
lil_pop_count = 0
lil_number = 0
for names in lil_names:
print(names['name'], names['popularity'])
lil_number = lil_number + 1
lil_pop_count = lil_pop_count + int(names['popularity'])
avg_lil_pop = lil_pop_count / int(lil_number)
print("The Lils average popularity is", avg_lil_pop)
print("The Lils are far more popular")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data files
Step2: The files are all in Unicode, to simplify we will turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.
Step3: To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English I added the reverse flag to reverse the pairs.
Step4: Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (that includes ending punctuation) and we're filtering to sentences that translate to the form "I am" or "He is" etc. (accounting for apostrophes replaced earlier).
Step5: The full process for preparing the data is
Step6: The Encoder
Step7: Simple Decoder
Step8: Attention Decoder
Step9: Note
Step10: Attention
Step11: Plotting results
Step12: Evaluation
Step13: Training and Evaluating
Step14: Visualizing Attention
Step15: For a better viewing experience we will do the extra work of adding axes and labels
| <ASSISTANT_TASK:>
Python Code:
import unicodedata, string, re, random, time, math, torch, torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
import keras, numpy as np
from keras.preprocessing import sequence
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
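# Quick sanity check of the normalization (output is illustrative): accents are stripped,
# everything is lowercased, and punctuation is separated out.
print(normalizeString("J'ai déjà mangé !"))  # -> roughly: j ai deja mange !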
def readLangs(lang1, lang2, pairs_file, reverse=False):
print("Reading lines...")
# Read the file and split into lines
lines = open('data/%s' % (pairs_file)).read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
MAX_LENGTH = 10
eng_prefixes = (
"i am ", "i m ",
"he is", "he s ",
"she is", "she s",
"you are", "you re ",
"we are", "we re ",
"they are", "they re "
)
def filterPair(p):
return len(p[0].split(' ')) < MAX_LENGTH and \
len(p[1].split(' ')) < MAX_LENGTH and \
p[1].startswith(eng_prefixes)
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
def prepareData(lang1, lang2, pairs_file, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, pairs_file, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepareData('eng', 'fra', 'fra.txt', True)
print(random.choice(pairs))
def indexesFromSentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]+[EOS_token]
def variableFromSentence(lang, sentence):
indexes = indexesFromSentence(lang, sentence)
return Variable(torch.LongTensor(indexes).unsqueeze(0))
def variablesFromPair(pair):
input_variable = variableFromSentence(input_lang, pair[0])
target_variable = variableFromSentence(output_lang, pair[1])
return (input_variable, target_variable)
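# Example of the sentence -> index pipeline on one training pair (purely illustrative).
sample_pair = random.choice(pairs)
print(sample_pair[0], '->', indexesFromSentence(input_lang, sample_pair[0]))
print(sample_pair[1], '->', indexesFromSentence(output_lang, sample_pair[1]))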
def index_and_pad(lang, dat):
return sequence.pad_sequences([indexesFromSentence(lang, s)
for s in dat], padding='post').astype(np.int64)
fra, eng = list(zip(*pairs))
fra = index_and_pad(input_lang, fra)
eng = index_and_pad(output_lang, eng)
def get_batch(x, y, batch_size=16):
idxs = np.random.permutation(len(x))[:batch_size]
return x[idxs], y[idxs]
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, num_layers=n_layers)
def forward(self, input, hidden):
output, hidden = self.gru(self.embedding(input), hidden)
return output, hidden
# TODO: other inits
def initHidden(self, batch_size):
return Variable(torch.zeros(1, batch_size, self.hidden_size))
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1):
super(DecoderRNN, self).__init__()
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, num_layers=n_layers)
# TODO use transpose of embedding
self.out = nn.Linear(hidden_size, output_size)
self.sm = nn.LogSoftmax()
def forward(self, input, hidden):
emb = self.embedding(input).unsqueeze(1)
# NB: Removed relu
res, hidden = self.gru(emb, hidden)
output = self.sm(self.out(res[:,0]))
return output, hidden
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout_p = dropout_p
self.max_length = max_length
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_output, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1)))
attn_applied = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
for i in range(self.n_layers):
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = F.log_softmax(self.out(output[0]))
return output, hidden, attn_weights
def initHidden(self):
return Variable(torch.zeros(1, 1, self.hidden_size))
def train(input_variable, target_variable, encoder, decoder,
encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
batch_size, input_length = input_variable.size()
target_length = target_variable.size()[1]
encoder_hidden = encoder.initHidden(batch_size).cuda()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
loss = 0
encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)
decoder_input = Variable(torch.LongTensor([SOS_token]*batch_size)).cuda()
decoder_hidden = encoder_hidden
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
#, encoder_output, encoder_outputs)
targ = target_variable[:, di]
# print(decoder_output.size(), targ.size(), target_variable.size())
loss += criterion(decoder_output, targ)
decoder_input = targ
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0] / target_length
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
def trainEpochs(encoder, decoder, n_epochs, print_every=1000, plot_every=100,
learning_rate=0.01):
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
encoder_optimizer = optim.RMSprop(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.RMSprop(decoder.parameters(), lr=learning_rate)
criterion = nn.NLLLoss().cuda()
for epoch in range(1, n_epochs + 1):
training_batch = get_batch(fra, eng)
input_variable = Variable(torch.LongTensor(training_batch[0])).cuda()
target_variable = Variable(torch.LongTensor(training_batch[1])).cuda()
loss = train(input_variable, target_variable, encoder, decoder, encoder_optimizer,
decoder_optimizer, criterion)
print_loss_total += loss
plot_loss_total += loss
if epoch % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print('%s (%d %d%%) %.4f' % (timeSince(start, epoch / n_epochs), epoch,
epoch / n_epochs * 100, print_loss_avg))
if epoch % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
showPlot(plot_losses)
# TODO: Make this change during training
teacher_forcing_ratio = 0.5
def attn_train(input_variable, target_variable, encoder, decoder, encoder_optimizer,
decoder_optimizer, criterion, max_length=MAX_LENGTH):
encoder_hidden = encoder.initHidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_variable.size()[0]
target_length = target_variable.size()[0]
encoder_outputs = Variable(torch.zeros(max_length, encoder.hidden_size))
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_variable[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0][0]
decoder_input = Variable(torch.LongTensor([[SOS_token]]))
decoder_hidden = encoder_hidden
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
if use_teacher_forcing:
# Teacher forcing: Feed the target as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
loss += criterion(decoder_output[0], target_variable[di])
decoder_input = target_variable[di] # Teacher forcing
else:
# Without teacher forcing: use its own predictions as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
decoder_input = Variable(torch.LongTensor([[ni]]))
loss += criterion(decoder_output[0], target_variable[di])
if ni == EOS_token:
break
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0] / target_length
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
%matplotlib inline
def showPlot(points):
plt.figure()
fig, ax = plt.subplots()
loc = ticker.MultipleLocator(base=0.2) # this locator puts ticks at regular intervals
ax.yaxis.set_major_locator(loc)
plt.plot(points)
def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):
input_variable = variableFromSentence(input_lang, sentence).cuda()
input_length = input_variable.size()[0]
encoder_hidden = encoder.initHidden(1).cuda()
encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)
decoder_input = Variable(torch.LongTensor([SOS_token])).cuda()
decoder_hidden = encoder_hidden
decoded_words = []
# decoder_attentions = torch.zeros(max_length, max_length)
for di in range(max_length):
# decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
#, encoder_output, encoder_outputs)
# decoder_attentions[di] = decoder_attention.data
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[ni])
decoder_input = Variable(torch.LongTensor([ni])).cuda()
return decoded_words,0#, decoder_attentions[:di+1]
def evaluateRandomly(encoder, decoder, n=10):
for i in range(n):
pair = random.choice(pairs)
print('>', pair[0])
print('=', pair[1])
output_words, attentions = evaluate(encoder, decoder, pair[0])
output_sentence = ' '.join(output_words)
print('<', output_sentence)
print('')
#TODO:
# - Test set
# - random teacher forcing
# - attention
# - multi layers
# - bidirectional encoding
hidden_size = 256
encoder1 = EncoderRNN(input_lang.n_words, hidden_size).cuda()
attn_decoder1 = DecoderRNN(hidden_size, output_lang.n_words).cuda()
trainEpochs(encoder1, attn_decoder1, 15000, print_every=500, learning_rate=0.005)
evaluateRandomly(encoder1, attn_decoder1)
# NB: this cell assumes the attention path in evaluate() has been re-enabled (see the TODO
# list above); as written, evaluate() returns 0 in place of the attention weights.
output_words, attentions = evaluate(encoder1, attn_decoder1, "je suis trop froid .")
plt.matshow(attentions.numpy())
def showAttention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') + ['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def evaluateAndShowAttention(input_sentence):
output_words, attentions = evaluate(encoder1, attn_decoder1, input_sentence)
print('input =', input_sentence)
print('output =', ' '.join(output_words))
showAttention(input_sentence, output_words, attentions)
evaluateAndShowAttention("elle a cinq ans de moins que moi .")
evaluateAndShowAttention("elle est trop petit .")
evaluateAndShowAttention("je ne crains pas de mourir .")
evaluateAndShowAttention("c est un jeune directeur plein de talent .")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change the following cell as necessary
Step2: Confirm below that the bucket is regional and its region equals to the specified region
Step3: Create BigQuery tables
Step4: Let's create a table with 1 million examples.
Step5: Make the validation dataset be 1/10 the size of the training dataset.
Step6: Export the tables as CSV files
Step7: Make code compatible with AI Platform Training Service
Step8: Move code into a python package
Step9: Paste existing code into model.py
Step10: Modify code to read data from and write checkpoint files to GCS
Step11: Run trainer module package locally
Step12: Run your training package on Cloud AI Platform
Step13: (Optional) Run your training package using Docker container
Step14: Remark
| <ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
from google.cloud import bigquery
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
REGION = "<YOUR REGION>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = "2.1"
%%bash
gsutil ls -Lb gs://$BUCKET | grep "gs://\|Location"
echo $REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
!gsutil ls gs://$BUCKET/taxifare/data
ls ./taxifare/trainer/
%%writefile ./taxifare/trainer/model.py
import datetime
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
#TODO 1a
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
#TODO 1b
batch_size = hparams['batch_size']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
nnsize = hparams['nnsize']
eval_data_path = hparams['eval_data_path']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'export/savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path,
save_weights_only=True,
verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(model, model_export_path)
return history
%%writefile taxifare/trainer/task.py
import argparse
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
nargs="+",
type=int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True
)
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
args = parser.parse_args()
hparams = args.__dict__
hparams.pop("job-dir", None)
model.train_and_evaluate(hparams)
%%bash
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
test ${OUTPUT_DIR} && rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=100
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
#TODO 2
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
%%writefile ./taxifare/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
# TODO 3
COPY . /code
WORKDIR /code
ENTRYPOINT ["python3", "-m", "trainer.task"]
!gcloud auth configure-docker
%%bash
PROJECT_DIR=$(cd ./taxifare && pwd)
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
IMAGE_NAME=taxifare_training_container
DOCKERFILE=$PROJECT_DIR/Dockerfile
IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_NAME
docker build $PROJECT_DIR -f $DOCKERFILE -t $IMAGE_URI
docker push $IMAGE_URI
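%%bash
# Optional local smoke test of the container before submitting to AI Platform -- a sketch;
# the image name comes from the cell above, while the mounted test CSVs and the in-container
# paths are assumptions for illustration.
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
IMAGE_URI=gcr.io/$PROJECT_ID/taxifare_training_container
docker run -v $PWD/taxifare/tests/data:/data $IMAGE_URI \
    --eval_data_path "/data/taxi-valid*" \
    --train_data_path "/data/taxi-train*" \
    --output_dir /tmp/taxifare_smoke \
    --batch_size 5 \
    --num_examples_to_train_on 50 \
    --num_evals 1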
%%bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID
REGION="us-central1"
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_container_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=100
NBUCKETS=10
NNSIZE="32 8"
# AI-Platform machines to use for training
MACHINE_TYPE=n1-standard-4
SCALE_TIER=CUSTOM
# GCS paths.
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
IMAGE_NAME=taxifare_training_container
IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_NAME
gcloud beta ai-platform jobs submit training $JOBID \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--master-machine-type=$MACHINE_TYPE \
--scale-tier=$SCALE_TIER \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--nnsize $NNSIZE
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the data
Step2: Fit the models
| <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort
# Denis A. Engemann
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.covariance import ShrunkCovariance, LedoitWolf
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
print(__doc__)
n_samples, n_features, rank = 1000, 50, 10
sigma = 1.
rng = np.random.RandomState(42)
U, _, _ = linalg.svd(rng.randn(n_features, n_features))
X = np.dot(rng.randn(n_samples, rank), U[:, :rank].T)
# Adding homoscedastic noise
X_homo = X + sigma * rng.randn(n_samples, n_features)
# Adding heteroscedastic noise
sigmas = sigma * rng.rand(n_features) + sigma / 2.
X_hetero = X + rng.randn(n_samples, n_features) * sigmas
n_components = np.arange(0, n_features, 5) # options for n_components
def compute_scores(X):
pca = PCA(svd_solver='full')
fa = FactorAnalysis()
pca_scores, fa_scores = [], []
for n in n_components:
pca.n_components = n
fa.n_components = n
pca_scores.append(np.mean(cross_val_score(pca, X)))
fa_scores.append(np.mean(cross_val_score(fa, X)))
return pca_scores, fa_scores
def shrunk_cov_score(X):
shrinkages = np.logspace(-2, 0, 30)
cv = GridSearchCV(ShrunkCovariance(), {'shrinkage': shrinkages})
return np.mean(cross_val_score(cv.fit(X).best_estimator_, X))
def lw_score(X):
return np.mean(cross_val_score(LedoitWolf(), X))
for X, title in [(X_homo, 'Homoscedastic Noise'),
(X_hetero, 'Heteroscedastic Noise')]:
pca_scores, fa_scores = compute_scores(X)
n_components_pca = n_components[np.argmax(pca_scores)]
n_components_fa = n_components[np.argmax(fa_scores)]
pca = PCA(svd_solver='full', n_components='mle')
pca.fit(X)
n_components_pca_mle = pca.n_components_
print("best n_components by PCA CV = %d" % n_components_pca)
print("best n_components by FactorAnalysis CV = %d" % n_components_fa)
print("best n_components by PCA MLE = %d" % n_components_pca_mle)
plt.figure()
plt.plot(n_components, pca_scores, 'b', label='PCA scores')
plt.plot(n_components, fa_scores, 'r', label='FA scores')
plt.axvline(rank, color='g', label='TRUTH: %d' % rank, linestyle='-')
plt.axvline(n_components_pca, color='b',
label='PCA CV: %d' % n_components_pca, linestyle='--')
plt.axvline(n_components_fa, color='r',
label='FactorAnalysis CV: %d' % n_components_fa,
linestyle='--')
plt.axvline(n_components_pca_mle, color='k',
label='PCA MLE: %d' % n_components_pca_mle, linestyle='--')
# compare with other covariance estimators
plt.axhline(shrunk_cov_score(X), color='violet',
label='Shrunk Covariance MLE', linestyle='-.')
plt.axhline(lw_score(X), color='orange',
            label='LedoitWolf MLE', linestyle='-.')
plt.xlabel('nb of components')
plt.ylabel('CV scores')
plt.legend(loc='lower right')
plt.title(title)
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore The Data
Step2: Count Labels
Step3: Top 50 Labels
Step4: Sig/ Labels
Step5: See correlation among labels
Step6: Obtain Baseline With Automated Machine Learning
Step7: Get Embeddings and Repeat
Step8: Compare Transfer Learning vs. Regular Methods
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from random import randint
from matplotlib import pyplot as plt
import re
pd.set_option('max_colwidth', 1000)
df = pd.read_csv('https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv')
df.labels = df.labels.apply(lambda x: eval(x))
df.head()
#remove target leakage from kubernetes which are the bot commands
df['body'] = df.body.apply(lambda x: re.sub('(/sig|/kind|/status/triage/|priority) \S+', '', str(x)))
def count_sig(l):
return(sum(['sig/' in x for x in l]))
from matplotlib.ticker import PercentFormatter
sig_counts = df.labels.apply(lambda x: count_sig(x))
plt.hist(sig_counts, weights=np.ones(len(sig_counts)) / len(sig_counts))
plt.gca().yaxis.set_major_formatter(PercentFormatter(1))
plt.title(f'Distribution of # of sig/ labels for kubernetes/kubernetes\n {len(sig_counts):,} issues pulled from GHArchive.')
plt.show()
from collections import Counter
c = Counter()
for row in df.labels:
c.update(row)
print(f'There are {len(c.keys())} unique labels in kubernetes/kubernetes')
nsig = sum(['sig/' in x for x in list(c.keys())])
print(f"number of sig labels: {nsig}")
c.most_common(50)
len([(k, c[k]) for k in c if c[k] >= 100])
sig_labels = [x for x in list(c.keys()) if 'sig/' in x]
for l in sig_labels:
print(f'{l}: {c[l]}')
min_freq = 30
def contains_sig(l):
if not l:
return False
else:
        # keep only labels that appear on at least min_freq issues
return max(['sig/' in x and c[x] >=min_freq for x in l])
sig_df = df[df.labels.apply(lambda x: contains_sig(x))]
print(f'{sig_df.shape[0]:,} issues have sig/ labels')
sig_labels = [k for k in c.keys() if c[k] >= min_freq and 'sig/' in k]
print(f'{len(sig_labels)} sig labels that have at least {min_freq} issues')
# build an indicator matrix
indicator = []
for l in sig_df.labels.values:
zer = np.zeros(len(sig_labels))
mask = [sig_labels.index(x) for x in l if x in sig_labels]
zer[mask] = 1
indicator.append(zer[None, :])
indicator_matrix = pd.DataFrame(np.concatenate(indicator, axis=0), columns=sig_labels).astype(int)
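# indicator_matrix has one row per issue and one column per frequent sig/ label,
# holding 1 when the issue carries that label and 0 otherwise.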
corr_grid = indicator_matrix.T.dot(indicator_matrix)
for i, x in enumerate(corr_grid):
corr_grid.iloc[i][i:] = 0
import seaborn as sns
import matplotlib.pyplot as plt
#cmap = sns.diverging_palette(220, 10, as_cmap=True)
#normalize correlation grid
for label in corr_grid:
corr_grid.loc[label] = corr_grid.loc[label] / c[label]
plt.figure(figsize=(16, 14))
plt.title('Co-Occurence Matrix')
sns.heatmap(corr_grid, square=True, vmin=0, vmax=.4, mask=corr_grid<=0.05)
def part_assign():
i = randint(1, 10)
if i <=5:
return i
else:
return 6
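# part_assign spreads issues across partitions 1-5 (roughly 10% each, used as
# CV folds) and lumps the remaining ~50% into partition 6, which is used as the
# holdout level (see UserCV(cv_holdout_level=6) below).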
combined_sig_df = pd.concat([sig_df.reset_index(), indicator_matrix.reset_index()], axis=1)
combined_sig_df['part'] = combined_sig_df.repo.apply(lambda x: part_assign())
combined_sig_df.to_hdf('combined_sig_df.hdf', key='df')  # to_hdf requires an explicit key
combined_sig_df = pd.read_hdf('combined_sig_df.hdf')
#! pip install datarobot
import datarobot as dr
from datarobot import UserCV
from fastai.core import parallel
from datarobot import Blueprint
ucv = UserCV(user_partition_col='part', cv_holdout_level=6, seed=123)
dr.Client(token='something-something', endpoint='https://app.datarobot.com/api/v2')
def create_dr_proj(label):
temp_df = combined_sig_df[['title', 'body', 'part', label]]
proj = dr.Project.create(sourcedata=temp_df,
project_name=label,
)
proj.set_target(label,
positive_class=1,
partitioning_method=ucv,
target_type='Binary',
mode=dr.AUTOPILOT_MODE.MANUAL,
worker_count=9,
max_wait=600000)
bps = proj.get_blueprints()
bp = [b for b in bps if 'Nystroem' in str(b)][0]
proj.train(bp, sample_pct=49.8)
proj.unlock_holdout()
return proj
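# create_dr_proj builds one binary-target DataRobot project per sig/ label,
# trains the Nystroem text blueprint on ~50% of the rows, and leaves
# partition 6 untouched as the holdout set.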
proj_list = []
for i, label in enumerate(sig_labels):
try:
print(f'creating project {i}: {label}')
proj = create_dr_proj(label)
proj_list.append(proj)
except:
pass
predictions = []
for proj in proj_list:
print(f'getting predictions for holdout set for {str(proj)}')
label = proj.target.replace('_', '-')
temp_df = combined_sig_df[['title', 'body', 'part', label]]
temp_df = temp_df[temp_df.part == 6]
ds = proj.upload_dataset(temp_df)
m = proj.get_models()[0]
predict_job = m.request_predictions(ds.id)
yhat = predict_job.get_result_when_complete()
predictions.append({label: yhat['positive_probability']})
result = {}
for d in predictions:
result.update(d)
baseline_holdout_predictions_df = pd.DataFrame(result)
baseline_holdout_predictions_df.columns = ['p_'+x for x in baseline_holdout_predictions_df.columns]
assert len(baseline_holdout_predictions_df) == len(combined_sig_df[combined_sig_df.part == 6])
predictions_df = pd.concat([combined_sig_df[combined_sig_df.part == 6].reset_index(drop=True),
baseline_holdout_predictions_df.reset_index(drop=True)], axis=1)
predictions_df['version'] = 'baseline'
predictions_df.to_hdf('prediction_baseline_df.hdf', key='df')
import pandas as pd
from inference import InferenceWrapper, pass_through
import os
import torch
from torch.cuda import empty_cache
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
wrapper = InferenceWrapper(model_path='/ds/lang_model/models_uxgcl1e1/',
model_file_name='trained_model_uxgcl1e1.hdf')
empty_cache()
combined_sig_df = pd.read_hdf('combined_sig_df.hdf')
# text = wrapper.process_df(combined_sig_df)
# text.to_hdf('textlm_df.hdf')
text = pd.read_hdf('textlm_df.hdf')
assert text['text'].isna().sum() == 0
features = []
from tqdm.auto import tqdm
with torch.no_grad():
for t in tqdm(text['text'].values):
feat = wrapper.get_pooled_features(t).cpu()
features.append(feat)
empty_cache()
feat_matrix = torch.cat(features, dim=0).numpy()
feat_matrix = feat_matrix[:, :1600]
feat_df = pd.DataFrame(feat_matrix)
feat_df.columns = ['f_' + str(x) for x in feat_df.columns]
feat_df.to_csv('feat_df.csv', index=False)
feat_df = pd.read_csv('feat_df.csv')
lm_combined_df = pd.concat([combined_sig_df.reset_index(drop=True),
feat_df.reset_index(drop=True)], axis=1)
import datarobot as dr
from datarobot import UserCV
ucv = UserCV(user_partition_col='part', cv_holdout_level=6, seed=123)
dr.Client(token='something', endpoint='https://app.datarobot.com/api/v2')
def create_dr_proj(label):
temp_df = lm_combined_df[['title', 'body', 'part', label] + list(feat_df.columns)]
proj = dr.Project.create(sourcedata=temp_df,
project_name='lm_'+label,
)
proj.set_target(label,
positive_class=1,
partitioning_method=ucv,
target_type='Binary',
mode=dr.AUTOPILOT_MODE.QUICK,
worker_count=9,
max_wait=600000)
proj.unlock_holdout()
return proj
proj_list_lm = []
for i, label in enumerate(sig_labels):
try:
print(f'creating project {i}: lm_{label}')
proj = create_dr_proj(label)
proj_list_lm.append(proj)
except:
pass
import datarobot as dr
from datarobot import UserCV
dr.Client(token='something-something', endpoint='https://app.datarobot.com/api/v2')
def get_metrics(modelobj):
return modelobj.metrics['AUC']['holdout']
projects = [p for p in dr.Project.list() if p.project_name.startswith('lm_')]
'hamel'.replace('am', 'gg')
label = []
category = []
auc = []
for proj in projects:
print(f'getting metrics for {proj.project_name}')
models = [m for m in proj.get_models() if m.sample_pct > 45]
baseline_model = sorted([m for m in models if m.featurelist_name == 'text only'], key=get_metrics, reverse=True)[0]
deep_model = sorted([m for m in models if m.featurelist_name != 'text only'], key=get_metrics, reverse=True)[0]
baseline_auc = get_metrics(baseline_model)
deep_auc = get_metrics(deep_model)
label.extend([proj.project_name.replace('lm_', '')] * 2)
category.extend(['baseline', 'deep'])
auc.extend([baseline_auc, deep_auc])
import pandas as pd
compare_df = pd.DataFrame({'label': label,
'category': category,
'auc': auc})
pivot = compare_df.pivot(index='label', columns='category', values='auc')
pivot['winner'] = pivot.apply(lambda x: 'deep' if x.deep > x.baseline else 'baseline', axis=1)
pivot['abs diff'] = pivot.apply(lambda x: abs(x.deep - x.baseline), axis=1)
pivot['label count'] = [c[x] for x in pivot.index.values]
pivot.sort_values(by=['label count'], ascending=False)
wrapper
len(wrapper.learn.data.vocab.itos)
pivot.to_hdf('pivot_df.hdf', key='df')
import pandas as pd
score_df = pd.read_hdf('score_df.hdf')
score_df.set_index('label', inplace=True)
score_df.columns = ['deep2']
new_pivot = pivot.join(score_df, how='left')[['baseline', 'deep', 'deep2', 'label count']]
def winner(x):
if x.baseline > x.deep and x.baseline > x.deep2:
return 'baseline'
elif x.deep > x.deep2:
return 'deep'
elif x.deep2 > x.deep:
return 'deep2'
new_pivot.dropna(inplace=True)
new_pivot['winner'] = new_pivot.apply(lambda x: winner(x), axis=1)
new_pivot['baseline minus best deep'] = new_pivot.apply(lambda x: x.baseline - max(x.deep, x.deep2), axis=1)
new_pivot['abs diff'] = new_pivot.apply(lambda x: abs(x['baseline minus best deep']), axis=1)
new_pivot.sort_values('label count', ascending=False)
new_pivot.mean()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Normalize features
Step5: Numpy provides a shorthand for computing 2-norms of each column
Step6: To normalize, apply element-wise division
Step7: Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.
Step8: To test the function, run the following
Step9: Implementing Coordinate Descent with normalized features
Step10: Don't forget to normalize features
Step11: We assign some random set of initial weights and inspect the values of ro[i]
Step12: Use predict_output() to make predictions on this data.
Step13: Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula
Step14: QUIZ QUESTION
Step15: QUIZ QUESTION
Step16: So we can say that ro[i] quantifies the significance of the i-th feature
Step17: To test the function, run the following cell
Step18: Cyclical coordinate descent
Step19: Using the following parameters, learn the weights on the sales dataset.
Step20: First create a normalized version of the feature matrix, normalized_simple_feature_matrix
Step21: Then, run your implementation of LASSO coordinate descent
Step22: QUIZ QUESTIONS
Step23: Let us consider the following set of features.
Step24: First, create a normalized feature matrix from the TRAINING data with these features. (Make you store the norms for the normalization, since we'll use them later)
Step25: NEXT, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e7, you will need them later.
Step26: QUIZ QUESTION
Step27: QUIZ QUESTION
Step28: QUIZ QUESTION
Step29: To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then
Step30: Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix
| <ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before using it below
sales['floors'] = sales['floors'].astype(int)
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
X = np.array([[3.,5.,8.],[4.,12.,15.]])
print X
norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]
print norms
print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
normalized_features = feature_matrix / norms
return (normalized_features, norms)
features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))
print features
# should print
# [[ 0.6 0.6 0.6]
# [ 0.8 0.8 0.8]]
print norms
# should print
# [5. 10. 15.]
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
simple_feature_matrix, norms = normalize_features(simple_feature_matrix)
weights = np.array([1., 4., 1.])
prediction = predict_output(simple_feature_matrix, weights)
def get_ro_values(feature_matrix, output, prediction, weights):
ro_values = np.empty(feature_matrix.shape[1], dtype=float)
for i in range(feature_matrix.shape[1]):
ro_values[i] = sum(feature_matrix[:,i] * (output - prediction + weights[i] * feature_matrix[:,i]))
return ro_values
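# ro_values[i] measures how strongly feature i correlates with the residual
# obtained after adding feature i's own current contribution back in.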
ros = get_ro_values(simple_feature_matrix, output, prediction, weights)
ros
print -ros * 2
print ros * 2
print (-ros[1] * 2, -ros[2] * 2), (ros[2] * 2, ros[1] * 2)
def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):
# compute prediction
prediction = predict_output(feature_matrix, weights)
# compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]
ro_i = sum(feature_matrix[:,i] * (output - prediction + weights[i] * feature_matrix[:,i]))
if i == 0: # intercept -- do not regularize
new_weight_i = ro_i
elif ro_i < -l1_penalty/2.:
new_weight_i = ro_i + l1_penalty/2.
elif ro_i > l1_penalty/2.:
new_weight_i = ro_i - l1_penalty/2.
else:
new_weight_i = 0.
return new_weight_i
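# The branches above implement the soft-thresholding update used by LASSO
# coordinate descent on normalized features:
#   w[i] = ro_i + l1_penalty/2   if ro_i < -l1_penalty/2
#   w[i] = 0                     if -l1_penalty/2 <= ro_i <= l1_penalty/2
#   w[i] = ro_i - l1_penalty/2   if ro_i >  l1_penalty/2
# The intercept (i == 0) is deliberately left unregularized.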
# should print 0.425558846691
import math
print lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]),
np.array([1., 1.]), np.array([1., 4.]), 0.1)
def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance):
weights = np.copy(initial_weights)
weight_diffs = np.zeros(len(weights))
converged = False
iteration = 0
abs_tolerance = abs(tolerance)
while( not converged and iteration < 10000 ):
for i in range(weights.size):
old_weights_i = weights[i]
weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
weight_diffs[i] = abs(old_weights_i - weights[i])
converged = max(weight_diffs) < abs_tolerance
iteration = iteration + 1
print converged
print iteration
return weights
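# Convergence criterion: after each full sweep over the coordinates, stop once
# the largest absolute change made to any single weight falls below tolerance.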
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
initial_weights = np.zeros(3)
l1_penalty = 1e7
tolerance = 1.0
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features
weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
predictions = predict_output(normalized_simple_feature_matrix, weights)
residuals = predictions - sales['price']
rss = sum(pow(residuals,2))
print rss
weights_mask = np.array(weights[1::] == 0.)
simple_features_np = np.array(simple_features)
simple_features_np[weights_mask]
train_data,test_data = sales.random_split(.8,seed=0)
all_features = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated']
(all_features_matrix, all_output_array) = get_numpy_data(train_data, all_features, my_output)
(all_normalized_features, all_norms) = normalize_features(all_features_matrix)
all_initial_weights = np.zeros(all_features_matrix.shape[1])
weights1e7 = lasso_cyclical_coordinate_descent(all_normalized_features, all_output_array,
all_initial_weights, 1e7, 1)
weights1e7
weights1e7_mask = np.array(weights1e7[1::] <> 0.)
all_features_np = np.array(all_features)
all_features_np[weights1e7_mask]
all_initial_weights
weights1e8 = lasso_cyclical_coordinate_descent(all_normalized_features, all_output_array,
all_initial_weights, 1e8, 1)
weights1e8_mask = np.array(weights1e8[1::] <> 0.)
all_features_np[weights1e8_mask]
weights1e8
weights1e4 = lasso_cyclical_coordinate_descent(all_normalized_features, all_output_array,
all_initial_weights, 1e4, 5e5)
weights1e4
weights1e4_mask = np.array(weights1e4[1::] <> 0.)
all_features_np[weights1e4_mask]
weights1e4_normalized = weights1e4 / all_norms
weights1e7_normalized = weights1e7 / all_norms
weights1e8_normalized = weights1e8 / all_norms
print weights1e7_normalized[3]
(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')
predictions1e4 = predict_output(test_feature_matrix, weights1e4_normalized)
residuals1e4 = predictions1e4 - test_data['price']
rss1e4 = sum(pow(residuals1e4,2))
print rss1e4
predictions1e7 = predict_output(test_feature_matrix, weights1e7_normalized)
residuals1e7 = predictions1e7 - test_data['price']
rss1e7 = sum(pow(residuals1e7,2))
print rss1e7
predictions1e8 = predict_output(test_feature_matrix, weights1e8_normalized)
residuals1e8 = predictions1e8 - test_data['price']
rss1e8 = sum(pow(residuals1e8,2))
print rss1e8
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1 - manipulating the datasets
Step2: Number of players per team
Step3: Players who covered the most distance
Step4: We can see a clear selection effect on this variable
| <ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.datasource import download_data
files = download_data("td2a_eco_exercices_de_manipulation_de_donnees.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/")
files
import pandas as pd
data_players = pd.read_excel("Players_WC2014.xlsx", engine='openpyxl')
data_players.head()
data_players.groupby(['Team']).size().to_dict()
## which players covered the most distance?
data_players['Distance Covered'] = data_players['Distance Covered'].str.replace('km','')
data_players['Distance Covered'] = pd.to_numeric(data_players['Distance Covered'])
data_players.sort_values(['Distance Covered'], ascending = 0).head(n=3)
## Who was the fastest?
data_players['Top Speed'] = data_players['Top Speed'].str.replace('km/h','')
data_players['Top Speed'] = pd.to_numeric(data_players['Top Speed'])
data_players.sort_values(['Top Speed'], ascending = 0).head(n=3)
## Among the players in the fastest decile, who spent most of their time running without the ball?
data_players['Distance Covered In Possession'] = data_players['Distance Covered In Possession'].str.replace('km','')
data_players['Distance Covered In Possession'] = pd.to_numeric(data_players['Distance Covered In Possession'])
data_players['Share of Possession'] = data_players['Distance Covered In Possession']/data_players['Distance Covered']
data_players[data_players['Top Speed'] > data_players['Top Speed'].
quantile(.90)].sort_values(['Share of Possession'], ascending = 0).head()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compiling Expressions
Step2: Multiple matches
Step3: The finditer() function returns an iterator that produces Match object instances of the strings return by findall()
Step5: Pattern Syntax
Step6: Repetition
Step7: When processing a repetition instruction, re will usually consume as much of the input as possible while matching the pattern. This so-called greedy behavior may result in fewer individual matches, or the matches may include more of the input text than intended. Greediness can be turned off by following the repetition instruction with ?.
Step8: Disabling greedy consumption of the input for any of the patterns where zero occurrences of b are allowed means the matched substring does not include any b characters.
Step9: A character set can also be used to exclude specific characters. The carat (^) means to look for characters that are not in the set following the carat.
Step10: As character sets grow larger, typing every character that should (or should not) match becomes tedious. A more compact format using character ranges can be used to define a character set to include all of the contiguous characters between the specified start and stop points.
Step11: Here the range a-z includes the lowercase ASCII letters, and the range A-Z includes the uppercase ASCII letters. The ranges can also be combined into a single character set.
Step12: Escape codes
Step13: To match the characters that are part of the regular expression syntax, escape the characters in the search pattern.
Step14: Anchoring
Step15: The patterns in the example for matching words at the beginning and the end of the string are different because the word at the end of the string is followed by punctuation to terminate the sentence. The pattern \w+$ would not match, since . is not considered an alphanumeric character.
Step17: Dissecting Matches with Groups
Step18: To access the substrings matched by the individual groups within a pattern, use the groups() method of the Match object.
Step19: To ask for the match of a single group, use the group() method. This is useful when grouping is being used to find parts of the string, but some of the parts matched by groups are not needed in the results.
Step20: Group 0 represents the string matched by the entire expression, and subgroups are numbered starting with 1 in the order that their left parenthesis appears in the expression.
Step21: Groups are also useful for specifying alternative patterns. Use the pipe symbol (|) to separate two patterns and indicate that either pattern should match. Consider the placement of the pipe carefully, though. The first expression in this example matches a sequence of a followed by a sequence consisting entirely of a single letter, a or b. The second pattern matches a followed by a sequence that may include either a or b. The patterns are similar, but the resulting matches are completely different.
Step22: When an alternative group is not matched, but the entire pattern does match, the return value of groups() includes a None value at the point in the sequence where the alternative group should appear.
Step23: re.findall(pattern, string, flags=0)
Step24: Input with Multiple Lines
Step25: DOTALL is the other flag related to multiline text. Normally, the dot character (.) matches everything in the input text except a newline character. The flag allows the dot to match newlines as well.
Step26: Unicode
Step27: Verbose Expression Syntax
Step28: Converting the expression to a more verbose format will make it easier to extend.
Step29: Embedding Flags in Patterns
Step30: <p>The abbreviations for all of the flags are listed in the table below.</p>
Step31: There are several important changes in this version of the expression. First, the name portion is no longer optional. That means stand-alone addresses do not match, but it also prevents improperly formatted name/address combinations from matching. The positive look ahead rule after the “name” group asserts that either the remainder of the string is wrapped with a pair of angle brackets, or there is not a mismatched bracket; either both of or neither of the brackets is present. The look ahead is expressed as a group, but the match for a look ahead group does not consume any of the input text, so the rest of the pattern picks up from the same spot after the look ahead matches
Step32: Instead of looking ahead for noreply in the username portion of the email address, the pattern can alternatively be written using a negative look behind assertion after the username is matched using the syntax (?<!pattern).
Step33: Looking backward works a little differently than looking ahead, in that the expression must use a fixed-length pattern. Repetitions are allowed, as long as there is a fixed number of them (no wildcards or ranges).
Step34: Although the syntax is simple, creating back-references by numerical ID has a few disadvantages. From a practical standpoint, as the expression changes, the groups must be counted again and every reference may need to be updated. Another disadvantage is that only 99 references can be made using the standard back-reference syntax \n, because if the ID number is three digits long, it will be interpreted as an octal character value instead of a group reference. Of course, if there are more than 99 groups in an expression, there will be more serious maintenance challenges than simply not being able to refer to all of them.
Step35: The other mechanism for using back-references in expressions chooses a different pattern based on whether a previous group matched. The email pattern can be corrected so that the angle brackets are required if a name is present, and not required if the email address is by itself. The syntax for testing whether if a group has matched is (?(id)yes-expression|no-expression), where id is the group name or number, yes-expression is the pattern to use if the group has a value, and no-expression is the pattern to use otherwise.
Step36: This version of the email address parser uses two tests. If the name group matches, then the look ahead assertion requires both angle brackets and sets up the brackets group. If name is not matched, the assertion requires the rest of the text to not have angle brackets around it. Later, if the brackets group is set, the actual pattern matching code consumes the brackets in the input using literal patterns; otherwise, it consumes any blank space.
Step37: References to the text matched by the pattern can be inserted using the \num syntax used for back-references.
Step38: Extending the pattern to say that a paragraph ends with two or more newlines or the end of input fixes the problem, but makes the pattern more complicated. Converting to re.split() instead of re.findall() handles the boundary condition automatically and keeps the pattern simpler.
| <ASSISTANT_TASK:>
Python Code:
import re
pattern = 'text'
text = 'Does this text match the pattern?'
match = re.search(pattern, text)
s = match.start()
e = match.end()
print('Found "{}"\n in "{}"\n from {} to {} ("{}")'.format(
match.re.pattern, match.string, s, e, text[s:e]))
import re
regexes = [re.compile(p) for p in ['this', 'that']]
text = 'Does this text match the pattern?'
print("Text: {!r}\n".format(text))
for regex in regexes:
print("Seeking '{}' -> ".format(regex.pattern), end=' ')
if regex.search(text):
print("match")
else:
print("no match")
import re
text = 'abbaabbbbaaaaa'
pattern = 'ab'
for match in re.findall(pattern, text):
print('Found {!r}'.format(match))
import re
text = 'abbaabbbbaaaaa'
pattern = 'ab'
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
print("Found '{}' at {:d}:{:d}".format(text[s:e], s, e))
import re
def test_patterns(text, patterns):
    """Given source text and a list of patterns, look for
    matches for each pattern within the text and print
    them to stdout.
    """
# Look for each pattern in the text and print the results
for pattern, desc in patterns:
print("'{}' ({})\n".format(pattern, desc))
print("scaned text: '{}'".format(text))
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
substr = text[s:e]
n_backslashes = text[:s].count('\\')
prefix = '.' * (s + n_backslashes)
print(" {}'{}'".format(prefix, substr))
print()
return
if __name__ == '__main__':
test_patterns('abbaaabbbbaaaaa',
[('ab', "'a' followed by 'b'"),
])
test_patterns(
'abbaabbba',
[('ab*', 'a followed by zero or more b'),
('ab+', 'a followed by one or more b'),
('ab?', 'a followed by zero or one b'),
('ab{3}', 'a followed by three b'),
('ab{2,3}', 'a followed by two to three b')],
)
test_patterns(
'abbaabbba',
[('ab*?', 'a followed by zero or more b'),
('ab+?', 'a followed by one or more b'),
('ab??', 'a followed by zero or one b'),
('ab{3}?', 'a followed by three b'),
('ab{2,3}?', 'a followed by two to three b')],
)
test_patterns(
'abbaabbba',
[('[ab]', 'either a or b'),
('a[ab]+', 'a followed by 1 or more a or b'),
('a[ab]+?', 'a followed by 1 or more a or b, not greedy')],
)
test_patterns(
'This is some text -- with punctuation.',
[('[^-. ]+', 'sequences without -, ., or space')],
)
test_patterns(
'This is some text -- with punctuation.',
[('[a-z]+', 'sequences of lowercase letters'),
('[A-Z]+', 'sequences of uppercase letters'),
('[a-zA-Z]+', 'sequences of letters of either case'),
('[A-Z][a-z]+', 'one uppercase followed by lowercase')],
)
test_patterns(
'abbaabbba',
[('a.', 'a followed by any one character'),
('b.', 'b followed by any one character'),
('a.*b', 'a followed by anything, ending in b'),
('a.*?b', 'a followed by anything, ending in b')],
)
test_patterns(
'A prime #1 example!',
[(r'\d+', 'sequence of digits'),
(r'\D+', 'sequence of non-digits'),
(r'\s+', 'sequence of whitespace'),
(r'\S+', 'sequence of non-whitespace'),
(r'\w+', 'alphanumeric characters'),
(r'\W+', 'non-alphanumeric')],
)
test_patterns(
r'\d+ \D+ \s+',
[(r'\\.\+', 'escape code')],
)
test_patterns(
'This is some text -- with punctuation.',
[(r'^\w+', 'word at start of string'),
(r'\A\w+', 'word at start of string'),
(r'\w+\S*$', 'word near end of string'),
(r'\w+\S*\Z', 'word near end of string'),
(r'\w*t\w*', 'word containing t'),
(r'\bt\w+', 't at start of word'),
(r'\w+t\b', 't at end of word'),
(r'\Bt\B', 't, not start or end of word')],
)
import re
text = 'This is some text -- with punctuation.'
pattern = 'is'
print('Text :', text)
print('Pattern:', pattern)
m = re.match(pattern, text)
print('Match :', m)
s = re.search(pattern, text)
print('Search :', s)
def test_patterns(text, patterns):
    """Given source text and a list of patterns, look for
    matches for each pattern within the text and print
    them to stdout.
    """
# Look for each pattern in the text and print the results
for pattern, desc in patterns:
print('{!r} ({})\n'.format(pattern, desc))
print(' {!r}'.format(text))
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
prefix = ' ' * (s)
print(
' {}{!r}{} '.format(prefix,
text[s:e],
' ' * (len(text) - e)),
end=' ',
)
print(match.groups())
if match.groupdict():
print('{}{}'.format(
' ' * (len(text) - s),
match.groupdict()),
)
print()
return
test_patterns(
'abbaaabbbbaaaaa',
[('a(ab)', 'a followed by literal ab'),
('a(a*b*)', 'a followed by 0-n a and 0-n b'),
('a(ab)*', 'a followed by 0-n ab'),
('a(ab)+', 'a followed by 1-n ab')],
)
import re
text = 'This is some text -- with punctuation.'
print(text)
print()
patterns = [
(r'^(\w+)', 'word at start of string'),
(r'(\w+)\S*$', 'word at end, with optional punctuation'),
(r'(\bt\w+)\W+(\w+)', 'word starting with t, another word'),
(r'(\w+t)\b', 'word ending with t'),
]
for pattern, desc in patterns:
regex = re.compile(pattern)
match = regex.search(text)
print("'{}' ({})\n".format(pattern, desc))
print(' ', match.groups())
print()
import re
text = 'This is some text -- with punctuation.'
print('Input text :', text)
# word starting with 't' then another word
regex = re.compile(r'(\bt\w+)\W+(\w+)')
print('Pattern :', regex.pattern)
match = regex.search(text)
print('Entire match :', match.group(0))
print('Word starting with "t":', match.group(1))
print('Word after "t" word :', match.group(2))
import re
text = 'This is some text -- with punctuation.'
print(text)
print()
patterns = [
r'^(?P<first_word>\w+)',
r'(?P<last_word>\w+)\S*$',
r'(?P<t_word>\bt\w+)\W+(?P<other_word>\w+)',
r'(?P<ends_with_t>\w+t)\b',
]
for pattern in patterns:
regex = re.compile(pattern)
match = regex.search(text)
print("'{}'".format(pattern))
print(' ', match.groups())
print(' ', match.groupdict())
print()
test_patterns(
'abbaabbba',
[(r'a((a+)|(b+))', 'a then seq. of a or seq. of b'),
(r'a((a|b)+)', 'a then seq. of [ab]')],
)
test_patterns(
'abbaabbba',
[(r'a((a+)|(b+))', 'capturing form'),
(r'a((?:a+)|(?:b+))', 'noncapturing')],
)
import re
text = 'This is some text -- with punctuation.'
pattern = r'\bT\w+'
with_case = re.compile(pattern)
without_case = re.compile(pattern, re.IGNORECASE)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('Case-sensitive:')
for match in with_case.findall(text):
print(' {!r}'.format(match))
print('Case-insensitive:')
for match in without_case.findall(text):
print(' {!r}'.format(match))
import re
text = 'This is some text -- with punctuation.\nA second line.'
pattern = r'(^\w+)|(\w+\S*$)'
single_line = re.compile(pattern)
multiline = re.compile(pattern, re.MULTILINE)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('Single Line :')
for match in single_line.findall(text):
print(' {!r}'.format(match))
print('Multline :')
for match in multiline.findall(text):
print(' {!r}'.format(match))
import re
text = 'This is some text -- with punctuation.\nA second line.'
pattern = r'.+'
no_newlines = re.compile(pattern)
dotall = re.compile(pattern, re.DOTALL)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('No newlines :')
for match in no_newlines.findall(text):
print(' {!r}'.format(match))
print('Dotall :')
for match in dotall.findall(text):
print(' {!r}'.format(match))
import re
text = u'Français złoty Österreich'
pattern = r'\w+'
ascii_pattern = re.compile(pattern, re.ASCII)
unicode_pattern = re.compile(pattern)
print('Text :', text)
print('Pattern :', pattern)
print('ASCII :', list(ascii_pattern.findall(text)))
print('Unicode :', list(unicode_pattern.findall(text)))
import re
address = re.compile('[\w\d.+-]+@([\w\d.]+\.)+(com|org|edu)')
candidates = [
u'first.last@example.com',
u'first.last+category@gmail.com',
u'valid-address@mail.example.com',
u'not-valid@example.foo',
]
for candidate in candidates:
match = address.search(candidate)
print('{:<30} {}'.format(
candidate, 'Matches' if match else 'No match')
)
import re
address = re.compile(
'''
[\w\d.+-]+ # username
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # TODO: support more top-level domains
''',
re.VERBOSE)
candidates = [
u'first.last@example.com',
u'first.last+category@gmail.com',
u'valid-address@mail.example.com',
u'not-valid@example.foo',
]
for candidate in candidates:
match = address.search(candidate)
print('{:<30} {}'.format(
candidate, 'Matches' if match else 'No match'),
)
import re
text = 'This is some text -- with punctuation.'
pattern = r'(?i)\bT\w+'
regex = re.compile(pattern)
print('Text :', text)
print('Pattern :', pattern)
print('Matches :', regex.findall(text))
import re
address = re.compile(
'''
# A name is made up of letters, and may include "."
# for title abbreviations and middle initials.
((?P<name>
([\w.,]+\s+)*[\w.,]+
)
\s+
) # name is no longer optional
# LOOKAHEAD
# Email addresses are wrapped in angle brackets, but only
# if both are present or neither is.
(?= (<.*>$) # remainder wrapped in angle brackets
|
([^<].*[^>]$) # remainder *not* wrapped in angle brackets
)
<? # optional opening angle bracket
# The address itself: username@domain.tld
(?P<email>
[\w\d.+-]+ # username
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # limit the allowed top-level domains
)
>? # optional closing angle bracket
''',
re.VERBOSE)
candidates = [
u'First Last <first.last@example.com>',
u'No Brackets first.last@example.com',
u'Open Bracket <first.last@example.com',
u'Close Bracket first.last@example.com>',
]
for candidate in candidates:
print('Candidate:', candidate)
match = address.search(candidate)
if match:
print(' Name :', match.groupdict()['name'])
print(' Email:', match.groupdict()['email'])
else:
print(' No match')
import re
address = re.compile(
'''
^
# An address: username@domain.tld
# Ignore noreply addresses
(?!noreply@.*$)
[\w\d.+-]+ # username
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # limit the allowed top-level domains
$
''',
re.VERBOSE)
candidates = [
u'first.last@example.com',
u'noreply@example.com',
]
for candidate in candidates:
print('Candidate:', candidate)
match = address.search(candidate)
if match:
print(' Match:', candidate[match.start():match.end()])
else:
print(' No match')
import re
address = re.compile(
'''
^
# An address: username@domain.tld
[\w\d.+-]+ # username
# Ignore noreply addresses
(?<!noreply)
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # limit the allowed top-level domains
$
''',
re.VERBOSE)
candidates = [
u'first.last@example.com',
u'noreply@example.com',
]
for candidate in candidates:
print('Candidate:', candidate)
match = address.search(candidate)
if match:
print(' Match:', candidate[match.start():match.end()])
else:
print(' No match')
import re
address = re.compile(
r'''
# The regular name
(\w+) # first name
\s+
(([\w.]+)\s+)? # optional middle name or initial
(\w+) # last name
\s+
<
# The address: first_name.last_name@domain.tld
(?P<email>
\1 # first name
\.
\4 # last name
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # limit the allowed top-level domains
)
>
''',
re.VERBOSE | re.IGNORECASE)
candidates = [
u'First Last <first.last@example.com>',
u'Different Name <first.last@example.com>',
u'First Middle Last <first.last@example.com>',
u'First M. Last <first.last@example.com>',
]
for candidate in candidates:
print('Candidate:', candidate)
match = address.search(candidate)
if match:
print(' Match name :', match.group(1), match.group(4))
print(' Match email:', match.group(5))
else:
print(' No match')
import re
address = re.compile(
'''
# The regular name
(?P<first_name>\w+)
\s+
(([\w.]+)\s+)? # optional middle name or initial
(?P<last_name>\w+)
\s+
<
# The address: first_name.last_name@domain.tld
(?P<email>
(?P=first_name)
\.
(?P=last_name)
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # limit the allowed top-level domains
)
>
''',
re.VERBOSE | re.IGNORECASE)
candidates = [
u'First Last <first.last@example.com>',
u'Different Name <first.last@example.com>',
u'First Middle Last <first.last@example.com>',
u'First M. Last <first.last@example.com>',
]
for candidate in candidates:
print('Candidate:', candidate)
match = address.search(candidate)
if match:
print(' Match name :', match.groupdict()['first_name'],
end=' ')
print(match.groupdict()['last_name'])
print(' Match email:', match.groupdict()['email'])
else:
print(' No match')
import re
address = re.compile(
'''
^
# A name is made up of letters, and may include "."
# for title abbreviations and middle initials.
(?P<name>
([\w.]+\s+)*[\w.]+
)?
\s*
# Email addresses are wrapped in angle brackets, but
# only if a name is found.
(?(name)
# remainder wrapped in angle brackets because
# there is a name
(?P<brackets>(?=(<.*>$)))
|
# remainder does not include angle brackets without name
(?=([^<].*[^>]$))
)
# Look for a bracket only if the look-ahead assertion
# found both of them.
(?(brackets)<|\s*)
# The address itself: username@domain.tld
(?P<email>
[\w\d.+-]+ # username
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # limit the allowed top-level domains
)
# Look for a bracket only if the look-ahead assertion
# found both of them.
(?(brackets)>|\s*)
$
''',
re.VERBOSE)
candidates = [
u'First Last <first.last@example.com>',
u'No Brackets first.last@example.com',
u'Open Bracket <first.last@example.com',
u'Close Bracket first.last@example.com>',
u'no.brackets@example.com',
]
for candidate in candidates:
print('Candidate:', candidate)
match = address.search(candidate)
if match:
print(' Match name :', match.groupdict()['name'])
print(' Match email:', match.groupdict()['email'])
else:
print(' No match')
import re
bold = re.compile(r'\*{2}(.*?)\*{2}')
text = 'Make this **bold**. This **too**.'
print('Text:', text)
print('Bold:', bold.sub(r'<b>\1</b>', text))
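# A small extra sketch (not part of the original example): the captured group
# can also be referenced by name in the replacement string using \g<name>.
bold_named = re.compile(r'\*{2}(?P<marked>.*?)\*{2}')
print('Named:', bold_named.sub(r'<b>\g<marked></b>', text))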
import re
text = '''Paragraph one
on two lines.
Paragraph two.
Paragraph three.'''
for num, para in enumerate(re.findall(r'(.+?)\n{2,}',
text,
flags=re.DOTALL)
):
print(num, repr(para))
print()
import re
text = '''Paragraph one
on two lines.
Paragraph two.
Paragraph three.'''
print('With findall:')
for num, para in enumerate(re.findall(r'(.+?)(\n{2,}|$)',
text,
flags=re.DOTALL)):
print(num, repr(para))
print()
print()
print('With split:')
for num, para in enumerate(re.split(r'\n{2,}', text)):
print(num, repr(para))
print()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2. Explore Natality dataset
Step3: 3. Training on Cloud ML Engine
Step4: 3. Get a saved model directory
Step5: 4. Testing an evaluation pipeline with DirectRunner
Step6: Execute a DirectRunner job
Step9: Show results of a DirectRunner Job
Step10: 5. Running an evaluation pipeline with DataflowRunner
Step13: Execute a Dataflow job
| <ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloudonair-ml-demo'
PROJECT = 'cloudonair-ml-demo'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
# Call BigQuery and examine in dataframe
import pandas
from pandas.io import gbq
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
publicdata.samples.natality
WHERE year > 2000
LIMIT 100
"""
df = gbq.read_gbq(query=query, dialect='standard', project_id=os.environ['PROJECT'], verbose=False)
df.head()
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=1.4 \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
import datetime
import os
import pytz
import re
import urllib
from google.cloud import storage
def get_saved_model_dir(bucket, pattern='.*/babyweight/.*/saved_model.pb'):
created_time = datetime.datetime(datetime.MINYEAR, 1, 1, tzinfo=pytz.timezone('UTC'))
recent_model_path = None
p = re.compile(pattern)
for blob in storage.Client().bucket(bucket).list_blobs():
if p.match(urllib.unquote(blob.path)):
if created_time < blob.time_created:
recent_model_path = blob.path
created_time = blob.time_created
saved_model = urllib.unquote(recent_model_path.split('/')[-1])
saved_model_dir = '/'.join(saved_model.split('/')[:-1])
return 'gs://{0}/{1}'.format(os.environ['BUCKET'], saved_model_dir)
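# Scans the bucket for the most recently written saved_model.pb that matches
# the pattern and returns the GCS directory containing it.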
saved_model_dir = get_saved_model_dir(os.environ['BUCKET'])
if saved_model_dir:
os.environ['SAVED_MODEL_DIR'] = saved_model_dir
print(saved_model_dir)
else:
print('wait until saved_model.pb is written by Cloud ML Engine...')
import os
import datetime
os.environ['RUNNER'] = 'DirectRunner'
os.environ['JOB_NAME'] = 'evaluate-ml-model-{0}'.format(datetime.datetime.now().strftime('%y%m%d-%H%M%S'))
os.environ['BQ_DATASET'] = 'model_evaluation'
os.environ['BQ_TABLE'] = 'sample'
os.environ['OUTPUT_TABLE'] = '{}:{}.{}'.format(os.environ['PROJECT'], os.environ['BQ_DATASET'], os.environ['BQ_TABLE'])
%%bash
# Create BQ dataset if it doesn't exist.
if ! bq ls ${BQ_DATASET} > /dev/null; then
bq mk ${BQ_DATASET}
fi
%%bash
cd $(pwd)/evaluator
python -m main \
--datasize=100 \
--year_from=1974 \
--year_to=1978 \
--saved_model_dir=${SAVED_MODEL_DIR} \
--output_table=${OUTPUT_TABLE} \
--project=${PROJECT} \
--runner=DirectRunner \
--region=${REGION} \
--job_name=${JOB_NAME}
import pandas
from pandas.io import gbq
query = """
SELECT * FROM `{0}` LIMIT 10
""".format(os.environ['OUTPUT_TABLE'].replace(':', '.'))
df = gbq.read_gbq(query=query, dialect='standard', project_id=os.environ['PROJECT'], verbose=False)
df.head()
import pandas
from pandas.io import gbq
query = """
SELECT
model,
testdata,
SQRT(SUM(POW(weight_residual,2))/COUNT(*)) AS RMSE,
AVG(time_inference) AS AVG_MSEC
FROM
`{0}`
GROUP BY
model, testdata
""".format(os.environ['OUTPUT_TABLE'].replace(':', '.'))
df = gbq.read_gbq(query=query, dialect='standard', project_id=os.environ['PROJECT'], verbose=False)
df.head()
import os
import datetime
os.environ['RUNNER'] = 'DataflowRunner'
os.environ['STAGING_LOCATION'] = 'gs://{0}/babyweight/staging'.format(os.environ['BUCKET'])
os.environ['TEMP_LOCATION'] = 'gs://{0}/babyweight/temp'.format(os.environ['BUCKET'])
os.environ['JOB_NAME'] = 'evaluate-ml-model-{0}'.format(datetime.datetime.now().strftime('%y%m%d-%H%M%S'))
os.environ['BQ_DATASET'] = 'model_evaluation'
os.environ['BQ_TABLE'] = 'prediction_results'
os.environ['OUTPUT_TABLE'] = '{}:{}.{}'.format(os.environ['PROJECT'], os.environ['BQ_DATASET'], os.environ['BQ_TABLE'])
%%bash
# Create BQ dataset if it doesn't exist.
if ! bq ls ${BQ_DATASET} > /dev/null; then
bq mk ${BQ_DATASET}
fi
%%bash
cd $(pwd)/evaluator
python -m main \
--datasize=1000000 \
--year_from=1984 \
--year_to=1988 \
--saved_model_dir=${SAVED_MODEL_DIR} \
--output_table=${OUTPUT_TABLE} \
--project=${PROJECT} \
--runner=${RUNNER} \
--region=${REGION} \
--staging_location=${STAGING_LOCATION} \
--temp_location=${TEMP_LOCATION} \
--job_name=${JOB_NAME} \
--setup_file=$(pwd)/setup.py
import pandas
from pandas.io import gbq
query = """
SELECT * FROM `{0}` LIMIT 10
""".format(os.environ['OUTPUT_TABLE'].replace(':', '.'))
df = gbq.read_gbq(query=query, dialect='standard', project_id=os.environ['PROJECT'], verbose=False)
df.head()
query = """
SELECT
model,
testdata,
SQRT(SUM(POW(weight_residual,2))/COUNT(*)) AS RMSE,
AVG(time_inference) AS AVG_MSEC
FROM
`{0}`
GROUP BY
model, testdata
""".format(os.environ['OUTPUT_TABLE'].replace(':', '.'))
df = gbq.read_gbq(query=query, dialect='standard', project_id=os.environ['PROJECT'], verbose=False)
df
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running pyBadlands
| <ASSISTANT_TASK:>
Python Code:
import time
from pyBadlands.model import Model as badlandsModel
# Initialise model
model = badlandsModel()
# Define the XmL input file
model.load_xml('test','mountain.xml')
start = time.time()
model.run_to_time(10000000)
print 'time', time.time() - start
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initializing DNA object and storing data to it
Step2: Smoothening of Helical Axis
Step3: Extraction of original and smoothed helical axis positions as a function of time (manually)
Step4: To calculate curvature and tangent vectors along helical axis
Step5: Writing trajectory of Helical Axis
Step6: Angle between tangent vectors
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import dnaMD
%matplotlib inline
## Initialization
fdna = dnaMD.DNA(60) #Initialization for 60 base-pairs free DNA
## If HDF5 file is used to store/save data use these:
# fdna = dnaMD.DNA(60, filename='odna.h5') #Initialization for 60 base-pairs free DNA
## Loading data from input files in respective DNA object
# Number of local helical axis positions = Number of base-pairs - one
# Number of helcial steps in a 60 base-pairs DNA = 59
fdna.set_helical_axis('tutorial_data/HelAxis_odna.dat')
### Smoothening of helical axis
fdna.generate_smooth_axis(smooth=500, spline=3, fill_point=6)
### Extraction of original helical axis for 5-55 base-steps segment
# X-axis original
RawX, bp_idx = fdna.get_parameters('helical x-axis', [5, 55], bp_range=True)
# Y-axis original
RawY, bp_idx = fdna.get_parameters('helical y-axis', [5, 55], bp_range=True)
# Z-axis original
RawZ, bp_idx = fdna.get_parameters('helical z-axis', [5, 55], bp_range=True)
# X-axis smoothed
SmoothX, bp_idx = fdna.get_parameters('helical x-axis smooth', [5, 55], bp_range=True)
# Y-axis smoothed
SmoothY, bp_idx = fdna.get_parameters('helical y-axis smooth', [5, 55], bp_range=True)
# Z-axis smoothed
SmoothZ, bp_idx = fdna.get_parameters('helical z-axis smooth', [5, 55], bp_range=True)
# Here RawX is a 2D array of shape (base-step, nframes)
# Some examples
## x, y, z coordinates of nth base-step in mth frame: base-step index = (n - 5), frame index = (m - 1)
print ( "\n====== Some Examples ======" )
print ("Original coordinates of 8th base-step in 15th frame : [ %8.3f %8.3f %8.3f ]" %
(RawX[3][14], RawY[3][14], RawZ[3][14]))
print ("Smoothened coordinates of 8th base-step in 15th frame: [ %8.3f %8.3f %8.3f ]" %
(SmoothX[3][14], SmoothY[3][14], SmoothZ[3][14]))
print ("\nOriginal coordinates of 40th base-step in 900th frame : [ %8.3f %8.3f %8.3f ]" %
(RawX[37][899], RawY[37][899], RawZ[37][899]))
print ("Smoothened coordinates of 40th base-step in 900th frame: [ %8.3f %8.3f %8.3f ]\n" %
(SmoothX[37][899], SmoothY[37][899], SmoothZ[37][899]))
### Calculating curvature and tangent vectors
# If store_tangent=True, the tangent vectors are stored for later use; otherwise they are discarded
fdna.calculate_curvature_tangent(store_tangent=True)
fdna.calculate_angle_bw_tangents([5,50])
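# calculate_angle_bw_tangents returns the bending angle in radians; the cells
# below convert it to degrees with np.degrees() before plotting.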
# Curvature vs Time for 22nd bp
plt.title('Curvature for 22nd bp')
time, value = fdna.time_vs_parameter('curvature', [22])
plt.plot(time, value)
plt.xlabel('Time (ps)')
plt.ylabel('Curvature ($\AA^{-1}$)')
plt.show()
# Total Curvature vs Time for 10-50 bp segment
plt.title('Total Curvature for 10-50 bp segment')
# Bound DNA
# Here, Total Curvature is considered as the sum over the local curvatures of the base-steps
time, value = fdna.time_vs_parameter('curvature', [10, 50], merge=True, merge_method='sum')
plt.plot(time, value)
plt.xlabel('Time (ps)')
plt.ylabel('Curvature ($\AA^{-1}$)')
plt.show()
# Only smoothed helical axis
fdna.write_haxis_pdb(filename='only_smoothed_axis.pdb', write_smooth_axis=True)
# Only original helical axis
fdna.write_haxis_pdb(filename='only_original_axis.pdb', write_smooth_axis=False, write_orig_axis=True)
# Both original and smoothed axis
fdna.write_haxis_pdb(filename='original_smoothed_axis.pdb', write_smooth_axis=True, write_orig_axis=True)
# Both original and smoothed axis with curvature scaled-up by 1000 times
fdna.write_haxis_pdb(filename='original_smoothed_axis_curvature_.pdb', write_smooth_axis=True, write_orig_axis=True,
write_curv=True, scale_curv=1000)
# Angle vs Time for 28-32 bp
plt.title('Bending Angle for 28-32 bp')
# Calculating angle between the tangent vectors of 38th and 32nd base-steps
angle = fdna.calculate_angle_bw_tangents([28,32])
# Change to Degree
angle = np.degrees(angle)
# Plotting
plt.plot(fdna.time, angle)
plt.xlabel('Time (ps)')
plt.ylabel('Angle ( $^o$)')
plt.show()
# Angle vs Time for 25-35 bp
plt.title('Bending Angle for 25-35 bp')
# Calculating angle between the tangent vectors of 25th and 35th base-steps
angle = fdna.calculate_angle_bw_tangents([25,35])
# Change to Degree
angle = np.degrees(angle)
# Plotting
plt.plot(fdna.time, angle)
plt.xlabel('Time (ps)')
plt.ylabel('Angle ( $^o$)')
plt.show()
# Angle vs Time for 20-40 bp
plt.title('Bending Angle for 20-40 bp')
# Calculating angle between the tangent vectors of 20th and 40th base-steps
angle = fdna.calculate_angle_bw_tangents([20,40])
# Change to Degree
angle = np.degrees(angle)
# Plotting
plt.plot(fdna.time, angle)
plt.xlabel('Time (ps)')
plt.ylabel('Angle ( $^o$)')
plt.show()
# Angle vs Time for 15-45 bp
plt.title('Bending Angle for 15-45 bp')
# Calculating angle between the tangent vectors of 15th and 45th base-steps
angle = fdna.calculate_angle_bw_tangents([15,45])
# Change to Degree
angle = np.degrees(angle)
# Plotting
plt.plot(fdna.time, angle)
plt.xlabel('Time (ps)')
plt.ylabel('Angle ( $^o$)')
plt.show()
# Angle vs Time for 5-55 bp
plt.title('Bending Angle for 5-55 bp')
# Calculating angle between the tangent vectors of 5th and 55th base-steps
angle = fdna.calculate_angle_bw_tangents([5,55])
# Change to Degree
angle = np.degrees(angle)
# Plotting
plt.plot(fdna.time, angle)
plt.xlabel('Time (ps)')
plt.ylabel('Angle ( $^o$)')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set Up Verta
Step2: Imports
Step3: Download the IMDB dataset
Step4: Explore the data
Step5: Let's also print the first 2 labels.
Step6: Build the model
Step7: Let's now build the full model
Step8: The layers are stacked sequentially to build the classifier
Step9: Train the model
Step10: Evaluate the model
Step11: With more advanced approaches, the model should get closer to 95%.
Step12: Prediction with the model
| <ASSISTANT_TASK:>
Python Code:
# Python 3.6
!pip install verta
!pip install matplotlib==3.1.1
!pip install tensorflow==2.0.0-beta1
!pip install tensorflow-hub==0.5.0
!pip install tensorflow-datasets==1.0.2
HOST = 'app.verta.ai'
PROJECT_NAME = 'Text-Classification'
EXPERIMENT_NAME = 'basic-clf'
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST, use_git=False)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
from __future__ import absolute_import, division, print_function, unicode_literals
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import numpy as np
import six
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
# Split the training set into 60% and 40%, so we'll end up with 15,000 examples
# for training, 10,000 examples for validation and 25,000 examples for testing.
train_validation_split = tfds.Split.TRAIN.subsplit([6, 4])
(train_data, validation_data), test_data = tfds.load(
name="imdb_reviews",
split=(train_validation_split, tfds.Split.TEST),
as_supervised=True)
train_examples_batch, train_labels_batch = next(iter(train_data.batch(2)))
train_examples_batch
train_labels_batch
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
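# The hub layer maps each full review string to a fixed 20-dimensional
# embedding, so the call above yields one 20-element vector per example.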
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.summary()
hyperparams = {'optimizer':'adam',
'loss':'binary_crossentropy',
'metrics':'accuracy',
'train_batch_size':512,
'num_epochs':20,
'validation_batch_size':512,
'test_batch_size':512,
}
run.log_hyperparameters(hyperparams)
model.compile(optimizer=hyperparams['optimizer'],
loss=hyperparams['loss'],
metrics=[hyperparams['metrics']])
# called at the end of each epoch - logging loss, accuracy as observations for the run
class LossAndErrorLoggingCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('The average loss for epoch {} is {:7.2f}, accuracy is {:7.2f}.'.format(epoch, logs['loss'], logs['accuracy']))
run.log_observation("train_loss", float(logs['loss']))
run.log_observation("train_acc", float(logs['accuracy']))
run.log_observation("val_loss", float(logs['val_loss']))
run.log_observation("val_acc", float(logs['val_accuracy']))
history = model.fit(train_data.shuffle(10000).batch(hyperparams['train_batch_size']),
epochs=hyperparams['num_epochs'],
validation_data=validation_data.batch(hyperparams['validation_batch_size']),
callbacks=[LossAndErrorLoggingCallback()])
results = model.evaluate(test_data.batch(512), verbose=0)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
run.log_metric(name, value)
def plot_graphs(history, string, run, plot_title):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel('Epochs')
plt.ylabel(string)
plt.legend([string, 'val_'+string])
run.log_image(plot_title, plt)
plt.show()
# plotting graphs to see variation in accuracy and loss
plot_graphs(history, 'accuracy', run, 'accuracy_plt')
plot_graphs(history, 'loss', run, 'loss_plt')
sample_pred = np.array(["The new Spiderman movie is a fun watch. Loved it!"])
model.predict(sample_pred)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by loading some pre-generated data meant to represent radial velocity observations of a single luminous source with two faint companions
Step2: Let's first pretend that we don't know it is a triple, and try generating orbit samples assuming a binary with no polynomial velocity trend. We will set up the default prior with some reasonable parameters that we have used in previous tutorials, and generate a big cache of prior samples
Step3: Now we can run The Joker to generate posterior samples
Step4: Only one sample was returned, and it's not a very good fit to the data (see the plot above). This is because the data were generated from a hierarchical triple system, but fit as a two-body system. Let's now try generating Keplerian orbit samples for the inner binary, while including a polynomial trend in velocity to capture the long-term trend from the outer companion. To do this, we specify the number of polynomial trend coefficients to sample over
Step5: Notice the additional parameters v1, v2 in the prior
Step6: We are now set up to generate prior samples and run The Joker including the new linear trend parameters
Step7: Those orbit samples look much better at matching the data! In a real-world situation with these data and results, given that the samples look like they all share a similar period, at this point I would start standard MCMC to continue generating samples. But, that is covered in Tutorial 4, so for now, we will proceed with only the samples returned from The Joker.
Step8: Truth
Step9: Assuming binary
Step10: Assuming binary + quadratic velocity trend
| <ASSISTANT_TASK:>
Python Code:
import astropy.table as at
import astropy.units as u
from astropy.visualization.units import quantity_support
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import thejoker as tj
# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)
data = tj.RVData.guess_from_table(at.QTable.read('data-triple.ecsv'))
data = data[rnd.choice(len(data), size=16, replace=False)] # downsample data
_ = data.plot()
prior = tj.JokerPrior.default(
P_min=2*u.day, P_max=1e3*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s)
prior_samples = prior.sample(size=250_000, random_state=rnd)
joker = tj.TheJoker(prior, random_state=rnd)
samples = joker.rejection_sample(data, prior_samples,
max_posterior_samples=128)
samples
_ = tj.plot_rv_curves(samples, data=data)
prior_trend = tj.JokerPrior.default(
P_min=2*u.day, P_max=1e3*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=[100*u.km/u.s,
0.5*u.km/u.s/u.day,
1e-2*u.km/u.s/u.day**2],
poly_trend=3)
prior_trend
prior_samples_trend = prior_trend.sample(size=250_000,
random_state=rnd)
joker_trend = tj.TheJoker(prior_trend, random_state=rnd)
samples_trend = joker_trend.rejection_sample(data, prior_samples_trend,
max_posterior_samples=128)
samples_trend
_ = tj.plot_rv_curves(samples_trend, data=data)
import pickle
with open('true-orbit-triple.pkl', 'rb') as f:
truth = pickle.load(f)
truth['P'], truth['e'], truth['K']
samples['P'], samples['e'], samples['K']
samples_trend.mean()['P'], samples_trend.mean()['e'], samples_trend.mean()['K']
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mairhuber-Curtis Theorem
Step2: Halton points vs pseudo-random points in 2D
Step3: Interpolation with Distance Matrix from Halton points
Step4: Defining a test function
Step5: Let's look at $f$
Step6: The interpolation with distance matrix itself
Step7: RBF interpolation
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import ghalton
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact
from scipy.spatial import distance_matrix
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from ipywidgets import IntSlider
import sympy as sym
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
sym.init_printing()
M=8
def plot_matrices_with_values(ax,M):
N=M.shape[0]
cmap = plt.get_cmap('GnBu')
ax.matshow(M, cmap=cmap)
for i in np.arange(0, N):
for j in np.arange(0, N):
ax.text(i, j, '{:.2f}'.format(M[i,j]), va='center', ha='center', color='r')
# Initializing a R^2
sequencer = ghalton.Halton(2)
sequencer.reset()
xH=np.array(sequencer.get(9))
print(xH)
def show_MC_theorem(s_local=0):
i=3
j=4
NC=40
sequencer.reset()
xH=np.array(sequencer.get(9))
phi1= lambda s: (s-0.5)*(s-1)/((0-0.5)*(0-1))
phi2= lambda s: (s-0)*(s-1)/((0.5-0)*(0.5-1))
phi3= lambda s: (s-0)*(s-0.5)/((1-0)*(1-0.5))
C1=lambda s: xH[i,:]*phi1(s)+np.array([0.45,0.55])*phi2(s)+xH[j,:]*phi3(s)
C2=lambda s: xH[j,:]*phi1(s)+np.array([0.15,0.80])*phi2(s)+xH[i,:]*phi3(s)
C1v=np.vectorize(C1,otypes=[np.ndarray])
C2v=np.vectorize(C2,otypes=[np.ndarray])
ss=np.linspace(0,1,NC).reshape((-1, 1))
C1o=np.array(C1v(ss))
C2o=np.array(C2v(ss))
C1plot=np.zeros((NC,2))
C2plot=np.zeros((NC,2))
for k in np.arange(0,NC):
C1plot[k,0]=C1o[k][0][0]
C1plot[k,1]=C1o[k][0][1]
C2plot[k,0]=C2o[k][0][0]
C2plot[k,1]=C2o[k][0][1]
plt.figure(figsize=(2*M,M))
plt.subplot(121)
plt.plot(C1plot[:,0],C1plot[:,1],'r--')
plt.plot(C2plot[:,0],C2plot[:,1],'g--')
plt.scatter(xH[:,0], xH[:,1], s=300, c="b", alpha=1.0, marker='.',
label="Halton")
plt.scatter(C1(s_local)[0], C1(s_local)[1], s=300, c="r", alpha=1.0, marker='d')
plt.scatter(C2(s_local)[0], C2(s_local)[1], s=300, c="g", alpha=1.0, marker='d')
plt.axis([0,1,0,1])
plt.title(r'Quasi-random points (Halton)')
plt.grid(True)
xHm=np.copy(xH)
xHm[i,:]=C1(s_local)
xHm[j,:]=C2(s_local)
R=distance_matrix(xHm, xH)
det_s_local=np.linalg.det(R)
plt.subplot(122)
plt.title(r'det(R_fixed)='+str(det_s_local))
det_s=np.zeros_like(ss)
for k, s in enumerate(ss):
xHm[i,:]=C1plot[k,:]
xHm[j,:]=C2plot[k,:]
R=distance_matrix(xHm, xH)
det_s[k]=np.linalg.det(R)
plt.plot(ss,det_s,'-')
plt.plot(s_local,det_s_local,'dk',markersize=16)
plt.grid(True)
plt.show()
interact(show_MC_theorem,s_local=(0,1,0.1))
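# Note (added comment, not in the original notebook): as s_local runs from 0 to 1 the two
# highlighted points travel along the red and green curves and exchange positions, so
# det(R) changes sign and must pass through zero, i.e. the distance matrix becomes singular
# for some point configuration. That is the content of the Mairhuber-Curtis theorem: in two
# or more dimensions no fixed basis of continuous functions is unisolvent for every point
# set, which motivates data-dependent bases such as distance functions and RBFs.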
def plot_random_vs_Halton(n=100):
# Number of points to be generated
# n=1000
# I am reseting the sequence everytime I generated just to get the same points
sequencer.reset()
xH=np.array(sequencer.get(n))
np.random.seed(0)
xR=np.random.rand(n,2)
plt.figure(figsize=(2*M,M))
plt.subplot(121)
plt.scatter(xR[:,0], xR[:,1], s=100, c="r", alpha=1.0, marker='.',
label="Random", edgecolors='None')
plt.axis([0,1,0,1])
plt.title(r'Pseudo-random points')
plt.grid(True)
plt.subplot(122)
plt.scatter(xH[:,0], xH[:,1], s=100, c="b", alpha=1.0, marker='.',
label="Halton")
plt.axis([0,1,0,1])
plt.title(r'Quasi-random points (Halton)')
plt.grid(True)
plt.show()
interact(plot_random_vs_Halton,n=(20,500,20))
def show_R(mH=10):
fig= plt.figure(figsize=(2*M*mH/12,M*mH/12))
ax = plt.gca()
sequencer.reset()
X=np.array(sequencer.get(mH))
R=distance_matrix(X, X)
plot_matrices_with_values(ax,R)
interact(show_R,mH=(2,20,1))
# The function to be interpolated
f=lambda x,y: 16*x*(1-x)*y*(1-y)
def showing_f(n=10, elev=40, azim=230):
fig = plt.figure(figsize=(2*M,M))
# Creating regular mesh
Xr = np.linspace(0, 1, n)
Xm, Ym = np.meshgrid(Xr,Xr)
Z = f(Xm,Ym)
# Wireframe
plt.subplot(221,projection='3d')
ax = fig.gca()
ax.plot_wireframe(Xm, Ym, Z)
ax.view_init(elev,azim)
# imshow
plt.subplot(222)
#plt.imshow(Z,interpolation='none', extent=[0, 1, 0, 1])
plt.contourf(Xm, Ym, Z, 20)
plt.ylabel('$y$')
plt.xlabel('$x$')
plt.axis('equal')
plt.xlim(0,1)
plt.colorbar()
# Contour plot
plt.subplot(223)
plt.contour(Xm, Ym, Z, 20)
plt.axis('equal')
plt.colorbar()
# Surface
plt.subplot(224,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
plt.show()
elev_widget = IntSlider(min=0, max=180, step=10, value=40)
azim_widget = IntSlider(min=0, max=360, step=10, value=230)
interact(showing_f,n=(5,50,5),elev=elev_widget,azim=azim_widget)
def eval_interp_distance_matrix(C,X,x,y):
R=distance_matrix(X, np.array([[x,y]]))
return np.dot(C,R)
def showing_f_interpolated(n=10, mH=10, elev=40, azim=230):
fig = plt.figure(figsize=(2*M,M))
## Building distance matrix and solving linear system
sequencer.reset()
X=np.array(sequencer.get(mH))
R=distance_matrix(X, X)
Zs=f(X[:,0],X[:,1])
C=np.linalg.solve(R,Zs)
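# Note (added comment): solving R C = Zs makes the interpolant
# s(x) = sum_k C[k] * ||x - X[k]|| reproduce f exactly at the Halton points X[k];
# eval_interp_distance_matrix defined above evaluates precisely this sum.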
# f interpolated with distance function
fIR=np.vectorize(eval_interp_distance_matrix, excluded=[0,1])
# Creating regular mesh
Xr = np.linspace(0, 1, n)
Xm, Ym = np.meshgrid(Xr,Xr)
Z = f(Xm,Ym)
# Contour plot - Original Data
plt.subplot(221)
plt.contour(Xm, Ym, Z, 20)
plt.colorbar()
plt.axis('equal')
plt.title(r'$f(x,y)$')
# Surface - Original Data
plt.subplot(222,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
plt.title(r'$f(x,y)$')
# Contour plot - Interpolated Data
plt.subplot(223)
plt.contour(Xm, Ym, fIR(C,X,Xm,Ym), 20)
plt.axis('equal')
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="r", alpha=0.5, marker='.',
label="Random", edgecolors='None')
plt.title(r'$fIR(x,y)$')
# Surface - Interpolated Data
plt.subplot(224,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym), rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
ax.set_zlim(0,1)
plt.title(r'$fIR(x,y)$')
plt.show()
interact(showing_f_interpolated,n=(5,50,5),mH=(5,80,5),elev=elev_widget,azim=azim_widget)
# Some RBF's
linear_rbf = lambda r,eps: r
gaussian_rbf = lambda r,eps: np.exp(-(eps*r)**2)
MQ_rbf = lambda r,eps: np.sqrt(1+(eps*r)**2)
IMQ_rbf = lambda r,eps: 1./np.sqrt(1+(eps*r)**2)
# The chosen one! But please try all of them!
rbf = lambda r,eps: MQ_rbf(r,eps)
def eval_interp_rbf(C,X,x,y,eps):
A=rbf(distance_matrix(X, np.array([[x,y]])),eps)
return np.dot(C,A)
def showing_f_interpolated_rbf(n=10, mH=10, elev=40, azim=230, eps=1):
fig = plt.figure(figsize=(2*M,M))
# Creating regular mesh
Xr = np.linspace(0, 1, n)
Xm, Ym = np.meshgrid(Xr,Xr)
Z = f(Xm,Ym)
########################################################
## Pseudo-random
## Building distance matrix and solving linear system
np.random.seed(0)
X=np.random.rand(mH,2)
R=distance_matrix(X,X)
A=rbf(R,eps)
Zs=f(X[:,0],X[:,1])
C=np.linalg.solve(A,Zs)
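# Note (added comment): here A[i, j] = rbf(||X[i] - X[j]||, eps), so C defines the RBF
# interpolant s(x) = sum_k C[k] * rbf(||x - X[k]||, eps), which eval_interp_rbf evaluates.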
# f interpolated with the chosen RBF
fIR=np.vectorize(eval_interp_rbf, excluded=[0,1,4])
# Contour plot - Original Data
plt.subplot(231)
plt.contour(Xm, Ym, fIR(C,X,Xm,Ym,eps), 20)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="r", alpha=0.5, marker='.',
label="Random", edgecolors='None')
plt.title(r'$f(x,y)_{rbf}$ with Pseudo-random points')
# Surface - Original Data
plt.subplot(232,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym,eps), rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
ax.set_zlim(0,1)
plt.title(r'$f(x,y)_{rbf}$ with Pseudo-random points')
# Contour plot - Original Data
plt.subplot(233)
plt.contourf(Xm, Ym, np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)), 20)
#plt.imshow(np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)),interpolation='none', extent=[0, 1, 0, 1])
plt.axis('equal')
plt.xlim(0,1)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="k", alpha=0.8, marker='.',
label="Random", edgecolors='None')
plt.title(r'Error with Pseudo-random points')
########################################################
## HALTON (Quasi-random)
## Building distance matrix and solving linear system
sequencer.reset()
X=np.array(sequencer.get(mH))
R=distance_matrix(X,X)
A=rbf(R,eps)
Zs=f(X[:,0],X[:,1])
C=np.linalg.solve(A,Zs)
# f interpolated with the chosen RBF
fIR=np.vectorize(eval_interp_rbf, excluded=[0,1,4])
# Contour plot - Interpolated Data
plt.subplot(234)
plt.contour(Xm, Ym, fIR(C,X,Xm,Ym,eps), 20)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="r", alpha=0.5, marker='.',
label="Random", edgecolors='None')
plt.title(r'$f_{rbf}(x,y)$ with Halton points')
# Surface - Interpolated Data
plt.subplot(235,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym,eps), rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
ax.set_zlim(0,1)
plt.title(r'$f_{rbf}(x,y)$ with Halton points')
# Contour plot - Original Data
plt.subplot(236)
plt.contourf(Xm, Ym, np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)), 20)
#plt.imshow(np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)),interpolation='none', extent=[0, 1, 0, 1])
plt.axis('equal')
plt.xlim(0,1)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="k", alpha=0.8, marker='.',
label="Random", edgecolors='None')
plt.title(r'Error with Halton points')
plt.show()
interact(showing_f_interpolated_rbf,n=(5,50,5),mH=(5,80,5),elev=elev_widget,azim=azim_widget,eps=(0.1,50,0.1))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Define the network
Step3: Load the model parameters and metadata
Step4: Trying it out
Step5: Helper to fetch and preprocess images
Step6: Process test images and print top 5 predicted labels
| <ASSISTANT_TASK:>
Python Code:
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
import pickle
model = pickle.load(open('vgg_cnn_s.pkl'))
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
import io
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first prepare set up the credentials required to access the devices.
Step2: We'll now run the circuit on the simulator for 128 shots (so we don't need to wait in the ibmqx5 queue) and plot the results.
Step3: The above results in stats tell us how many times each bit string was found as a result. To make our emoticon, we convert the bit strings into ASCII characters. We also calculate the fraction of shots for which each result occurred. The most common results are then printed to screen.
Step4: We'll now run the smiley circuit on ibmqx5, after checking that the device is online. Given the noise on the device, our smiley won't be as clean as our previous run on the simulator.
| <ASSISTANT_TASK:>
Python Code:
from qiskit import ClassicalRegister, QuantumRegister
from qiskit import QuantumCircuit, execute
from qiskit.tools.visualization import plot_histogram
from qiskit import IBMQ, available_backends, get_backend
from qiskit.wrapper.jupyter import *
import matplotlib.pyplot as plt
%matplotlib inline
# set up registers and program
qr = QuantumRegister(16)
cr = ClassicalRegister(16)
qc = QuantumCircuit(qr, cr)
# rightmost eight (qu)bits have ')' = 00101001
qc.x(qr[0])
qc.x(qr[3])
qc.x(qr[5])
# second eight (qu)bits have superposition of
# '8' = 00111000
# ';' = 00111011
# these differ only on the rightmost two bits
qc.h(qr[9]) # create superposition on 9
qc.cx(qr[9],qr[8]) # spread it to 8 with a CNOT
qc.x(qr[11])
qc.x(qr[12])
qc.x(qr[13])
# measure
for j in range(16):
qc.measure(qr[j], cr[j])
# Load saved IBMQ accounts
IBMQ.load_accounts()
backend = "ibmq_qasm_simulator"
shots_sim = 128
job_sim = execute(qc, backend, shots=shots_sim)
stats_sim = job_sim.result().get_counts()
plot_histogram(stats_sim)
plt.rc('font', family='monospace')
def plot_smiley (stats, shots):
for bitString in stats:
char = chr(int( bitString[0:8] ,2)) # get string of the leftmost 8 bits and convert to an ASCII character
char += chr(int( bitString[8:16] ,2)) # do the same for string of rightmost 8 bits, and add it to the previous character
prob = stats[bitString] / shots # fraction of shots for which this result occurred
# create plot with all characters on top of each other with alpha given by how often it turned up in the output
plt.annotate( char, (0.5,0.5), va="center", ha="center", color = (0,0,0, prob ), size = 300)
if (prob>0.05): # list prob and char for the dominant results (occurred for more than 5% of shots)
print(str(prob)+"\t"+char)
plt.axis('off')
plt.show()
plot_smiley(stats_sim, shots_sim)
%%qiskit_job_status
backends = available_backends()
backend = get_backend('ibmqx5')
shots_device = 1000
job_device = execute(qc, backend, shots=shots_device)
stats_device = job_device.result().get_counts()
plot_smiley(stats_device, shots_device)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the Fermi distribution
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Step5: Use interact with plot_fermidist to explore the distribution
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Image('fermidist.png')
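# fermidist.png shows the Fermi-Dirac distribution implemented below:
# F(energy) = 1 / (exp((energy - mu) / kT) + 1)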
def fermidist(energy, mu, kT):
"""Compute the Fermi distribution at energy, mu and kT."""
F = 1/(np.exp((energy-mu)/kT)+1)
return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
def plot_fermidist(mu, kT):
energy = np.linspace(0,10.0,21)
plt.plot(energy, fermidist(energy, mu, kT))
plt.tick_params(direction='out')
plt.xlabel('$Energy$')
plt.ylabel('$F(Energy)$')
plt.title('Fermi Distribution')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
interact(plot_fermidist, mu=(0.0,5.0), kT=(0.1,10.0))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Text
Step2: Listing 8.1
Step5: Listing 8.2
Step6: Listing 8.3
Step7: Listing 8.4
Step8: Listing 8.6
Step11: Listing 8.7
| <ASSISTANT_TASK:>
Python Code:
class Square:
def __init__(self):
self.side = 1
Bob = Square() # Bob is an instance of Square.
Bob.side #Let’s see the value of side
Bob.side = 5 #Assign a new value to side
Bob.side #Let’s see the new value of side
Krusty = Square()
Krusty.side
class Square:
def __init__(self):
self.side=1
Bob = Square() # Bob is an instance of Square.
Bob.side #Let's see the value of side
Bob.side = 5 #Assign a new value to side
Bob.side #Let's see the new value of side
Krusty = Square()
Krusty.side
Square.side
Crab = Square()
Crab.side
class Square:
count = 0
def __init__(self):
Square.count += 1
print("Object created successfully")
Bob = Square()
Patrick = Square()
Square.count
class Sequence:
transcription_table = {'A':'U', 'T':'A', 'C':'G' , 'G':'C'}
def __init__(self, seqstring):
self.seqstring = seqstring.upper()
def transcription(self):
tt = ""
for letter in self.seqstring:
if letter in 'ATCG':
tt += self.transcription_table[letter]
return tt
dangerous_virus = Sequence('atggagagccttgttcttggtgtcaa')
dangerous_virus.seqstring
harmless_virus = Sequence('aatgctactactattagtagaattgatgcca')
harmless_virus.seqstring
dangerous_virus.transcription()
class Sequence:
transcription_table = {'A':'U', 'T':'A', 'C':'G' , 'G':'C'}
enz_dict = {'EcoRI':'GAATTC', 'EcoRV':'GATATC'}
def __init__(self, seqstring):
self.seqstring = seqstring.upper()
def restriction(self, enz):
try:
enz_target = Sequence.enz_dict[enz]
return self.seqstring.count(enz_target)
except KeyError:
return 0
def transcription(self):
tt = ""
for letter in self.seqstring:
if letter in 'ATCG':
tt += self.transcription_table[letter]
return tt
other_virus = Sequence('atgatatcggagaggatatcggtgtcaa')
other_virus.restriction('EcoRV')
class Mammal():
"""Docstring with class description"""
# Properties here
# Methods here
class Orca(Mammal):
"""Docstring with class description"""
# Properties here
# Methods here
class Plasmid(Sequence):
ab_res_dict = {'Tet':'ctagcat', 'Amp':'CACTACTG'}
def __init__(self, seqstring):
Sequence.__init__(self, seqstring)
def ab_res(self, ab):
if self.ab_res_dict[ab] in self.seqstring:
return True
return False
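# Illustrative usage (a sketch, not part of the original listing):
mobile_element = Plasmid('ggaaCACTACTGcctattcg')
mobile_element.ab_res('Amp') # True: the 'CACTACTG' marker is present in the upper-cased sequence
mobile_element.ab_res('Tet') # False: the lowercase 'ctagcat' target never matches after upper()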
!conda install biopython -y
from Bio.Alphabet import IUPAC
from Bio.Seq import Seq
first_seq = Seq('GCTATGCAGC', IUPAC.unambiguous_dna)
first_seq
first_seq.complement()
first_seq.tostring()
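# Note (added comment): in recent Biopython versions .tostring() is deprecated/removed;
# str(first_seq) is the supported way to get the plain string.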
first_seq[:10] # slice a sequence
len(first_seq) # get the length of the sequence
first_seq[0] # get one character
first_seq
AnotherSeq=first_seq.tomutable()
AnotherSeq.extend("TTTTTTT")
print(AnotherSeq)
AnotherSeq.pop()
AnotherSeq.pop()
print(AnotherSeq)
class Sequence:
transcription_table = {'A':'U', 'T':'A', 'C':'G' , 'G':'C'}
enz_dict = {'EcoRI':'GAATTC', 'EcoRV':'GATATC'}
def __init__(self, seqstring):
self.seqstring = seqstring.upper()
def __len__(self):
return len(self.seqstring)
def restriction(self, enz):
try:
enz_target = Sequence.enz_dict[enz]
return self.seqstring.count(enz_target)
except KeyError:
return 0
def transcription(self):
tt = ""
for letter in self.seqstring:
if letter in 'ATCG':
tt += self.transcription_table[letter]
return tt
M13 = Sequence("ACGACTCTCGACGGCATCCACCCTCTCTGAGA")
len(M13)
class Straight:
def __init__(self, data):
self.data = data
self.index = 0
def __iter__(self):
return self
def __next__(self):
if self.index == len(self.data):
raise StopIteration
answer = self.data[self.index]
self.index += 1
return answer
class Reverse:
def __init__(self, data):
self.data = data
self.index = len(data)
def __iter__(self):
return self
def __next__(self):
if self.index == 0:
raise StopIteration
self.index -= 1
return self.data[self.index]
a = Straight("123")
for i in a:
print(i)
b = Reverse("123")
for i in b:
print(i)
class Sequence:
transcription_table = {'A':'U', 'T':'A', 'C':'G', 'G':'C'}
comp_table = {'A':'T', 'T':'A', 'C':'G', 'G':'C'}
def __init__(self, seqstring):
self.seqstring = seqstring.upper()
def restriction(self, enz):
enz_dict = {'EcoRI':'ACTGG', 'EcoRV':'AGTGC'}
try:
target = enz_dict[enz]
except KeyError:
raise ValueError('No such enzime in out enzime DB')
return self.seqstring.count(target)
def __getitem__(self,index):
return self.seqstring[index]
def __getslice__(self, low, high):
return self.seqstring[low:high]
def __len__(self):
return len(self.seqstring)
def __str__(self):
if len(self.seqstring) >= 28:
return '{0}...{1}'.format(self.seqstring[:25],
self.seqstring[-3:])
else:
return self.seqstring
def transcription(self):
tt = ''
for x in self.seqstring:
if x in 'ATCG':
tt += self.transcription_table[x]
return tt
def complement(self):
tt = ''
for x in self.seqstring:
if x in 'ATCG':
tt += self.comp_table[x]
return tt
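# Illustrative usage (a sketch, not part of the original listing):
virus = Sequence('ACGACTCTCGACGGCATCCACCCTCTCTGAGA')
print(virus) # __str__ truncates long sequences: ACGACTCTCGACGGCATCCACCCTC...AGA
virus[0] # __getitem__ -> 'A'
virus[2:5] # slices also go through __getitem__ in Python 3 -> 'GAC'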
class Zdic(dict):
"""A dictionary-like object that returns 0 when a user requests a non-existent key."""
def __missing__(self,x):
return 0
a = Zdic()
a['blue'] = 'azul'
a['red']
class TestClass:
"""A class with a "private" method (b)"""
def a(self):
pass
def __b(self):
# mangled to _TestClass__b
pass
my_object = TestClass()
print(my_object.a())
my_object.__b()
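# The call above raises AttributeError: name mangling renames __b to _TestClass__b,
# so from outside the class it is only reachable via the mangled name on the next line.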
my_object._TestClass__b()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: People needing to divide a fiscal year starting in July into quarters are in luck with pandas. I've been looking for lunar years and other periodic progressions. The whole timeline business still seems difficult, even with a proleptic Gregorian calendar plus UTC timezones.
Step2: As usual, I'm recommending telling yourself a story, in this case about an exclusive party you've been hosting ever since 2000, all the way up to 2018. Once you get the interactive version of this Notebook, you'll be able to extend this record by as many more years as you want.
Step3: DBAs who know SQL / noSQL will find pandas, especially its inner, outer, left and right merge options, somewhat familiar. We learn about the set type through maths and through Python, and come to understand unions, intersections and differences.
Step4: What's the average number of party-goers over this nineteen-year period?
Step5: Might you also want the median and mode? Do you remember what those are?
Step6: Now that seems strange. Isn't the mode of a column of numbers a number?
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
rng_years = pd.period_range('1/1/2000', '1/1/2018', freq='Y')
head_count = np.random.randint(10,35, size=19)
new_years_party = pd.DataFrame(head_count, index = rng_years,
columns=["Attenders"])
new_years_party
np.round(new_years_party.Attenders.mean())
new_years_party.Attenders.mode()
new_years_party.Attenders.median()
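# Sketch (not in the original notebook): .mode() hands back a Series rather than a
# single number because several head counts can tie for "most frequent", which is
# the puzzle raised in the description. Checking how many modes the random data
# produced makes that concrete:
modes = new_years_party.Attenders.mode()
print(len(modes), "value(s) tied for most frequent:", modes.tolist())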
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data generation
Step2: Hyperparameters
Step3: Training a baseline LSTM
Step4: Training a Bayesian LSTM
Step5: From the training curves and the test results it's clear that the model is not converging and we are not able to differentiate the valid from the invalid strings.
Step6: Hyperparameter tuning
Step7: Analysing the impact of hyperparameters
Step8: Choosing the correct batch size as well as possibly a value to clip gradients by during the training is more complicated.
Step9: Training the best Bayesian Model
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import edward2 as ed
import matplotlib.pyplot as plt
from tqdm import tqdm
from sklearn.model_selection import train_test_split, ParameterGrid
from tensorflow.keras.preprocessing import sequence
import embedded_reber_grammar as erg
def plot_hist(hist):
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(hist.history['val_accuracy'], label='val_accuracy')
plt.plot(hist.history['accuracy'], label='train_accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(hist.history['val_loss'], label='val_loss')
plt.plot(hist.history['loss'], label='train_loss')
plt.legend()
plt.show()
x, y = [], []
n = 3000
for i in range(n):
x.append(np.asarray(erg.encode_string(erg.generate_valid_string(erg.embedded_rg))))
y.append(1)
for i in range(n):
x.append(np.asarray(erg.encode_string(erg.generate_invalid_string(erg.embedded_rg))))
y.append(0)
x = sequence.pad_sequences(x)
x_train, x_test, y_train, y_test = train_test_split(np.asarray(x), np.asarray(y))
print(f"Number of training samples: {x_train.shape[0]}")
print(f"Number of test samples: {x_test.shape[0]} \n")
sequence_length = x_train.shape[1]
num_chars = x_train.shape[2]
print(f"Length of sequences: {sequence_length}")
print(f"Number of characters: {num_chars}")
batch_size = 64
epochs = 40
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(sequence_length, num_chars)))
model.add(tf.keras.layers.RNN(
tf.keras.layers.LSTMCell(128)
))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(x_train, y_train, validation_split=0.2, epochs=epochs, batch_size=batch_size, verbose=0)
test_results = model.evaluate(x_test, y_test)
print(f"Test loss: {test_results[0]}")
print(f"Test accuracy: {test_results[1]}")
plot_hist(hist)
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(sequence_length, num_chars)))
model.add(tf.keras.layers.RNN(
ed.layers.LSTMCellFlipout(128)
))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(x_train, y_train, validation_split=0.2, epochs=epochs, batch_size=256, verbose=0)
test_results = model.evaluate(x_test, y_test)
print(f"Test loss: {test_results[0]}")
print(f"Test accuracy: {test_results[1]}")
plot_hist(hist)
params = {
'loss_scaling': [1., 1./x_train.shape[0]],
'batch_size': [64, 128, 256],
'clipvalue': [None, 0.1, 0.5],
}
param_grid = ParameterGrid(params)
results = pd.DataFrame(columns=list(params.keys())+['test_loss', 'test_accuracy'])
def training_run(param_set):
sf = param_set['loss_scaling']
bs = int(param_set['batch_size'])
cv = param_set['clipvalue']
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(sequence_length, num_chars)))
model.add(tf.keras.layers.RNN(
ed.layers.LSTMCellFlipout(
128,
kernel_regularizer=ed.regularizers.NormalKLDivergence(scale_factor=sf),
recurrent_regularizer=ed.regularizers.NormalKLDivergence(scale_factor=sf)
),
))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
optimizer = tf.keras.optimizers.Adam(clipvalue=cv)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
hist = model.fit(x_train, y_train, validation_split=0.2, epochs=epochs, batch_size=bs, verbose=0)
return model, hist
for param_set in tqdm(param_grid):
model, hist = training_run(param_set)
test_results = np.mean(np.asarray([model.evaluate(x_test, y_test, verbose=0) for _ in range(10)]), axis=0)
new_line = param_set
new_line['test_loss'] = test_results[0]
new_line['test_accuracy'] = test_results[1]
results = pd.concat([results, pd.DataFrame(new_line, index=[0])], ignore_index=True, axis=0)
results_ = results.drop(columns=['test_loss']).fillna(-1)
sns.pairplot(results_, y_vars=['test_accuracy'], x_vars=['loss_scaling', 'batch_size', 'clipvalue'])
results_ = results_[results_['loss_scaling'] != 1.]
results_lowclip = results_[results_['clipvalue'] == 0.1].drop(columns=['loss_scaling'])
results_highclip = results_[results_['clipvalue'] == 0.5].drop(columns=['loss_scaling'])
plt.scatter(results_lowclip['batch_size'], results_lowclip['test_accuracy'], c='b', label="clipvalue=0.1")
plt.scatter(results_highclip['batch_size'], results_highclip['test_accuracy'], c='r', label="clipvalue=0.5")
plt.xlabel("batch size")
plt.ylabel("accuracy")
plt.legend()
plt.show()
results_ = results_[results_['loss_scaling'] != 1.]
results_64 = results_[results_['batch_size'] == 64].drop(columns=['loss_scaling'])
results_128 = results_[results_['batch_size'] == 128].drop(columns=['loss_scaling'])
results_256 = results_[results_['batch_size'] == 256].drop(columns=['loss_scaling'])
plt.scatter(results_64['clipvalue'], results_64['test_accuracy'], c='b', label="batch_size=64")
plt.scatter(results_128['clipvalue'], results_128['test_accuracy'], c='r', label="batch_size=128")
plt.scatter(results_256['clipvalue'], results_256['test_accuracy'], c='g', label="batch_size=256")
plt.xlabel("clipvalue")
plt.ylabel("accuracy")
plt.legend()
plt.show()
best_params = results_.iloc[np.argmax(results_['test_accuracy'])].to_dict()
if best_params['clipvalue'] < 0:
best_params['clipvalue'] = None
model, hist = training_run(best_params)
plot_hist(hist)
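# Sketch (not in the original notebook): evaluate the retrained best model on the
# held-out test set, averaging several stochastic forward passes of the Bayesian
# LSTM exactly as the grid search above did.
best_test = np.mean(
    np.asarray([model.evaluate(x_test, y_test, verbose=0) for _ in range(10)]), axis=0)
print(f"Best-model test loss: {best_test[0]:.4f}, test accuracy: {best_test[1]:.4f}")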
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Security Dataset
Step2: Analytic I
| <ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_wdigest_downgrade.tar.gz"
registerMordorSQLTable(spark, sd_file, "sdTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, TargetObject
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 13
AND TargetObject LIKE "%UseLogonCredential"
AND Details = 1
'''
)
df.show(10,False)
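# Sketch (assumption, not part of the original analytic): summarise which processes
# wrote the UseLogonCredential value, to spot unusual writers at a glance.
df.groupBy("Image").count().orderBy("count", ascending=False).show(10, False)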
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tensorflow datasets from MongoDB collections
Step2: Import packages
Step3: Validate tf and tfio imports
Step4: Download and setup the MongoDB instance
Step5: Once the instance has been started, grep for mongo in the processes list to confirm the availability.
Step6: query the base endpoint to retrieve information about the cluster.
Step7: Explore the dataset
Step8: For the purpose of the tutorial, modifications are made to the label column.
Step9: Split the dataset
Step10: Store the train and test data in mongo collections
Step11: Prepare tfio datasets
Step12: Each item in train_ds is a string which needs to be decoded into a json. To do so, you can select only a subset of the columns by specifying the TensorSpec
Step13: Testing dataset
Step14: Define the keras preprocessing layers
Step15: Build, compile and train the model
Step16: Infer on the test data
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tensorflow-io
!pip install -q pymongo
import os
import time
from pprint import pprint
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_io as tfio
from pymongo import MongoClient
print("tensorflow-io version: {}".format(tfio.__version__))
print("tensorflow version: {}".format(tf.__version__))
%%bash
sudo apt install -y mongodb >log
service mongodb start
# Sleep for few seconds to let the instance start.
time.sleep(5)
%%bash
ps -ef | grep mongo
client = MongoClient()
client.list_database_names() # ['admin', 'local']
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
pf_df = pd.read_csv(csv_file)
pf_df.head()
# In the original dataset "4" indicates the pet was not adopted.
pf_df['target'] = np.where(pf_df['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
pf_df = pf_df.drop(columns=['AdoptionSpeed', 'Description'])
# Number of datapoints and columns
len(pf_df), len(pf_df.columns)
train_df, test_df = train_test_split(pf_df, test_size=0.3, shuffle=True)
print("Number of training samples: ",len(train_df))
print("Number of testing sample: ",len(test_df))
URI = "mongodb://localhost:27017"
DATABASE = "tfiodb"
TRAIN_COLLECTION = "train"
TEST_COLLECTION = "test"
db = client[DATABASE]
if "train" not in db.list_collection_names():
db.create_collection(TRAIN_COLLECTION)
if "test" not in db.list_collection_names():
db.create_collection(TEST_COLLECTION)
def store_records(collection, records):
writer = tfio.experimental.mongodb.MongoDBWriter(
uri=URI, database=DATABASE, collection=collection
)
for record in records:
writer.write(record)
store_records(collection="train", records=train_df.to_dict("records"))
time.sleep(2)
store_records(collection="test", records=test_df.to_dict("records"))
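# Sketch (assumption, not part of the original tutorial): sanity-check that the
# inserts landed by counting the documents in each collection with pymongo.
print("train documents:", db[TRAIN_COLLECTION].count_documents({}))
print("test documents:", db[TEST_COLLECTION].count_documents({}))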
train_ds = tfio.experimental.mongodb.MongoDBIODataset(
uri=URI, database=DATABASE, collection=TRAIN_COLLECTION
)
train_ds
# Numeric features.
numerical_cols = ['PhotoAmt', 'Fee']
SPECS = {
"target": tf.TensorSpec(tf.TensorShape([]), tf.int64, name="target"),
}
for col in numerical_cols:
SPECS[col] = tf.TensorSpec(tf.TensorShape([]), tf.int32, name=col)
pprint(SPECS)
BATCH_SIZE=32
train_ds = train_ds.map(
lambda x: tfio.experimental.serialization.decode_json(x, specs=SPECS)
)
# Prepare a tuple of (features, label)
train_ds = train_ds.map(lambda v: (v, v.pop("target")))
train_ds = train_ds.batch(BATCH_SIZE)
train_ds
test_ds = tfio.experimental.mongodb.MongoDBIODataset(
uri=URI, database=DATABASE, collection=TEST_COLLECTION
)
test_ds = test_ds.map(
lambda x: tfio.experimental.serialization.decode_json(x, specs=SPECS)
)
# Prepare a tuple of (features, label)
test_ds = test_ds.map(lambda v: (v, v.pop("target")))
test_ds = test_ds.batch(BATCH_SIZE)
test_ds
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization(axis=None)
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
all_inputs = []
encoded_features = []
for header in numerical_cols:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Set the parameters
OPTIMIZER="adam"
LOSS=tf.keras.losses.BinaryCrossentropy(from_logits=True)
METRICS=['accuracy']
EPOCHS=10
# Convert the feature columns into a tf.keras layer
all_features = tf.keras.layers.concatenate(encoded_features)
# design/build the model
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
# compile the model
model.compile(optimizer=OPTIMIZER, loss=LOSS, metrics=METRICS)
# fit the model
model.fit(train_ds, epochs=EPOCHS)
res = model.evaluate(test_ds)
print("test loss, test acc:", res)
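# Sketch (assumption, not part of the original tutorial): the last Dense layer has no
# activation and the loss was built with from_logits=True, so apply a sigmoid to the
# raw outputs to read them as adoption probabilities.
logits = model.predict(test_ds)
probs = tf.sigmoid(logits).numpy().flatten()
print("first 10 predicted adoption probabilities:", np.round(probs[:10], 3))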
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - Overview of the Problem set
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7:
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
train_set_y.shape
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
# GRADED FUNCTION: sigmoid
def sigmoid(z):
'''
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
'''
### START CODE HERE ### (≈ 1 line of code)
s = 1.0 / (1.0 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
'''
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
'''
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
'''
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
'''
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = - 1.0 / m * np.sum(Y * np.log(A) + (1.0 - Y) * np.log(1-A)) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1.0 / m * np.dot(X, (A - Y).T)
db = 1.0 / m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
'''
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
'''
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0, i] = A[0,i] > 0.5
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
print ("predictions = " + str(predict(w, b, X)))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
'''
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
'''
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost = print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
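# Sketch (assumption, not part of the original assignment): tabulate the final
# recorded cost for each learning rate from the `models` dictionary built above,
# to back the interpretation of the plot with numbers.
for lr in learning_rates:
    print("learning rate %s gives final cost %.6f" % (lr, models[str(lr)]["costs"][-1]))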
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in Google Scraper search results table
Step2: Programmatically identify unique images
Step3: Add the hash to each row in our data dataframe we loaded above
Step4: Open new dataframe with image hashes
Step5: HILLARY CLINTON DATA
Step6: Find an image file representative of each unique hash so we can look at each unique image
Step7: Collect unique images and put in separate directory
Step8: DONALD TRUMP DATA
Step9: Find an image file representative of each unique hash.
Step10: Collect unique images and put in separate directory
Step11: NEWS SOURCE INFORMATION
Step12: Getting political leaning from Allsides for news sources of all images in baseline dataset
Step13: Getting political leaning from a Facebook political bias ratings study
Step14: Combining ratings from Allsides and Facebook, together with crowdsourced political bias ratings from Mondo Times and author's judgement
Step15: Next, ratings from Allsides and the Facebook study were combined. Where ratings from both Allsides and Facebook are absent, or disagree, the bias rating was decided by Mondo Times where an outlet received more than 20 votes, and/or by the author based on outlets' "About" pages and other information. Where doubt remained, or ratings are not applicable (e.g. Getty Images), Unknown / Unreliable was assigned.
Step16: Make list of unique news sources, combine lists together from HC and DT, and output to a csv
Step17: Despite YouTube being in the Facebook study, the channel observed here in Hillary Clinton's list is a channel that displays a distinct bias toward Donald Trump and against Hillary Clinton. Therefore, it will be manually rated.
Step18: Read in my manual bias ratings
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 3)
plt.rcParams['font.family'] = 'sans-serif'
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
cols = ['requested_at', 'search_query', 'visible_link', 'rank' ,'image_path']
data = pd.read_csv('./image_box_data.csv', parse_dates=['requested_at'], usecols=cols)
data.head()
type(data.requested_at[0])
# Check the shelf has content. If nothing is printed out, run the following two scripts/cells
import shelve
db = shelve.open('db.shelve')
for k in db.keys():
print(k)
db.close()
%run ../index.py --dataset ./clinton --shelve db.shelve
%run ../index.py --dataset ./trump --shelve db.shelve
import shelve
db = shelve.open('db.shelve')
for key in db.keys(): # For every hash KEY
for f in db[key]: # for every file path name ITEM within the KEY
for index, i in enumerate(data.image_path): # For every Image path in each row of my DF
if f in i: # If the ITEM file path is also in the IMAGE PATH of my DF
data.loc[index, 'image_hash'] = key # Put the KEY into the 'image_hash' Column
data.to_csv("./hashedDF.csv", index=False)
db.close()
hashedDF = pd.read_csv('./hashedDF.csv')
hashedDF.head()
hashedDF['requested_at'] = pd.to_datetime(hashedDF['requested_at'])
type(hashedDF.requested_at[0])
len(hashedDF)
HC = hashedDF[hashedDF.search_query == 'hillary clinton']
HC.to_csv('HC_hashed.csv', index=False)
HC = pd.read_csv('HC_hashed.csv')
# What are the news sources, and how many times do they appear in the dataset?
HC.visible_link.value_counts()
# What are the hashes, and how many times does each one appear in the dataset?
HC.image_hash.value_counts()
print(len(HC.visible_link.unique()))
print(len(HC.image_hash.unique()))
HC.head()
print(type(HC.image_hash[0]))
print(type(HC['rank'][0]))
print(type(HC.requested_at[0]))
HC_unique_images = HC.groupby('image_hash').first().reset_index()
HC_unique_images
from shutil import copyfile
import os
def select_images(df_series, src_dir, dest_dir):
'''
provide dataframe series with all the image file names, the directory containing the images, and directory
where the unique images should go.
'''
try:
os.mkdir(dest_dir)
for file in df_series:
file = file.split('/')[-1]
copyfile(src_dir + file, dest_dir + file)
except FileExistsError:
pass
select_images(HC_unique_images.image_path,'clinton/', 'clinton_unique/' )
DT = hashedDF[hashedDF.search_query == 'donald trump']
DT.to_csv('DT_hashed.csv', index=False)
DT = pd.read_csv('DT_hashed.csv')
DT.visible_link.value_counts()
DT.image_hash.value_counts()
DT_unique_images = DT.groupby('image_hash').first().reset_index()
DT_unique_images
select_images(DT_unique_images.image_path,'trump/', 'trump_unique/' )
HC.visible_link.describe()
DT.visible_link.describe()
allsides = pd.read_json('../BASELINE/allsides_data.json')
allsides.head()
HC_unique_news_sources = []
HC.visible_link.unique()
HC[HC.visible_link.isnull()]
def get_unique_news_sources(col, source_list):
print(len(col.unique()))
for i in col.unique():
print(i)
source_list.append(i.split('//')[1].split('/')[0])
get_unique_news_sources(HC.visible_link, HC_unique_news_sources)
HC_unique_news_sources
DT_unique_news_sources = []
get_unique_news_sources(DT.visible_link, DT_unique_news_sources)
DT_unique_news_sources
def get_url(col):
col = col.split('//')[1]
col = col.split('/')[0]
return col
HC.loc[:, 'news_source_url'] = HC.visible_link.apply(get_url)
DT.loc[:, 'news_source_url'] = DT.visible_link.apply(get_url)
HC.head()
allsides.head()
def tag_bias_rating(candidate):
candidate['allsides_bias_rating'] = 999  # default column value for sources not matched in Allsides
allsides = pd.read_json('../BASELINE/allsides_data.json')
for i, valuei in enumerate(candidate.news_source_url):
for j, valuej in enumerate(allsides.url):
if 'http' in valuej:
# print("Found an HTTP in ", valuej)
valuej = valuej.split('//')[1]
# print(valuej)
try:
if valuei in valuej:
print(valuei, valuej)
if allsides.loc[j, 'bias_rating'] == 71: # Left
candidate.loc[i, 'allsides_bias_rating'] = -2
elif allsides.loc[j, 'bias_rating'] == 72: # Lean left
candidate.loc[i, 'allsides_bias_rating'] = -1
elif allsides.loc[j, 'bias_rating'] == 73: # center
candidate.loc[i, 'allsides_bias_rating'] = 0
elif allsides.loc[j, 'bias_rating'] == 74: # lean right
candidate.loc[i, 'allsides_bias_rating'] = 1
elif allsides.loc[j, 'bias_rating'] == 75: # Right
candidate.loc[i, 'allsides_bias_rating'] = 2
else:
candidate.loc[i, 'allsides_bias_rating'] = 999
except TypeError:
continue
tag_bias_rating(HC)
tag_bias_rating(DT)
for i in allsides.url:
if 'http' in i:
print(i.split('//')[1])
else:
print(i)
facebook = pd.read_csv('../Facebook_study.csv')
facebook.head()
cols = ['p', 'avg_align']
facebook = pd.read_csv('../Facebook_study.csv', usecols=cols)
facebook.head()
def tag_facebookbias_rating(candidate):
candidate['facebook_p'] = ''
candidate['facebookbias_rating'] = 999
count = 0
for i, valuei in enumerate(candidate.visible_link):
count += 1
valuei = valuei.split('//')[1]
valuei = valuei.split('/')[0]
print(valuei, count)
for j, valuej in enumerate(facebook.p):
if valuej == valuei:
print(valuei, valuej)
candidate.loc[i, 'facebookbias_rating'] = facebook.loc[j, 'avg_align']
candidate.loc[i, 'facebook_p'] = valuej
tag_facebookbias_rating(HC)
tag_facebookbias_rating(DT)
DT.facebookbias_rating[DT.facebookbias_rating < 3].plot.hist(alpha=0.5, bins=20, range=(-1,1), color='red')
HC.facebookbias_rating[HC.facebookbias_rating < 3].plot.hist(alpha=0.5, bins=20, range=(-1,1), color='blue')
plt.savefig('imagebox_facebookbias_hist.png')
DT.facebookbias_rating[DT.facebookbias_rating > 3].plot.hist(alpha=0.5, bins=10, range=(998,1000), color='red')
HC.facebookbias_rating[HC.facebookbias_rating > 3].plot.hist(alpha=0.5, bins=10, range=(998,1000), color='blue')
plt.savefig('imagebox_facebookbias_hist_unknowns.png')
HC.facebookbias_rating[HC.facebookbias_rating > 3].plot.hist()
DT.facebookbias_rating[DT.facebookbias_rating > 3].plot.hist()
HC.facebookbias_rating.value_counts()
DT.facebookbias_rating.value_counts()
def convert_facebookbias_toInts(col):
if col >= 0.6 and col <= 1:
return 2
elif col >= 0.2 and col < 0.6:
return 1
elif col > -0.2 and col < 0.2:
return 0
elif col > -0.6 and col <= -0.2:
return -1
elif col <= -0.6:
return -2
elif col == 999:
return 999
else:
return 999
HC['facebook_int'] = HC.facebookbias_rating.apply(convert_facebookbias_toInts)
DT['facebook_int'] = DT.facebookbias_rating.apply(convert_facebookbias_toInts)
HC.head()
HC.facebook_int.value_counts()
DT.facebook_int.value_counts()
def combine_ratings(candidate):
candidate['combine_rating'] = 'Not Rated'
for i, valuei in enumerate(candidate.allsides_bias_rating):
try:
# STATEMENTS FOR IF BOTH RATINGS AGREE:
# Both bias ratings say LEFT
if (valuei < 0) and (candidate.loc[i, 'facebook_int'] < 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Left")
candidate.loc[i, 'combine_rating'] = "Left"
# Both bias ratings say CENTER
elif (valuei == 0.0) and (candidate.loc[i, 'facebook_int'] == 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Center")
candidate.loc[i, 'combine_rating'] = "Center"
# Both bias ratings say RIGHT
elif (0 < valuei < 3) and (0 < candidate.loc[i, 'facebook_int'] < 3):
print(valuei, candidate.loc[i, 'facebook_int'], "Right")
candidate.loc[i, 'combine_rating'] = "Right"
# STATEMENTS FOR IF RATINGS ARE ONLY PRESENT IN ONE (ALLSIDES OR FACEBOOK STUDY)
# Only one scale has a rating of LEFT, while the other has no entry
elif (valuei < 0 and candidate.loc[i, 'facebook_int'] == 999) or (valuei == 999 and candidate.loc[i, 'facebook_int'] < 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Left")
candidate.loc[i, 'combine_rating'] = "Left"
# Only one scale has a rating of CENTER, while the other has no entry
elif (valuei == 0 and candidate.loc[i, 'facebook_int'] == 999) or (valuei == 999 and candidate.loc[i, 'facebook_int'] == 0):
print(valuei, candidate.loc[i, 'facebook_int'], "Center")
candidate.loc[i, 'combine_rating'] = "Center"
# Only one scale has a rating of RIGHT, while the other has no entry
elif (0 < valuei < 3 and candidate.loc[i, 'facebook_int'] == 999) or (valuei == 999 and 0 < candidate.loc[i, 'facebook_int'] < 3):
print(valuei, candidate.loc[i, 'facebook_int'], "Right")
candidate.loc[i, 'combine_rating'] = "Right"
# ALL OTHER RATINGS ARE EITHER ABSENT FOR BOTH SCALES OR THE SCALES DISAGREE
else:
print(valuei, candidate.loc[i, 'facebook_int'], "Not Rated")
candidate.loc[i, 'combine_rating'] = "Unknown / unreliable"
except KeyError:
continue
combine_ratings(HC)
len(HC)
combine_ratings(DT)
HC_Unrated = HC[HC.combine_rating == "Unknown / unreliable"]
DT_Unrated = DT[DT.combine_rating == "Unknown / unreliable"]
HC_Unrated.news_source_url.unique()
DT_Unrated.news_source_url.unique()
Unrated_newssource_list = HC_Unrated.news_source_url.unique().tolist()
DT_Unrated_newssource_list = DT_Unrated.news_source_url.unique().tolist()
print(len(Unrated_newssource_list))
print(len(DT_Unrated_newssource_list))
Unrated_newssource_list
DT_Unrated_newssource_list
HC[HC.news_source_url == 'www.youtube.com']
Unrated_newssource_list.append('www.youtube.com')
for i in DT_Unrated_newssource_list:
if i not in Unrated_newssource_list:
Unrated_newssource_list.append(i)
len(Unrated_newssource_list)
#tmp = pd.DataFrame(Unrated_newssource_list, columns=["news_source"])
#tmp.to_csv('unrated_newssources_imagebox.csv', index=False)
manual_rating = pd.read_csv('unrated_newssources_imagebox.csv')
manual_rating
def merge_manual_ratings(candidate, col):
candidate['final_rating'] = ''
for i, valuei in enumerate(candidate.news_source_url):
for j, valuej in enumerate(manual_rating.news_source):
if (valuei == valuej):
print(valuei, valuej, manual_rating.loc[j, col])
try:
if manual_rating.loc[j, col] < 0:
print("Left")
candidate.loc[i, 'final_rating'] = "Left"
elif manual_rating.loc[j, col] == 0:
print("Center")
candidate.loc[i, 'final_rating'] = "Center"
elif 999 > manual_rating.loc[j, col] > 0:
print("Right")
candidate.loc[i, 'final_rating'] = "Right"
elif manual_rating.loc[j, col] == 999:
print("Unknown/Unreliable")
candidate.loc[i, 'final_rating'] = "Unknown / unreliable"
except KeyError:
continue
for i, valuei in enumerate(candidate.final_rating):
if valuei == '':
try:
print("currently empty. Let's fill it up!!")
candidate.loc[i, 'final_rating'] = candidate.loc[i, 'combine_rating']
except KeyError:
continue
merge_manual_ratings(HC, 'final_rating_HC')
merge_manual_ratings(DT, 'final_rating_DT')
HC.allsides_bias_rating.value_counts()
DT.allsides_bias_rating.value_counts()
HC.facebookbias_rating.value_counts()
DT.facebookbias_rating.value_counts()
HC.final_rating.value_counts()
DT.final_rating.value_counts()
HC.final_rating.value_counts().plot(kind='bar', alpha=0.5, color='blue')
DT.final_rating.value_counts().plot(kind='bar', alpha=0.5, color='red')
HC.allsides_bias_rating.value_counts().plot(kind='bar', alpha=0.5, color='blue')
DT.allsides_bias_rating.value_counts().plot(kind='bar', alpha=0.5, color='red')
HC[HC.news_source_url == 'russia-insider.com']
DT[DT.final_rating == 'Unknown / unreliable']
HC.to_csv('HC_imagebox_full_ratings.csv', index=False)
DT.to_csv('DT_imagebox_full_ratings.csv', index=False)
HC.columns
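# Sketch (assumption, not part of the original analysis): put the two candidates'
# final bias-rating shares side by side as proportions, which is easier to compare
# than the separate bar charts above.
summary = pd.DataFrame({
    'HC': HC.final_rating.value_counts(normalize=True),
    'DT': DT.final_rating.value_counts(normalize=True),
})
print(summary.round(3))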
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To access the 3D final velocity use
Step2: Individual voxels in these 3D volumetric data cubes can be accessed as follows
Step3: where i,j and k index voxel positions along the x,y and z axes respectively. All indices run from 0 to 255.
Step4: Units are km/s for the mean field and (km/s)$^2$ for the variance field.
Step5: Example plot
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
velocity = np.load('borg_sdss_velocity.npz')
#3D probabilistic maps for velocity field
vx_mean=velocity['vx_mean']
vx_var=velocity['vx_var']
vy_mean=velocity['vy_mean']
vy_var=velocity['vy_var']
vz_mean=velocity['vz_mean']
vz_var=velocity['vz_var']
k=10;j=127;i=243
vx_mean_ijk=vx_mean[k,j,i]
#Minimum and maximum position along the x-axis in Mpc/h
xmin=velocity['ranges'][0]
xmax=velocity['ranges'][1]
#Minimum and maximum position along the y-axis in Mpc/h
ymin=velocity['ranges'][2]
ymax=velocity['ranges'][3]
#Minimum and maximum position along the z-axis in Mpc/h
zmin=velocity['ranges'][4]
zmax=velocity['ranges'][5]
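# Sketch (assumption, not part of the original notebook): convert a voxel index to a
# comoving position in Mpc/h, taking the 256^3 grid to be cell-centred and to span
# the ranges loaded above.
N = 256
dx = (xmax - xmin) / N
x_pos = xmin + (i + 0.5) * dx
print("voxel i=%d sits at x = %.2f Mpc/h" % (i, x_pos))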
from matplotlib import pyplot as plt
%matplotlib inline
f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(15,5))
ax1.imshow(vx_mean[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="magma")
ax1.set_title('$v_x$')
ax1.set_aspect('equal')
ax2.imshow(vy_mean[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="magma")
ax2.set_title('$v_y$')
ax3.imshow(vz_mean[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="magma")
ax3.set_title('$v_z$')
plt.show()
from matplotlib import pyplot as plt
%matplotlib inline
f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(15,5))
ax1.imshow(vx_var[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="viridis")
ax1.set_title('$v_x$')
ax1.set_aspect('equal')
ax2.imshow(vy_var[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="viridis")
ax2.set_title('$v_y$')
ax3.imshow(vz_var[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="viridis")
ax3.set_title('$v_z$')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Define VAE
Step7: 1D Gaussian
Step8: Plot the data to verify
Step9: Merge and shuffle them, use VAE to train on data
Step10: Reconstruct Data
Step11: 2D Gaussian
Step12: Reconstruct Data
| <ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import time
from tensorflow.python.client import timeline
import matplotlib.pyplot as plt
%matplotlib inline
FLAGS = tf.app.flags.FLAGS
# number of device count
tf.app.flags.DEFINE_integer('num_cpu_core', 1, 'Number of CPU cores to use')
tf.app.flags.DEFINE_integer('intra_op_parallelism_threads', 1, 'How many ops can be launched in parallel')
tf.app.flags.DEFINE_integer('num_gpu_core', 0, 'Number of GPU cores to use')
device_id = -1 # Global Variable Counter for device_id used
def next_device(use_cpu = True):
''' See if there is available next device;
Args: use_cpu, global device_id
Return: new device id
'''
global device_id
if (use_cpu):
if ((device_id + 1) < FLAGS.num_cpu_core):
device_id += 1
device = '/cpu:%d' % device_id
else:
if ((device_id + 1) < FLAGS.num_gpu_core):
device_id += 1
device = '/gpu:%d' % device_id
return device
def xavier_init(neuron_in, neuron_out, constant=1):
low = -constant*np.sqrt(6/(neuron_in + neuron_out))
high = constant*np.sqrt(6/(neuron_in + neuron_out))
return tf.random_uniform((neuron_in, neuron_out), minval=low, maxval=high, dtype=tf.float32)
def init_weights(config):
'''Initialize weights with the specified configuration using the Xavier algorithm.'''
encoder_weights = dict()
decoder_weights = dict()
# two layers encoder
encoder_weights['h1'] = tf.Variable(xavier_init(config['x_in'], config['encoder_1']))
encoder_weights['h2'] = tf.Variable(xavier_init(config['encoder_1'], config['encoder_2']))
encoder_weights['mu'] = tf.Variable(xavier_init(config['encoder_2'], config['z']))
encoder_weights['sigma'] = tf.Variable(xavier_init(config['encoder_2'], config['z']))
encoder_weights['b1'] = tf.Variable(tf.zeros([config['encoder_1']], dtype=tf.float32))
encoder_weights['b2'] = tf.Variable(tf.zeros([config['encoder_2']], dtype=tf.float32))
encoder_weights['bias_mu'] = tf.Variable(tf.zeros([config['z']], dtype=tf.float32))
encoder_weights['bias_sigma'] = tf.Variable(tf.zeros([config['z']], dtype=tf.float32))
# two layers decoder
decoder_weights['h1'] = tf.Variable(xavier_init(config['z'], config['decoder_1']))
decoder_weights['h2'] = tf.Variable(xavier_init(config['decoder_1'], config['decoder_2']))
decoder_weights['mu'] = tf.Variable(xavier_init(config['decoder_2'], config['x_in']))
decoder_weights['sigma'] = tf.Variable(xavier_init(config['decoder_2'], config['x_in']))
decoder_weights['b1'] = tf.Variable(tf.zeros([config['decoder_1']], dtype=tf.float32))
decoder_weights['b2'] = tf.Variable(tf.zeros([config['decoder_2']], dtype=tf.float32))
decoder_weights['bias_mu'] = tf.Variable(tf.zeros([config['x_in']], dtype=tf.float32))
decoder_weights['bias_sigma'] = tf.Variable(tf.zeros([config['x_in']], dtype=tf.float32))
return (encoder_weights, decoder_weights)
def forward_z(x, encoder_weights):
'''Compute mean and sigma of z.'''
with tf.device(next_device()):
layer_1 = tf.nn.softplus(tf.add(tf.matmul(x, encoder_weights['h1']), encoder_weights['b1']))
with tf.device(next_device()):
layer_2 = tf.nn.softplus(tf.add(tf.matmul(layer_1, encoder_weights['h2']), encoder_weights['b2']))
z_mean = tf.add(tf.matmul(layer_2, encoder_weights['mu']), encoder_weights['bias_mu'])
z_sigma = tf.add(tf.matmul(layer_2, encoder_weights['sigma']), encoder_weights['bias_sigma'])
return(z_mean, z_sigma)
def reconstruct_x(z, decoder_weights):
'''Use z to reconstruct x.'''
with tf.device(next_device()):
layer_1 = tf.nn.softplus(tf.add(tf.matmul(z, decoder_weights['h1']), decoder_weights['b1']))
with tf.device(next_device()):
layer_2 = tf.nn.softplus(tf.add(tf.matmul(layer_1, decoder_weights['h2']), decoder_weights['b2']))
x_prime = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, decoder_weights['mu']), decoder_weights['bias_mu']))
return x_prime
def optimize_func(z, z_mean, z_sigma, x, x_prime, learn_rate):
'''Define the cost function and optimizer.'''
# define loss function
# reconstruction lost
recons_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_prime) + (1-x) * tf.log(1e-10 + 1 - x_prime), 1)
# KL distance
latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_sigma), 1)
# summing two loss terms together
cost = tf.reduce_mean(recons_loss + latent_loss)
# use ADAM to optimize
optimizer = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(cost)
return (cost, optimizer)
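# Added commentary (not in the original): the two terms above form the negative ELBO.
# `recons_loss` is the (negated) Bernoulli reconstruction log-likelihood and
# `latent_loss` is the KL divergence between q(z|x) = N(z_mean, exp(z_sigma)) and the
# standard normal prior, so minimising `cost` maximises the evidence lower bound.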
def vae_init(batch_size=100, learn_rate=0.001, config={}):
'''
This function builds a variational autoencoder based on https://jmetzen.github.io/2015-11-27/vae.html
In consideration of simplicity and future work on optimization, we removed the class structure.
A TensorFlow session, optimizer and cost function, as well as the input placeholder and reconstruction op, are returned.
'''
# default configuration of network
# x_in = 784
# encoder_1 = 500
# encoder_2 = 500
# decoder_1 = 500
# decoder_2 = 500
# z = 20
# use default setting if no configuration is specified
if not config:
config['x_in'] = 784
config['encoder_1'] = 500
config['encoder_2'] = 500
config['decoder_1'] = 500
config['decoder_2'] = 500
config['z'] = 20
# input
x = tf.placeholder(tf.float32, [None, config['x_in']])
# initialize weights
(encoder_weights, decoder_weights) = init_weights(config)
# compute mean and sigma of z
(z_mean, z_sigma) = forward_z(x, encoder_weights)
# compute z by drawing sample from normal distribution
eps = tf.random_normal((batch_size, config['z']), 0, 1, dtype=tf.float32)
z_val = tf.add(z_mean, tf.multiply(tf.sqrt(tf.exp(z_sigma)), eps))
# use z to reconstruct the network
x_prime = reconstruct_x(z_val, decoder_weights)
# define loss function
(cost, optimizer) = optimize_func(z_val, z_mean, z_sigma, x, x_prime, learn_rate)
# initialize all variables
init = tf.global_variables_initializer()
#
config_ = tf.ConfigProto(device_count={"CPU": FLAGS.num_cpu_core}, # limit to num_cpu_core CPU usage
inter_op_parallelism_threads = 1,
intra_op_parallelism_threads = FLAGS.intra_op_parallelism_threads,
log_device_placement=True)
# define and return the session
sess = tf.InteractiveSession(config=config_)
sess.run(init)
return (sess, optimizer, cost, x, x_prime)
mu_H0, sigma_H0 = 0, 0.5
mu_H1, sigma_H1 = 2, 0.5
n_samples = 1000000
H0 = np.random.normal(mu_H0, sigma_H0, n_samples).reshape((-1,1))
H1 = np.random.normal(mu_H1, sigma_H1, n_samples).reshape((-1,1))
H0_label = np.zeros((n_samples, 1))
H1_label = np.ones((n_samples, 1))
_, bins_0, _ = plt.hist(H0, 50, normed=True, label='H0')
_, bins_1, _ = plt.hist(H1, 50, normed=True, label='H1')
plt.plot(bins_0, 1/(sigma_H0 * np.sqrt(2 * np.pi)) * np.exp( - (bins_0 - mu_H0)**2 / (2 * sigma_H0**2) ),
linewidth=2, color='r', label='H0 ideal')
plt.plot(bins_1, 1/(sigma_H1 * np.sqrt(2 * np.pi)) * np.exp( - (bins_1 - mu_H1)**2 / (2 * sigma_H1**2) ),
linewidth=2, color='y', label='H1 ideal')
plt.legend(fontsize=10)
plt.show()
data = np.concatenate((H0, H1), axis=0)
np.random.shuffle(data)
def vae_train(sess, optimizer, cost, x, n_samples, batch_size=100, learn_rate=0.001, train_epoch=10, verb=1, verb_step=5):
start_time = time.time()
for epoch in range(train_epoch):
avg_cost = 0
total_batch = int(n_samples / batch_size)
for i in range(total_batch):
batch_x = data[i*batch_size:(i+1)*batch_size]
_, c = sess.run((optimizer, cost), feed_dict={x: batch_x})
avg_cost += c / n_samples * batch_size
elapsed_time = (time.time() - start_time)* 1000 / verb_step
start_time = time.time()
if verb:
if epoch % verb_step == 0:
print('Epoch:%04d' % (epoch+1), 'cost=', '{:.9f}'.format(avg_cost), 'Elapsed time: ','%.9f' % elapsed_time)
config = {}
config['x_in'] = 1
config['encoder_1'] = 1
config['encoder_2'] = 1
config['decoder_1'] = 1
config['decoder_2'] = 1
config['z'] = 2
batch_size = 50000
(sess, optimizer, cost, x, x_prime) = vae_init(batch_size=batch_size, config=config)
vae_train(sess, optimizer, cost, x, n_samples*2, batch_size=batch_size, learn_rate=0.00001, train_epoch=26)
total_batch = int(n_samples*2 / batch_size)
data_reconstruct = []
for i in range(total_batch):
data_reconstruct.append(sess.run(x_prime, feed_dict={x: data[i*batch_size:(i+1)*batch_size]}))
data_reconstruct = np.concatenate(data_reconstruct, axis=0)
_, bins, _ = plt.hist(data_reconstruct, bins=50, normed=True, label='reconstruct')
plt.plot(bins_0, 1/(sigma_H0 * np.sqrt(2 * np.pi)) * np.exp( - (bins_0 - mu_H0)**2 / (2 * sigma_H0**2) ),
linewidth=2, color='r', label='H0 ideal')
plt.plot(bins_1, 1/(sigma_H1 * np.sqrt(2 * np.pi)) * np.exp( - (bins_1 - mu_H1)**2 / (2 * sigma_H1**2) ),
linewidth=2, color='y', label='H1 ideal')
plt.legend(fontsize=10)
plt.show()
mu_H0, sigma_H0 = (0,0), [[0.3,0],[0,0.3]]
mu_H1, sigma_H1 = (2,2), [[0.1,0],[0,0.9]]
n_samples = 500000
H0 = np.random.multivariate_normal(mu_H0, sigma_H0, n_samples).reshape((-1,2))
H1 = np.random.multivariate_normal(mu_H1, sigma_H1, n_samples).reshape((-1,2))
H0_label = np.zeros((n_samples, 1))
H1_label = np.ones((n_samples, 1))
plt.scatter(H0[:,0], H0[:,1], c='b', label='H0')
plt.scatter(H1[:,0], H1[:,1], c='r', label='H1')
plt.legend(fontsize=10)
plt.show()
data = np.concatenate((H0, H1), axis=0)
np.random.shuffle(data)
def vae_train(sess, optimizer, cost, x, n_samples, batch_size=100, learn_rate=0.001, train_epoch=10, verb=1, verb_step=5):
start_time = time.time()
for epoch in range(train_epoch):
avg_cost = 0
total_batch = int(n_samples / batch_size)
for i in range(total_batch):
batch_x = data[i*batch_size:(i+1)*batch_size,:]
_, c = sess.run((optimizer, cost), feed_dict={x: batch_x})
avg_cost += c / n_samples * batch_size
elapsed_time = (time.time() - start_time)* 1000 / verb_step
start_time = time.time()
if verb:
if epoch % verb_step == 0:
print('Epoch:%04d' % (epoch+1), 'cost=', '{:.9f}'.format(avg_cost), 'Elapsed time: ','%.9f' % elapsed_time)
config = {}
config['x_in'] = 2
config['encoder_1'] = 1
config['encoder_2'] = 1
config['decoder_1'] = 1
config['decoder_2'] = 1
config['z'] = 2
batch_size = 10000
(sess, optimizer, cost, x, x_prime) = vae_init(batch_size=batch_size, config=config)
vae_train(sess, optimizer, cost, x, n_samples*2, batch_size=batch_size, learn_rate=0.001, train_epoch=26)
total_batch = int(n_samples*2 / batch_size)
data_reconstruct = []
for i in range(total_batch):
data_reconstruct.append(sess.run(x_prime, feed_dict={x: data[i*batch_size:(i+1)*batch_size,:]}))
data_reconstruct = np.concatenate(data_reconstruct, axis=0)
plt.scatter(data_reconstruct[:,0], data_reconstruct[:,1])
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy
Step2: Requests
Step3: Pandas (optional)
Step4: SciPy (optional)
Step5: 2) Import scidb-py
Step6: Connect to the database server
Step7: 3) Read the file listing each of the waveform records
Step8: Strip special characters from the record line
Step9: 4) Import WFDB to connect to PhysioNet
Step10: Find the position of the lead II signal
Step11: Normalize the signal and drop the null values
Step12: Change the hyphens "-" to underscores "_" because, for some reason, SciDB has trouble with those characters
Step13: Check the list of arrays in SciDB
| <ASSISTANT_TASK:>
Python Code:
import sys
sys.version_info
import numpy as np
np.__version__
import requests
requests.__version__
import pandas as pd
pd.__version__
import scipy
scipy.__version__
import scidbpy
scidbpy.__version__
from scidbpy import connect
sdb = connect('http://localhost:8080')
import urllib.request # urllib2 in python2 the lib that handles the url stuff
target_url = "https://physionet.org/physiobank/database/mimic3wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[2])
line
line = line.replace('b\'','').replace('\'','').replace('\\n','')
splited = line.split("/")
splited
carpeta,subCarpeta,onda = line.split("/")
carpeta = carpeta+"/"+subCarpeta
onda
import wfdb
carpeta = "p05/p050140"
onda = "p050140-2188-07-26-05-51"
sig, fields = wfdb.srdsamp(onda,pbdir='mimic3wdb/matched/'+carpeta, sampfrom=10000)
print(sig)
print("signame: " + str(fields['signame']))
print("units: " + str(fields['units']))
print("fs: " + str(fields['fs']))
print("comments: " + str(fields['comments']))
print("fields: " + str(fields))
signalII = None
try:
signalII = fields['signame'].index("II")
except ValueError:
print("List does not contain value")
if(signalII!=None):
print("List contains value")
#array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)
array = sig[:, signalII]
array = array[~np.isnan(sig[:, signalII])]
arrayNun = np.trim_zeros(array)
array
ondaName = onda.replace("-", "_")
if arrayNun.size>0 :
sdb.input(upload_data=array).store(ondaName,gc=False)
# sdb.iquery("store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), "+ondaName+")", upload_data=array)
dir(sdb.arrays)
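# Sketch (assumption, not in the original notebook): quick visual check of the lead II
# segment that was just uploaded to SciDB.
import matplotlib.pyplot as plt
if arrayNun.size > 0:
    plt.plot(array[:2000])
    plt.title(ondaName + ' (lead II, first 2000 samples)')
    plt.xlabel('sample index')
    plt.ylabel(fields['units'][signalII])
    plt.show()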
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise. Write a snippet of code to verify that the vertex IDs are dense in some interval $[1, n]$. That is, there is a minimum value of $1$, some maximum value $n$, and no missing values between $1$ and $n$.
Step2: Exercise. Make sure every edge has its end points in the vertex table.
Step3: Exercise. Determine which vertices have no incident edges. Store the number of such vertices in a variable, num_solo_vertices.
Step4: Exercise. Compute a view called Outdegrees, which contains the following columns
Step5: Exercise. Query the database to extract a report of which URLs point to which URLs. Also include the source vertex out-degree and order the rows in descending order by it.
Step6: Part 2
Step7: Errata
Step10: Exercise. Implement a function to multiply a sparse matrix by a dense vector, assuming a dense vector defined as follows.
Step11: As a quick test, let's verify that multiplying $A_1$ by the vector of all ones, $u$, counts the number of vertices.
Step16: Exercise. Complete the PageRank implementation for this dataset. To keep it simple, you may take $\alpha=0.85$, $x(0)$ equal to the vector of all $1/n$ values, and 25 iterations.
Step17: Exercise. Check your result by first inserting the final computed PageRank vector back into the database, and then using a SQL query to see the ranked URLs. In your query output, also include both the in-degrees and out-degrees of each vertex.
| <ASSISTANT_TASK:>
Python Code:
import sqlite3 as db
import pandas as pd
def get_table_names (conn):
assert type (conn) == db.Connection # Only works for sqlite3 DBs
query = "SELECT name FROM sqlite_master WHERE type='table'"
return pd.read_sql_query (query, conn)
def print_schemas (conn, table_names=None, limit=0):
assert type (conn) == db.Connection # Only works for sqlite3 DBs
if table_names is None:
table_names = get_table_names (conn)
c = conn.cursor ()
query = "PRAGMA TABLE_INFO ({table})"
for name in table_names:
c.execute (query.format (table=name))
columns = c.fetchall ()
print ("=== {table} ===".format (table=name))
col_string = "[{id}] {name} : {type}"
for col in columns:
print (col_string.format (id=col[0],
name=col[1],
type=col[2]))
print ("\n")
conn = db.connect ('poliblogs.db')
for name in get_table_names (conn)['name']:
print_schemas (conn, [name])
query = '''SELECT * FROM %s LIMIT 5''' % name
print (pd.read_sql_query (query, conn))
print ("\n")
query = '''
SELECT MIN(Id) AS MinId,
MAX(Id) AS MaxId,
COUNT(DISTINCT Id) AS NumDistinctIds
FROM Vertices
'''
df = pd.read_sql_query (query, conn)
print df
assert df.MinId[0] == 1
assert df.MaxId[0] == df.NumDistinctIds[0]
print ("\n==> Verified: Vertex ids cover [1, %d] densely." \
% df.NumDistinctIds[0])
query = '''
SELECT {col} FROM Edges
WHERE {col} NOT IN (SELECT Id FROM Vertices)
'''
df_s = pd.read_sql_query (query.format (col='Source'), conn)
print (df_s['Source'])
df_t = pd.read_sql_query (query.format (col='Target'), conn)
print (df_t['Target'])
assert df_s['Source'].empty
assert df_t['Target'].empty
print ("==> Verified: All source and target IDs are vertices.")
query = '''
SELECT Id, Url
FROM Vertices
WHERE (Id NOT IN (SELECT DISTINCT Source FROM Edges))
AND (Id NOT IN (SELECT DISTINCT Target FROM Edges))
'''
df_solo_vertices = pd.read_sql_query (query, conn)
print df_solo_vertices.head ()
num_solo_vertices = len (df_solo_vertices)
# Our testing code follows, assuming your `num_solo_vertices` variable:
print ("\n==> %d vertices have no incident edges." % num_solo_vertices)
assert num_solo_vertices == 266
# Complete this query:
query = '''
CREATE VIEW IF NOT EXISTS Outdegrees AS
SELECT Source AS Id, COUNT(*) AS Degree
FROM Edges
GROUP BY Source
'''
c = conn.cursor ()
c.execute (query)
from IPython.display import display
query = '''
SELECT Outdegrees.Id, Degree, Url
FROM Outdegrees, Vertices
WHERE Outdegrees.Id = Vertices.Id
ORDER BY -Degree
'''
df_outdegrees = pd.read_sql_query (query, conn)
print "==> A few entries with large out-degrees:"
display (df_outdegrees.head ())
print "\n==> A few entries with small out-degrees:"
display (df_outdegrees.tail ())
query = '''
SELECT S.Url, T.Url, Out.Degree
FROM Edges AS E,
(SELECT Id, Url FROM Vertices) AS S,
(SELECT Id, Url FROM Vertices) AS T,
(SELECT Id, Degree FROM Outdegrees) AS Out
WHERE (E.Source=S.Id) AND (E.Target=T.Id) AND (E.Source=Out.Id)
ORDER BY -Out.Degree
'''
df_G = pd.read_sql_query (query, conn)
from IPython.display import display
display (df_G.head ())
print ("...")
display (df_G.tail ())
from cse6040utils import sparse_matrix
# Extract entries from the table
query = '''
SELECT Target AS Row, Source AS Col, 1.0/Degree AS Val
FROM Edges, Outdegrees
WHERE Edges.Source = Outdegrees.Id
'''
df_A = pd.read_sql_query (query, conn)
display (df_A.head (10))
# Copy entries from df_A into A_1
A_1 = sparse_matrix () # Initially all zeros, with no rows or columns
for (i, j, a_ij) in zip (df_A['Row'], df_A['Col'], df_A['Val']):
A_1[i-1][j-1] += a_ij # "-1" switches to 0-based indexing
# Select all vertices with no outgoing edges
query = '''
SELECT Id FROM Vertices
WHERE Id NOT IN (SELECT DISTINCT Source FROM Edges)
'''
df_anti_social = pd.read_sql_query (query, conn)
print ("==> Found %d vertices with no outgoing links." \
% len (df_anti_social))
# Add self-edges for empty rows/columns
for i in df_anti_social['Id']:
A_1[i-1][i-1] = 1.0
def dense_vector (n, init_val=0.0):
"""Returns a dense vector of length n, with all entries set to init_val."""
return [init_val] * n
def spmv (n, A, x):
"""Returns a dense vector y of length n, where y = A*x."""
y = dense_vector (n)
for (i, A_i) in A.items ():
s = 0
for (j, a_ij) in A_i.items ():
s += a_ij * x[j]
y[i] = s
return y
n = df.NumDistinctIds[0] # Number of vertices, from Part 1
u = dense_vector (n, 1.0)
y = spmv (n, A_1, u)
print sum (y)
# Some helper functions, in case you need them
import math
def vec_scale (x, alpha):
"""Scales the vector x by a constant alpha."""
return [x_i*alpha for x_i in x]
def vec_add_scalar (x, c):
"""Adds the scalar value c to every element of x."""
return [x_i+c for x_i in x]
def vec_sub (x, y):
"""Returns x - y."""
return [x_i - y_i for (x_i, y_i) in zip (x, y)]
def vec_2norm (x):
"""Returns ||x||_2."""
return math.sqrt (sum ([x_i**2 for x_i in x]))
# YOUR CODE GOES BELOW. We've provided some scaffolding code,
# so you just need to complete it.
ALPHA = 0.85 # Probability of following some link
MAX_ITERS = 25
n = df.NumDistinctIds[0] # Number of vertices, from Part 1
# Let X[t] store the dense x(t) vector at time t
X = []
x_0 = dense_vector (n, 1.0/n) # Initial distribution: 1/n at each page
X.append (x_0)
for t in range (1, MAX_ITERS):
# Complete this implementation
X.append (...)
# Write some code here to create a table in the database
# called PageRank. It should have one column to hold the
# page (vertex) ID, and one for the rank value.
# Some helper code to compute a view containing the indegrees.
query = '''
CREATE VIEW IF NOT EXISTS Indegrees AS
SELECT Target AS Id, COUNT(*) AS Degree
FROM Edges
GROUP BY Target
'''
c = conn.cursor ()
c.execute (query)
# Complete this query:
query = '''
...
'''
df_ranks = pd.read_sql_query (query, conn)
display (df_ranks)
sum (df_ranks['Rank'])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To obtain these curves, we sort the predictions made by the classifier from the smallest to the biggest for each group and put them on a $[0, 1]$ scale on the x-axis. The value corresponding to $x=0.5$ is the median of the distribution. Similarly for each quantile level in $[0,1]$ we obtain the corresponding quantile of the distribution.
Step2: We can see on this figure that the support of the two quantile function is different, since the number of points in the two groups is different. In order to compute the gap between the two curves, we first interpolate the two curves on the union of the supports. The Wasserstein distance corresponds to the gap between the two quantile functions. Here we show two interpolations schemes that make it easy to estimate the Wasserstein distance between two 1D measures.
Step3: Soft Wasserstein
Step4: It is noteworthy to see that the obtained interpolation corresponds to a smooth version of the 'nearest' interpolation.
Step5: Training a network
Step6: We can see that when we increase the fairness regularization factor $\lambda$, the training accuracy slightly decreases but it does not impact too much the eval accuracy. The fairness regularizer is a rather good regularizer. For $\lambda = 1000$ the training metrics are a bit more degraded as well as the eval ones, but we also note that after 100 epochs this classifier has not converged yet, so we could also imagine that it would catch up in terms of eval metrics.
| <ASSISTANT_TASK:>
Python Code:
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
plot_quantiles(logits, groups, ax)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title(f'Baseline Quantiles', fontsize=22)
ax.set_xlabel('Quantile Level', fontsize=18)
ax.set_ylabel('Prediction', fontsize=18)
N = 24
rng = jax.random.PRNGKey(1)
rng, *rngs = jax.random.split(rng, 3)
y_pred = 3 * jax.random.uniform(rngs[0], (N,))
groups = jax.random.uniform(rngs[1], (N,)) < 0.25
support_0 = jnp.linspace(0, 1, N - jnp.sum(groups))
support_1 = jnp.linspace(0, 1, jnp.sum(groups))
quantiles_0 = jnp.sort(y_pred[jnp.logical_not(groups)])
quantiles_1 = jnp.sort(y_pred[groups])
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.plot(support_0, quantiles_0,
lw=3, marker='o', markersize=10, label='group 0', markeredgecolor='k')
ax.plot(support_1, quantiles_1,
lw=3, marker='o', markersize=10, label='group 1', markeredgecolor='k')
ax.set_xlabel('Quantile level', fontsize=18)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.legend(fontsize=16)
import scipy
kinds = ['linear', 'nearest']
fig, axes = plt.subplots(1, len(kinds), figsize=(8 * len(kinds), 5))
for ax, kind in zip(axes, kinds):
q0 = scipy.interpolate.interp1d(support_0, quantiles_0, kind=kind)
q1 = scipy.interpolate.interp1d(support_1, quantiles_1, kind=kind)
support_01 = jnp.sort(jnp.concatenate([support_0, support_1]))
ax.plot(support_01, q0(support_01), label='group 0',
lw=3, marker='o', markersize=10, markeredgecolor='k')
ax.plot(support_01, q1(support_01), label='group 1',
lw=3, marker='o', markersize=10, markeredgecolor='k')
ax.fill_between(support_01, q0(support_01), q1(support_01),
color='y', hatch='|', fc='w')
ax.set_xlabel('Quantile level', fontsize=18)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.legend(fontsize=16)
ax.set_title(f'Interpolation {kind}', fontsize=20)
import functools
@functools.partial(jax.jit, static_argnums=(2,))
def sort_group(inputs: jnp.ndarray, group: jnp.ndarray, target_size: int = 16):
a = group / jnp.sum(group)
b = jnp.ones(target_size) / target_size
ot = ott.tools.soft_sort.transport_for_sort(inputs, a, b, dict(epsilon=1e-3))
return 1.0 / b * ot.apply(inputs, axis=0)
target_sizes = [4, 16, 64]
_, axes = plt.subplots(1, len(target_sizes), figsize=(len(target_sizes * 8), 5))
for ax, target_size in zip(axes, target_sizes):
ax.plot(sort_group(y_pred, jnp.logical_not(groups), target_size),
lw=3, marker='o', markersize=10, markeredgecolor='k', label='group 0')
ax.plot(sort_group(y_pred, groups, target_size),
lw=3, marker='o', markersize=10, markeredgecolor='k', label='group 0')
ax.legend(fontsize=16)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title(f'Group soft sorting on support of size {target_size}', fontsize=20)
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 2, figsize=(16, 10))
for weight, curves in result.items():
for ax_row, metric in zip(axes, ['loss', 'accuracy']):
for ax, phase in zip(ax_row, ['train', 'eval']):
arr = np.array(curves[f'{phase}_{metric}'])
ax.plot(arr[:, 0], arr[:, 1], label=f'$\lambda={weight:.0f}$',
lw=5, marker='o', markersize=12, markeredgecolor='k', markevery=10)
ax.set_title(f'{metric} / {phase}', fontsize=20)
ax.legend(fontsize=18)
ax.set_xlabel('Epoch', fontsize=18)
ax.tick_params(axis='both', which='major', labelsize=16)
plt.tight_layout()
num_rows = 2
num_cols = len(weights[1:]) // 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(7 * num_cols, 5 * num_rows))
for ax, w in zip(axes.ravel(), weights[1:]):
logits, groups = get_predictions(ds_test, config, states[w])
plot_quantiles(logits, groups, ax)
ax.set_title(f'$\lambda = {w:.0f}$', fontsize=22)
ax.set_ylabel('Prediction', fontsize=18)
plt.tight_layout()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Methods can be chained on the celestial bodies available in planets.
Step2: The utc method lets you enter time data manually, but also accepts Python's datetime module.
Step3: <div class="alert alert-warning">
Step4: <div class="alert alert-warning">
Step5: <div class="alert alert-success">
Step6: <div class="alert alert-info">
Step7: The GCRS system and its link with latitude, longitude and sidereal time
Step8: <div class="alert alert-warning">
Step10: Propagating the trajectories of objects in orbit
Step11: <div class="alert alert-warning">
Step12: <div class="alert alert-warning">
Step13: The terminator
Step14: Next passes of the ISS
Step15: <div class="alert alert-warning" style="margin-top
Step16: Bonus
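For Step13 (the terminator), one simple approach, sketched here under the assumption that the earth, sun and ts objects built in the code below are available, is to flag each point of a latitude/longitude grid as day or night from the sign of the Sun's apparent altitude; the day/night boundary is the terminator.
import numpy as np
from skyfield.api import Topos

def day_night_mask(t, lats, lons):
    # True where the Sun is above the horizon at time t.
    mask = np.zeros((len(lats), len(lons)), dtype=bool)
    for i, lat in enumerate(lats):
        for j, lon in enumerate(lons):
            alt, _, _ = (earth + Topos(lat, lon)).at(t).observe(sun).apparent().altaz()
            mask[i, j] = alt.degrees > 0
    return mask

# Example on a coarse 5-degree grid (slow but simple):
# mask = day_night_mask(ts.utc(now), np.arange(-85, 86, 5), np.arange(-180, 181, 5))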
| <ASSISTANT_TASK:>
Python Code:
from skyfield.api import load, utc
ts = load.timescale()
# load the ephemerides
planets = load('de421.bsp')
earth = planets['earth']
sun = planets['sun']
moon = planets['moon']
# Position of the Earth on 1 January 2017
earth.at(ts.utc(2017, 1, 1))
import datetime
now = datetime.datetime.now(utc)
now
# Relative position of the Sun with respect to the Earth on 1 January 2017, 12:10
earth.at(ts.utc(now)).observe(sun)
earth.at(ts.utc(now)).observe(sun).radec()
# %load solutions/declinaison_nulle.py
# %load solutions/tropiques.py
from collections import namedtuple
from skyfield.api import Topos
import pytz
city = namedtuple('city', ['coords', 'winter', 'summer'])
fmt = '%H:%M:%S %Z %z'
msg = "Midi à {} ({}): azimuth de {:0.02f} deg"
timezones = {
# ORY
'Europe/Paris': city(coords=(48.725278, 2.359444), winter=1, summer=2),
# SXF
'Europe/Berlin': city(coords=(52.380001, 13.52258), winter=1, summer=2),
# LHR
'Europe/London': city(coords=(51.4775, -0.461389), winter=0, summer=1),
# HND
'Asia/Tokyo': city(coords=(35.552258, 139.779694), winter=9, summer=9),
# EZE
'America/Buenos_Aires': city(coords=(-34.822222, -58.535833), winter=-3, summer=-3)
}
print("Heures d'hiver:")
for tz, city in timezones.items():
noon = datetime.datetime(2017, 1, 1, 12 - city.winter, tzinfo=utc)
_, az, _ = (earth + Topos(*city.coords)).at(ts.utc(noon)).observe(sun).apparent().altaz()
print(msg.format(tz, noon.astimezone(pytz.timezone(tz)).strftime(fmt), az.degrees))
print()
print("Heures d'été:")
for tz, city in timezones.items():
if 'Europe' not in tz: continue
noon = datetime.datetime(2017, 7, 1, 12 - city.summer, tzinfo=utc)
_, az, _ = (earth + Topos(*city.coords)).at(ts.utc(noon)).observe(sun).apparent().altaz()
print(msg.format(tz, noon.astimezone(pytz.timezone(tz)).strftime(fmt), az.degrees))
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
today = datetime.date.today()
# How many days are there in this month?
import calendar
_, total_days = calendar.monthrange(today.year, today.month)
# The Moon as observed from Paris
paris = earth + Topos(*timezones['Europe/Paris'].coords)
stack=[]
for day in range(1, total_days+1):
dt = ts.utc(datetime.date(today.year, today.month, day))
_, moon_az, _ = paris.at(dt).observe(moon).apparent().altaz()
_, sun_az, _ = paris.at(dt).observe(sun).apparent().altaz()
stack.append(moon_az.radians - sun_az.radians)
# Wrap back into [-π, π]
stack = np.angle(np.exp(1j*np.array(stack)))
# Detect the first jump from -π to π
wh = np.where(np.abs(stack[1:]-stack[:-1]) > np.pi)[0][0]
fig = plt.figure(figsize=(10, 7))
ax = fig.gca()
# One vertical line per day
for i in range(total_days):
ax.plot([i,i], [-np.pi, np.pi], color='#eeeeee')
# One horizontal line per principal phase
pi = np.zeros(stack.size)
phase = ['Pleine lune', 'Premier quartier', 'Nouvelle lune',
'Dernier quartier', 'Pleine lune']
for i in range(5):
plt.plot((i-2)*np.pi/2 + pi, '--', color="#aaaaaa")
plt.annotate(phase[i], (5, (i-2)*np.pi/2 + .1), )
# The illumination angle of the Moon, as seen from Earth
plt.plot(list(range(wh+1)), stack[:wh+1], color="#f13a31")
plt.plot(list(range(wh+1, total_days)), stack[wh+1:], color="#f13a31")
# The axes
ax.set_xticks(list(range(total_days)))
ax.set_xticklabels(list(range(1, total_days+1)))
ax.xaxis.set_ticks_position('bottom')
ax.set_yticks([(i-2)*np.pi/2 for i in range(5)])
ax.set_yticklabels(["- π", "- π/2", "0", "π/2", "π"])
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('axes', -0.02))
# The title
month_name = [None, "janvier", "février", "mars",
"avril", "mai", "juin","juillet",
"août", "septembre", "octobre",
"novembre", "décembre"]
ax.set_title("Les phases de la Lune en {} {}".format(month_name[today.month], today.year))
ax.set_frame_on(False)
import geodesy.wgs84 as geo
from skyfield.api import utc
now = datetime.datetime.now(utc)
t = ts.utc(now)
sun_pos = earth.at(t).observe(planets['sun'])
print(sun_pos.position) # by default in astronomic units
print(sun_pos.position.m)
lat, lon, _ = geo.cartesian_to_geodesic(*sun_pos.position.m)
print("lat: {:.4f} deg lon: {:.4f} deg".format(lat, (lon - 15 * t.gmst))) # 15 * 24 h = 360 deg
# %load solutions/between_m180_180.py
# for the example
from skyfield.api import EarthSatellite
iss_text = """
ISS (ZARYA)
1 25544U 98067A 14273.50403866 .00012237 00000-0 21631-3 0 1790
2 25544 51.6467 297.5710 0002045 126.1182 27.2142 15.50748592907666
"""
line1, line2 = iss_text.splitlines()[-2:]
iss = EarthSatellite(line1, line2)
iss
# compute present position (makes no sense after so many years...)
iss.at(ts.utc(now)).position.m
# %load solutions/requests.py
# %load solutions/iss_track.py
# %load solutions/terminator.py
# %load solutions/compute_next_pass.py
# %load solutions/map_next_pass.py
# The date in question
now = datetime.datetime(2017, 1, 14, tzinfo=utc)
# GPS coordinates from their website
esac_madrid = earth + Topos(40.4438, -3.9529)
# Archived TLE
iss = EarthSatellite(
"1 25544U 98067A 17013.66453757 .00002774 00000-0 49270-4 0 9991",
"2 25544 51.6436 88.6266 0007381 79.9762 16.7314 15.54061850 37728"
)
visible = passing_over(
now, esac_madrid,
lambda iss_alt, sun_alt: iss_alt.degrees > 10,
horizon=datetime.timedelta(days=1),
timestep=datetime.timedelta(minutes=1)
)
# Compute a new track (and keep additional parameters)
MoonPoint = namedtuple(
"MoonPoint", ["iss_alt", "iss_az", "moon_alt", "moon_az", "localtime"]
)
def moon_track(start, position):
track = []
moon = planets["moon"]
for k in range(1200): # 10 minutes à 0.5 secondes
t = ts.utc(start + k * datetime.timedelta(seconds=.5))
iss_alt, iss_az, _ = position.at(t).observe(earth + iss).apparent().altaz()
moon_alt, moon_az, _ = position.at(t).observe(moon).apparent().altaz()
if iss_alt.degrees > 10:
point = MoonPoint(
iss_alt,
iss_az,
moon_alt,
moon_az,
t.astimezone(pytz.timezone("Europe/Madrid")).strftime("%H:%M:%S"),
)
track.append(point)
return track
# Compute the track of the pass over
track = moon_track(visible[0][0] - datetime.timedelta(minutes=3), esac_madrid)
small_track = track[368:372]
fig = plt.figure()
ax = fig.gca()
plt.plot(
[t.iss_az.degrees for t in small_track],
[t.iss_alt.degrees for t in small_track],
"-o",
color="#aaaaaa",
)
for t in small_track:
c = plt.Circle(
(t.moon_az.degrees, t.moon_alt.degrees),
radius=0.25,  # is 0.25 the right radius?
facecolor="#d4cf6a",
edgecolor="#7d7813",
)
ax.add_patch(c)
ax.annotate(t.localtime, (t.iss_az.degrees, t.iss_alt.degrees + 0.1))
ax.axis("scaled")
ax.set_xlim((154, 157.5))
ax.set_ylim((61, 63))
ax.set_frame_on(False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise statement 1
Step2: For this question, a few students first checked that n was smaller than 2014. That is not really necessary.
Step3: range(0,10) goes from 0 to 10 excluded.
Step4: Q3
Step5: On this question, two mistakes are regularly made by people who are starting to program
Step6: Using a loop was not necessary; a simple test is enough.
Step7: Q3
Step8: A few students wrote something like
Step9: Although this is perfectly correct, sorting a copy of the array requires useless computations. The cost of a sort is $O(n \ln n)$ whereas testing whether the array is sorted is at worst $O(n)$, since a single pass over it is enough. I add one last remark
Step10: The reason is that the statement res = tab creates another variable res, but it does not copy the list. It only gives another name to the same list. Thus
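As a minimal extra illustration (not from the original notebook), an explicit copy avoids the aliasing demonstrated in the code below.
tab = [3, 1, 2]
alias = tab          # same list object: changes made through alias are visible through tab
copy = tab.copy()    # independent copy (list(tab) or tab[:] work as well)
alias.sort()
print(tab, copy)     # [1, 2, 3] [3, 1, 2]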
| <ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
def mul2014(n):
return 1 if n % 2014 == 0 else 0
print(mul2014(2014), mul2014(2015))
import math
min ( math.cos(i) for i in range(1,11) )
list(range(0,10))
def symetrie(s):
i=0
j=len(s)-1
while i < j :
if s[i] != s[j] : return False
i += 1
j -= 1
return True
print(symetrie("kayak"), symetrie("kakaks"))
def contient_A(s):
return 1 if "A" in s else 0
print(contient_A("e"))
import math
x = 0.3
sum ( x**(2*i) / math.sin(i) for i in range(1,11) )
def est_trie(tab):
for i in range(1,len(tab)):
if tab[i-1] > tab[i] : return False
return True
est_trie( [1]),est_trie( [1,2,3]),est_trie( [1,2,3,0])
def est_trie(tab):
res = tab.copy()
res.sort()
return res == tab
def est_trie_nocopy(tab):
res = tab
res.sort()
return res == tab
t = [ 0,1,2 ]
print( est_trie(t), est_trie_nocopy(t) )
t = [ 0,1,2,0 ]
print( est_trie(t), est_trie_nocopy(t) )
t = [0,1,2]
t2 = t
t2[0] = 1000000000
t,t2
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stats Quality for 2016 College Nationals
Step2: Since we should already have the data downloaded as csv files in this repository, we will not need to re-scrape the data. Omit this cell to directly download from the USAU website (may be slow).
Step3: Let's take a look at the games for which the sum of the player goals/assists is less than the final score of the game
Step4: There are a total of 69 unreported scorers and 86 unreported assisters (although it's possible some of those 17 scores were callahans). At a quick glance a lot of these missing results are from less important games, such as the Machine-Madison Club placement game.
Step5: All games had reported turnovers
| <ASSISTANT_TASK:>
Python Code:
import usau.reports
import usau.fantasy
from IPython.display import display, HTML
import pandas as pd
pd.options.display.width = 200
pd.options.display.max_colwidth = 200
pd.options.display.max_columns = 200
def display_url_column(df):
"""Helper for formatting url links."""
df.url = df.url.apply(lambda url: "<a href='{base}{url}'>Match Report Link</a>"
.format(base=usau.reports.USAUResults.BASE_URL, url=url))
display(HTML(df.to_html(escape=False)))
# Read data from csv files
usau.reports.club_nats_men_2016.load_from_csvs()
usau.reports.club_nats_mixed_2016.load_from_csvs()
usau.reports.club_nats_women_2016.load_from_csvs()
missing_tallies = pd.concat([usau.reports.club_nats_men_2016.missing_tallies,
usau.reports.club_nats_mixed_2016.missing_tallies,
usau.reports.club_nats_women_2016.missing_tallies,
])
display_url_column(missing_tallies[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
(missing_tallies["Score"] - missing_tallies["Gs"]).sum(), (missing_tallies["Score"] - missing_tallies["As"]).sum()
men_matches = usau.reports.club_nats_men_2016.match_results
mixed_matches = usau.reports.club_nats_mixed_2016.match_results
women_matches = usau.reports.club_nats_women_2016.match_results
display_url_column(pd.concat([men_matches[(men_matches.Ts == 0) & (men_matches.Gs > 0)],
mixed_matches[(mixed_matches.Ts == 0) & (mixed_matches.Gs > 0)],
women_matches[(women_matches.Ts == 0) & (women_matches.Gs > 0)]])
[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tokenizing
Step2: Stop words
Step3: Stemming
Step4: Part of Speech Tagging
Step6: Chunking
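Since Tree.draw() opens a GUI window (which the code below notes does not work in its Docker/Jupyter setup), a text-only way to inspect the chunks is sketched here; it assumes the chunked tree produced inside the loop below.
from nltk.chunk.util import tree2conlltags

print(chunked)                  # bracketed text representation of the chunk tree
print(tree2conlltags(chunked))  # (word, POS tag, IOB chunk tag) triples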
| <ASSISTANT_TASK:>
Python Code:
import nltk
from nltk import tokenize
# TODO: we don't really want to download packages each time we launch this script,
# so it would be better to check whether the packages are already present - or download on demand
# nltk.download()
example = 'Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. ' \
'The sky is pinkish-blue. You shouldn\'t eat cardboard.'
tokenize.sent_tokenize(example)
tokenize.word_tokenize(example)
from nltk import corpus, tokenize
example_sentence = 'This is a sample sentence, showing off the stop words filtration.'
stop_words = set(corpus.stopwords.words('english'))
words = tokenize.word_tokenize(example_sentence)
filtered_sentence = [w for w in words if w not in stop_words]
print(filtered_sentence)
from nltk import stem, tokenize
ps = stem.PorterStemmer()
example_words = ['python', 'pythoner', 'pythoning', 'pythoned', 'pythonly', 'pythonic', 'pythonista']
['{} --> {}'.format(w, ps.stem(w)) for w in example_words]
example_text = 'It is important to by very pythonly while you are pythoning with python. '\
'All pythoners have pythoned poorly at least once.'
['{} --> {}'.format(w, ps.stem(w)) for w in tokenize.word_tokenize(example_text)]
import nltk
from nltk import corpus, tokenize
train_text = corpus.state_union.raw('2005-GWBush.txt')
sample_text = corpus.state_union.raw('2006-GWBush.txt')
# Map tag to description, useful for annotations
tag_to_description = {
'CC': 'coordinating conjunction',
'CD': 'cardinal digit',
'DT': 'determiner',
'EX': 'existential there (like: "there is" ... think of it like "there exists")',
'FW': 'foreign word',
'IN': 'preposition/subordinating conjunction',
'JJ': 'adjective "big"',
'JJR': 'adjective, comparative "bigger"',
'JJS': 'adjective, superlative "biggest"',
'LS': 'list marker 1)',
'MD': 'modal could, will',
'NN': 'noun, singular "desk"',
'NNS': 'noun plural "desks"',
'NNP': 'proper noun, singular "Harrison"',
'NNPS': 'proper noun, plural "Americans"',
'PDT': 'predeterminer "all tdhe kids"',
'POS': 'possessive ending parent"s',
'PRP': 'personal pronoundß I, he, she',
'PRP$': 'possessive pronoun my, his, hers',
'RB': 'adverb very, silently,',
'RBR': 'adverb, comparative better',
'RBS': 'adverb, superlative best',
'RP': 'particle give up',
'TO': 'to go "to" the store.',
'UH': 'interjection errrrrrrrm',
'VB': 'verb, base form take',
'VBD': 'verb, past tense took',
'VBG': 'verb, gerund/present participle taking',
'VBN': 'verb, past participle taken',
'VBP': 'verb, sing. present, non-3d take',
'VBZ': 'verb, 3rd person sing. present takes',
'WDT': 'wh-determiner which',
'WP': 'wh-pronoun who, what',
'WP$': 'possessive wh-pronoun whose',
'WRB': 'wh-abverb where, when',
}
from collections import Counter
from operator import itemgetter, attrgetter
custom_sent_tokenizer = tokenize.PunktSentenceTokenizer(train_text)
tokenized_text = custom_sent_tokenizer.tokenize(sample_text)
total_counts = Counter()
for i in tokenized_text[:5]:
words = nltk.word_tokenize(i)
tagged = nltk.pos_tag(words)
print('# Sentence:')
print(i)
print('# Words:')
print(words)
print('# Tagged:')
print(tagged)
counts = Counter(tag for word, tag in tagged)
total_counts += counts
print('\n')
total = sum(total_counts.values())
freq = dict((word, float(count) / total) for word, count in sorted(total_counts.items()))
print('# Counts:')
print('\n\n-----\n\n'.join(['{}\n[{}] {}'.format(f, tag, tag_to_description.get(tag, tag)) for tag, f in sorted(freq.items(), key=itemgetter(1), reverse=True)]))
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
chunkGram = r"Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"
chunkParser = nltk.RegexpParser(chunkGram)
for i in tokenized_text[:5]:
words = nltk.word_tokenize(i)
tagged = nltk.pos_tag(words)
chunked = chunkParser.parse(tagged)
# TODO: should fix it
# I'm running Jupyter inside Docker, so maybe that is why it doesn't work :(
# I've found this one https://stackoverflow.com/questions/31779707/how-do-you-make-nltk-draw-trees-that-are-inline-in-ipython-jupyter
# but I haven't checked it yet.
chunked.draw()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's go over the columns
Step2: Now suppose we want a DataFrame of the Blaze Data Object above, but only want the asof_date, repurchase_units, and the repurchase_amount.
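One way to keep only those columns is to project them on the Blaze expression itself before converting, a sketch assuming the buybacks expression built in the code below.
# Project the three fields on the Blaze expression, then materialize with odo.
subset = buybacks[['asof_date', 'repurchase_units', 'repurchase_amount']]
df = odo(subset, pd.DataFrame)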
| <ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import share_repurchases
# or if you want to import the free dataset, use:
# from quantopian.interactive.data.eventvestor import share_repurchases_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
share_repurchases.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
share_repurchases.count()
# Let's see what the data looks like. We'll grab the first three rows.
share_repurchases[:3]
# get apple's sid first
apple_sid = symbols('AAPL').sid
buybacks = share_repurchases[('2013-12-31' < share_repurchases['asof_date']) &
(share_repurchases['asof_date'] <'2015-01-01') &
(share_repurchases.sid == apple_sid)]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
buybacks.sort('asof_date')
df = odo(buybacks, pd.DataFrame)
df = df[['asof_date','repurchase_amount','repurchase_units']]
df
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: import the new haven report card module
Step2: now determine the root directory for the repo
Step3: read in the issue data from file (to speed things up)
Step4: now determine the neighborhoods for each issue using the get_neighborhoods routine
Step5: now add the neighborhoods to the DataFrame
Step6: Statistics to calculate
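Beyond the overall medians computed below, a natural per-neighborhood summary is sketched here, assuming the scf_df columns created in the code that follows.
# Median response times (in days) per neighborhood, sorted by slowest closure.
by_hood = (scf_df
           .groupby('neighborhood')[['time_to_acknowledge', 'time_to_close']]
           .median()
           .sort_values('time_to_close', ascending=False))
print(by_hood.head())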
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import nhrc2
from nhrc2.backend import get_neighborhoods as get_ngbrhd
from nhrc2.backend import read_issues as ri
import pandas as pd
import numpy as np
nhrc2dir = '/'.join(str(nhrc2.__file__).split('/')[:-1])+'/'
scf_df_cat = ri.read_categories()
scf_df_cat
scf_df_cat[scf_df_cat['organization'] == 'City of New Haven']
readfile=False
writejson=False
scf_df = ri.get_issues(readfile=readfile, writejson=writejson)
hoods = get_ngbrhd.get_neighborhoods()
scf_df['neighborhood'] = hoods
scf_df.columns
scf_df.loc[0:1, 'created_at']
pd.to_datetime(scf_df.loc[0, 'created_at'])
scf_df['time_to_acknowledge'] = (pd.to_datetime(scf_df['acknowledged_at']) - pd.to_datetime(scf_df['created_at']))/pd.Timedelta('1d')
scf_df['time_to_close'] = (pd.to_datetime(scf_df['closed_at']) - pd.to_datetime(scf_df['created_at']))/pd.Timedelta('1d')
scf_df.loc[0:1, 'time_to_acknowledge']
np.median(scf_df['time_to_acknowledge'].values)
np.median(scf_df['time_to_close'].values)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the file into pandas, and drop all rows without a GPS fix
Step2: Find the Lat/Lon bounding box and create a new map from the osmapping library
Step3: Import the shapefiles from Mapzen for Boston
Step4: Select most road-types and some parks for plotting
Step5: Plot the basemap and then overlay the trip trace
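For Step2, an equivalent pandas-only sketch (reusing the trips list and the buffr padding defined in the code below) computes the padded bounding box in one pass.
import pandas as pd
coords = pd.concat([t[['Longitude', 'Latitude']] for t in trips])
ll = (coords.min() - buffr).tolist()  # lower-left corner (lon, lat)
ur = (coords.max() + buffr).tolist()  # upper-right corner (lon, lat)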
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import osmapping
import glob
%matplotlib inline
dname = '/Users/astyler/projects/torquedata/'
trips = []
fnames = glob.glob(dname+'*.csv')
for fname in fnames:
trip = pd.read_csv(fname, na_values=['-'],encoding ='U8',index_col=False, header=False, names=['GPSTime','Time','Longitude','Latitude','GPSSpeed','GPSError','Altitude','Bearing','Gx','Gy','Gz','G','Az','Ay','Ax','A','Power','Accuracy','Satellites','GPSAltitude','GPSBearing','Lat2','Lon2','OBDSpeed','GPSSpeedkmhr'])
trip = trip.dropna(subset = ['Longitude','Latitude'])
trips.append(trip)
fnames
buffr = 0.01
mins=[(min(trip.Longitude) -buffr,min(trip.Latitude)-buffr) for trip in trips]
maxs=[(max(trip.Longitude) + buffr,max(trip.Latitude)+buffr) for trip in trips]
ll = map(min,zip(*mins))
ur = map(max,zip(*maxs))
print ll
print ur
mymap = osmapping.MLMap(ll,ur)
for trip in trips:
trip['x'], trip['y'] = mymap.convert_coordinates(trip[['Longitude','Latitude']].values).T
reload(osmapping)
mymap.load_shape_file('./shapefiles/boston/line.shp')
mymap.load_shape_file('./shapefiles/boston/polygon.shp')
mymap.shapes.shape
coords = [(79,80),(15,24)]
print zip(*coords)
print zip(*[(1,1),(2,2)])
#print mymap.basemap([79,15],[80,24])
print mymap.basemap(79,80)
print mymap.basemap(15,24)
print zip(*mymap.basemap(*zip(*coords)))
mymap.clear_selected_shapes()
road = {'edgecolor':'white','lw':3, 'facecolor':'none','zorder':6};
mymap.select_shape('highway','motorway',**road)
mymap.select_shape('highway','trunk',**road)
mymap.select_shape('highway','primary',**road)
mymap.select_shape('highway','secondary',**road)
mymap.select_shape('highway','tertiary',**road)
mymap.select_shape('highway','residential',**road)
mymap.select_shape('leisure','park',facecolor='#BBDDBB',edgecolor='none',zorder=4)
mymap.select_shape('waterway','riverbank',facecolor='#0044CC', edgecolor='none', zorder=5)
mymap.select_shape('natural','water',facecolor='#CCCCEE', edgecolor='none', zorder=5)
bselect = lambda x: x['building'] in ['yes', 'apartments', 'commercial', 'house', 'residential', 'university', 'church', 'garage']
bldg = {'facecolor':'none', 'edgecolor':'#dedede', 'hatch':'////','zorder':7}
mymap.select_shapes(bselect, **bldg)
for trip in trips:
trip.loc[trip.Satellites < 5,'Satellites'] = None
trip.loc[trip.Accuracy > 20,'Accuracy'] = None
trip.dropna(subset=['Accuracy'], inplace=True)
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111)
mymap.draw_map(ax, map_fill='#eeeeee')
for (idx,trip) in enumerate(trips):
ax.plot(trip.x, trip.y, lw=2, alpha=1,zorder=99, label=str(idx))
plt.legend()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The tricky histogram with pre-counted data
Step2: Q
Step3: As you can see, the default histogram does not normalize with binwidth and simply shows the counts! This can be very misleading if you are working with variable bin width (e.g. logarithmic bins). So please be mindful about histograms when you work with variable bins.
Step4: Let's use an actual dataset
Step5: Note
Step6: Let's plot the histogram of IMDB ratings.
Step7: Did you get an error or a warning? What's going on?
Step8: As you can see there are a bunch of missing rows. You can count them.
Step9: or drop them.
Step10: The dropna can be applied to the dataframe too.
Step11: How does matplotlib decide the bins? Actually matplotlib's hist function uses numpy's histogram function under the hood.
Step12: Have you noticed that this function returns three objects? Take a look at the documentation here to figure out what they are.
Step13: Here, n_raw contains the values of histograms, i.e., the number of movies in each of the 10 bins. Thus, the sum of the elements in n_raw should be equal to the total number of movies.
Step14: The second returned object (bins_raw) is a list containing the edges of the 10 bins
Step15: The width is the same as the maximum value minus the minimum value, divided by 10.
Step16: Now, let's plot a normalized (density) histogram.
Step17: The ten bins do not change. But now n represents the density of the data inside each bin. In other words, the sum of the area of each bar will equal to 1.
Step18: Anyway, the data produced by the hist function is computed by numpy's histogram function. https
Step19: If you look at the documentation, you can see that numpy uses simply 10 as the default number of bins. But you can set it manually or set it to be auto, which is the "Maximum of the sturges and fd estimators.". Let's try this auto option.
Step20: Consequences of the binning parameter
Step21: What does the argument in plt.subplot(1,2,1) mean? If you're not sure, check out
Step22: Do you see the issues with having too few bins or too many bins? In particular, do you notice weird patterns that emerge from bins=30?
Step23: Formulae for choosing the number of bins.
Step24: But we can also use built-in formulae too. Let's try all of them.
Step25: Some are decent, but several of them tend to overestimate the good number of bins. As you have more data points, some of the formulae may overestimate the necessary number of bins. Particularly in our case, because of the precision issue, we shouldn't increase the number of bins too much.
Step26: You can also combine with options such as histtype and density.
Step27: And increase the number of bins.
Step28: This method works fine. By increasing the number of bins, you can get a CDF in the resolution that you want. But let's also try it manually to better understand what's going on. First, we should sort all the values.
Step29: We need to know the number of data points,
Step30: And I think this may be useful for you.
Step31: Q
Step32: A bit more histogram with altair
Step33: As mentioned before, in altair histogram is not special. It is just a plot that use bars (mark_bar()) where X axis is defined by IMDB_Rating with bins (bin=True), and Y axis is defined by count() aggregation function.
Step34: Have you noted that it is IMDB_Rating
Step35: In altair, you want to specify the data types using one of the four categories
Step36: Composing charts in altair
Step37: Vertical composition?
Step38: Shall we avoid some repetitions? You can define a base empty chart first and then assign encodings later when you put multiple charts together. Here is an example
Step39: Q
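For the manual CDF question (the '# Implement' placeholder in the code below), a minimal sketch of one possible implementation plots the sorted ratings against evenly spaced quantile levels.
rating_sorted = movies['IMDB_Rating'].dropna().sort_values()
N = len(rating_sorted)
quantile_levels = np.linspace(1 / N, 1.0, num=N)
plt.plot(rating_sorted.values, quantile_levels)
plt.xlabel('IMDB rating')
plt.ylabel('Cumulative fraction of movies')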
| <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import altair as alt
import pandas as pd
import matplotlib
matplotlib.__version__
bins = [0, 1, 3, 5, 10, 24]
data = {0.5: 4300, 2: 6900, 4: 4900, 7: 2000, 15: 2100}
data.keys()
# TODO: draw a histogram with weighted data.
# TODO: fix it with density option.
import vega_datasets
vega_datasets.__version__
movies = vega_datasets.data.movies()
movies.head()
try:
plt.hist(movies['IMDB_Rating'])
except KeyError as e:
print("movies has given KeyError: ", str(e))
movies['IMDB_Rating'].isna()
sum(movies['IMDB_Rating'].isna())
IMDB_ratings_nan_dropped = movies['IMDB_Rating'].dropna()
len(IMDB_ratings_nan_dropped)
213 + 2988
# TODO
# Both should be zero.
print(sum(movies['IMDB_Rating'].isna()), sum(movies['IMDB_Votes'].isna()))
# TODO
# TODO: Replace dummy value of below variables with actual values.
n_raw, bins_raw, patches = [1., 2.], [0.5, 1.0], 0.
print(n_raw)
print(bins_raw)
# TODO: test whether the sum of the numbers in n_raw is equal to the number of movies.
np.diff(bins_raw)
min_rating = min(movies['IMDB_Rating'])
max_rating = max(movies['IMDB_Rating'])
print(min_rating, max_rating)
print( (max_rating-min_rating) / 10 )
n, bins, patches = plt.hist(movies['IMDB_Rating'], density=True)
print(n)
print(bins)
# TODO
# TODO: If below code gives ValueError with NaN, then there are still missing values in IMDB_Rating and you must remove it.
try:
np.histogram(movies['IMDB_Rating'])
except ValueError as e:
print("Resulted in ValueError:", str(e))
plt.hist(movies['IMDB_Rating'])
_ = plt.hist(movies['IMDB_Rating'], bins='auto')
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
movies['IMDB_Rating'].hist(bins=3)
plt.subplot(1,2,2)
movies['IMDB_Rating'].hist(bins=20)
nbins = [2, 3, 5, 10, 30, 40, 60, 100 ]
figsize = (18, 10)
# TODO
# TODO: Provide your answer and evidence here
N = len(movies)
plt.figure(figsize=(12,4))
# Sqrt
nbins = int(np.sqrt(N))
plt.subplot(1,3,1)
plt.title("SQRT, {} bins".format(nbins))
movies['IMDB_Rating'].hist(bins=nbins)
# Sturge's formula
nbins = int(np.ceil(np.log2(N) + 1))
plt.subplot(1,3,2)
plt.title("Sturge, {} bins".format(nbins))
movies['IMDB_Rating'].hist(bins=nbins)
# Freedman-Diaconis
# TODO: If below code gives ValueError with NaN, then there are still missing values in IMDB_Rating and you must remove it.
try:
iqr = np.percentile(movies['IMDB_Rating'], 75) - np.percentile(movies['IMDB_Rating'], 25)
width = 2*iqr/np.power(N, 1/3)
nbins = int((max(movies['IMDB_Rating']) - min(movies['IMDB_Rating'])) / width)
plt.subplot(1,3,3)
plt.title("F-D, {} bins".format(nbins))
movies['IMDB_Rating'].hist(bins=nbins)
except ValueError as e:
print("Resulted in ValueError:", str(e))
plt.figure(figsize=(20,4))
plt.subplot(161)
movies['IMDB_Rating'].hist(bins='fd')
plt.subplot(162)
movies['IMDB_Rating'].hist(bins='doane')
plt.subplot(163)
movies['IMDB_Rating'].hist(bins='scott')
plt.subplot(164)
movies['IMDB_Rating'].hist(bins='rice')
plt.subplot(165)
movies['IMDB_Rating'].hist(bins='sturges')
plt.subplot(166)
movies['IMDB_Rating'].hist(bins='sqrt')
movies['IMDB_Rating'].hist(cumulative=True)
movies['IMDB_Rating'].hist(histtype='step', cumulative=True, density=True)
movies['IMDB_Rating'].hist(cumulative=True, density=True, bins=1000)
rating_sorted = movies['IMDB_Rating'].sort_values()
rating_sorted.head()
N = len(rating_sorted)
N
n = 50
np.linspace(1/n, 1.0, num=n)
# Implement
vega_datasets.data.movies.url
# Choose based on your environment
#alt.renderers.enable('notebook')
#alt.renderers.enable('jupyterlab')
#alt.renderers.enable('default')
alt.Chart(vega_datasets.data.movies.url).mark_bar().encode(
alt.X("IMDB_Rating:Q", bin=True),
alt.Y('count()')
)
alt.Chart(vega_datasets.data.movies.url).mark_bar().encode(
alt.X('IMDB_Rating', type='quantitative', bin=True),
alt.Y(aggregate='count', type='quantitative')
)
from altair import Bin
alt.Chart(vega_datasets.data.movies.url).mark_bar().encode(
alt.X("IMDB_Rating:Q", bin=Bin(step=0.09)),
alt.Y('count()')
)
alt.Chart(vega_datasets.data.movies.url).mark_bar().encode(
alt.X("IMDB_Rating:Q", bin=Bin(nice=True, maxbins=20)),
alt.Y('count()')
)
chart1 = alt.Chart(vega_datasets.data.movies.url).mark_bar().encode(
alt.X("IMDB_Rating:Q", bin=Bin(step=0.1)),
alt.Y('count()')
).properties(
width=300,
height=150
)
chart2 = alt.Chart(vega_datasets.data.movies.url).mark_bar().encode(
alt.X("IMDB_Rating:Q", bin=Bin(nice=True, maxbins=20)),
alt.Y('count()')
).properties(
width=300,
height=150
)
chart1 | chart2
alt.hconcat(chart1, chart2)
alt.vconcat(chart1, chart2)
chart1 & chart2
base = alt.Chart().mark_bar().encode(
alt.X("IMDB_Rating:Q", bin=Bin(nice=True, maxbins=20)),
alt.Y('count()')
).properties(
width=300,
height=150
)
chart = alt.vconcat(data=vega_datasets.data.movies.url)
for bin_param in [Bin(step=0.1), Bin(nice=True, maxbins=20)]:
row = alt.hconcat()
row |= base.encode(x=alt.X("IMDB_Rating:Q", bin=bin_param), y='count()')
chart &= row
chart
# TODO
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Problem setting
Step5: Graph Laplacian
Step6: Fourier basis
Step8: Ground truth graph filter
Step9: Graph signals
Step10: Non-parametrized filter learning
Step11: Optimization
Step15: Optimization
Step16: Results
Step18: Parametrized filter learning
Step22: Ground truth Chebyshev expansion's coefficients
Step23: Polynomial order
Step24: Choose the polynomial order $K$ and compute the basis $T$ with their associate coefficients c_cg.
Step26: Loss function
Step27: Optimality condition
Step28: Stochastic gradient descent
Step29: Results
Step32: Filter learning
Step33: Clenshaw's method
Step34: Testing polynomials evaluation and filtering
Step35: Loss function and optimality condition
Step 2
Step36: A direct solution is available through the computation of the pseudo-inverse.
Step37: Stochastic gradient descent
Step38: Results
Step39: Filter learning
Step40: Lanczos basis method
Step41: Fast Lanczos method
Step42: Stochastic gradient descent
Step43: Results
Step44: The optimal spectral filter c_o has different optimal representations in the Lanczos bases, depending on the signal.
Step45: Lanczos vs Chebyshev
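As a compact reference for the Chebyshev part implemented below (cheby_basis_eval and filter_basis), with the rescaled Laplacian $\tilde{L} = 2L/\lambda_{max} - I$: $\bar{x}_0 = x$, $\bar{x}_1 = \tilde{L}x$, $\bar{x}_k = 2\tilde{L}\bar{x}_{k-1} - \bar{x}_{k-2}$, and the filtered signal is $y = \sum_{k=0}^{K-1} c_k \bar{x}_k$, which costs $O(KMN)$ operations without ever computing the Fourier basis.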
| <ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import scipy.sparse, scipy.sparse.linalg, scipy.spatial.distance
import matplotlib.pyplot as plt
%matplotlib inline
tol = 1e-10
M = 100 # nodes
k = 4 # edges per vertex
def graph_random():
"""Random connections and weights."""
I = np.arange(0, M).repeat(k)
J = np.random.randint(0, M, M*k)
V = np.random.uniform(0, 1, M*k)
W = scipy.sparse.coo_matrix((V, (I, J)), shape=(M, M))
return graph_regularize(W)
def graph_grid():
"""Construct a kNN graph arranged on a 2D grid."""
# Construct a grid.
m = np.int(np.sqrt(M))
x = np.linspace(0,1,m)
y = np.linspace(0,1,m)
xx, yy = np.meshgrid(x, y)
z = np.empty((M,2))
z[:,0] = xx.reshape(M)
z[:,1] = yy.reshape(M)
# Compute pairwise distances.
d = scipy.spatial.distance.pdist(z, 'euclidean')
d = scipy.spatial.distance.squareform(d)
# k-NN graph.
idx = np.argsort(d)[:,1:k+1]
d.sort()
d = d[:,1:k+1]
# Weights.
sigma2 = np.mean(d[:,-1])**2
d = np.exp(- d**2 / sigma2)
# Weight matrix.
I = np.arange(0, M).repeat(k)
J = idx.reshape(M*k)
V = d.reshape(M*k)
W = scipy.sparse.coo_matrix((V, (I, J)), shape=(M, M))
return graph_regularize(W)
def graph_regularize(W):
# No self-connections.
W.setdiag(0)
# Non-directed graph.
bigger = W.T > W
W = W - W.multiply(bigger) + W.T.multiply(bigger)
del bigger
assert np.abs(W - W.T).mean() < tol
# CSR sparse matrix format for efficient multiplications.
W = W.tocsr()
W.eliminate_zeros()
return W
W = graph_random()
W = graph_grid()
print("{} > {} edges".format(W.nnz, M*k))
def laplacian(W, normalized=True):
"""Return the Laplacian of the weight matrix."""
# Degree matrix.
d = W.sum(axis=0)
# Laplacian matrix.
if not normalized:
D = scipy.sparse.diags(d.A.squeeze(), 0)
L = D - W
else:
d = 1 / np.sqrt(d)
D = scipy.sparse.diags(d.A.squeeze(), 0)
I = scipy.sparse.identity(M, dtype=D.dtype)
L = I - D * W * D
# Upper-bound on the spectrum.
lmax = scipy.sparse.linalg.eigsh(L, k=1, which='LM', return_eigenvectors=False)[0]
if normalized:
assert lmax <= 2
lmax = 2
return L, lmax
t_start = time.process_time()
LL, lmax = laplacian(W, True)
print('Execution time: {:1.0f}ms'.format((time.process_time() - t_start) * 1000))
assert np.abs(LL - LL.T).mean() < tol
def fourier(L):
def sort(lamb, U):
idx = lamb.argsort()
return lamb[idx], U[:,idx]
t_start = time.process_time()
lamb, U = np.linalg.eig(LL.toarray())
lamb, U = sort(lamb, U)
print('Execution time: {:1.0f}ms'.format((time.process_time() - t_start) * 1000))
# Orthonormal basis: all basis vectors are of length one (unit energy).
np.testing.assert_allclose(np.linalg.norm(U, axis=0), np.ones(M))
np.testing.assert_allclose(np.linalg.norm(U, axis=1), np.ones(M))
if M <= 100: # Because of the computational complexity.
lamb_, U_ = np.linalg.eigh(LL.toarray())
np.testing.assert_allclose(lamb_, lamb, atol=tol)
np.testing.assert_allclose(np.abs(U_), np.abs(U), atol=tol)
lamb_, U_ = scipy.sparse.linalg.eigs(LL, k=M-2, which='SM')
lamb_, U_ = sort(lamb_, U_)
np.testing.assert_allclose(lamb[:-2], lamb_, atol=tol)
np.testing.assert_allclose(np.abs(U[:,:-2]), np.abs(U_), atol=tol)
lamb_, U_ = scipy.sparse.linalg.eigsh(LL, k=M-1, which='SM')
np.testing.assert_allclose(lamb[:-1], lamb_, atol=tol)
np.testing.assert_allclose(np.abs(U[:,:-1]), np.abs(U_), atol=tol)
return lamb, U
lamb, U = fourier(LL)
print('Spectrum: [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))
np.testing.assert_allclose(lamb[0], 0, atol=tol)
assert lamb[-1] <= lmax + tol
def plot_eigenvectors(U, nrows, ncols):
fig, axes = plt.subplots(nrows, ncols, figsize=(15,5.5))
m = np.int(np.sqrt(M))
n = nrows * ncols
vmin, vmax = U[:,:n].min(), U[:,:n].max()
for i, ax in enumerate(axes.flat):
x = np.reshape(U[:,i], (m,m))
im = ax.imshow(x, vmin=vmin, vmax=vmax)
ax.set_title('Eigenvector u_{}'.format(i))
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
fig.colorbar(im, cax=cax)
plt.show()
plot_eigenvectors(U, 2, 5)
def gen_filter(type='step', t=2):
if type is 'random':
return np.random.normal(0, 1, M), None
elif type is 'step':
g = lambda x: np.minimum(np.array(x < lmax/2) + 0.5, 1)
elif type is 'sin':
g = lambda x: np.sin(t * (x-2)**2)
elif type is 'heat':
g = lambda x: np.exp(-t * x)
elif type is 'hat':
g = lambda x: t * x * np.exp(-x**2)
return g(lamb), g
c_g, g = gen_filter('step', 6)
def filter_full(X, c):
"""Filter X with a full spectral domain filter."""
return U @ np.diag(c) @ U.T @ X
def plot_filtering(*args):
m = np.int(np.sqrt(M))
c = gen_filter(*args)[0]
x1 = np.zeros((m,m))
x2 = np.zeros((m,m))
x1[5,5] = 1
x2[5,:] = 1
x1.shape = M
x2.shape = M
fig, axes = plt.subplots(1,5, figsize=(15,5))
axes[0].imshow((U @ c).reshape((m,m)))
axes[0].set_title('Filter')
for i, x in enumerate([x1, x2]):
axes[2*i+1].imshow(x.reshape((m,m)), vmin=0, vmax=1)
axes[2*i+1].set_title('Signal')
y = filter_full(x, c)
axes[2*i+2].imshow(y.reshape((m,m)), vmin=0, vmax=1)
axes[2*i+2].set_title('Filtered signal')
plt.show()
plot_filtering('heat', 3)
# Parseval (energy preservation).
E = np.linalg.norm(c_g)**2
print('Filter energy E = {:.1f}'.format(E))
np.testing.assert_allclose(np.linalg.norm(U @ c_g)**2, E)
N = 200 # signals
eps = 0.1 # noise
def gen_signal(spectral_mean):
Xf = np.empty((M,N))
for m in range(M):
a = spectral_mean[m]
Xf[m,:] = np.random.normal(a, a/2, N)
return U @ Xf
X = np.random.normal(0, 1, (M,N))
X = gen_signal(np.exp(-1.5*lamb))
np.testing.assert_allclose(filter_full(X, np.ones(M)), X, atol=tol)
Y = filter_full(X, c_g) + (np.random.normal(0, eps, (M,N)) if eps > 0 else 0)
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,1,1)
ax.plot(np.abs(np.mean((U.T @ X), axis=1)), label='Input signals X')
ax.plot(np.abs(np.mean((U.T @ Y), axis=1)), label='Target signals Y')
ax.plot(c_g, label='Ground-truth filter c_g')
ax.legend()
ax.set_title('Mean absolute spectrum E[ |U^T X| ]')
ax.set_xlabel('Frequency')
ax.set_ylabel('Amplitude')
ax.set_xlim(0, M-1)
ax.set_ylim(-.1, 1.1)
plt.show()
def L(c):
M, N = X.shape
return np.linalg.norm(filter_full(X, c) - Y, ord='fro')**2 / N
np.testing.assert_allclose(L(c_g), M * eps**2, 5e-2)
def dL(X, Y, c, variant=None):
M, N = X.shape
Xh = U.T @ X
Yh = U.T @ Y
# Speed: v3 >> v1 > v2.
if variant is 1:
return 2 / N * np.diag((np.diag(c) @ Xh - Yh) @ Xh.T)
elif variant is 2:
dc = np.empty(M)
for i in range(M):
dc[i] = 2 / N * (c[i] * Xh[i,:] - Yh[i,:]) @ Xh.T[:,i]
return dc
else:
# Speed: .sum(axis=1) is faster than *np.ones(N).
return 2 / N * ((c[:,np.newaxis] * Xh - Yh) * Xh).sum(axis=1)
# Gradient should be null at the global minimum. With noise, c_g is not necessary the optimum.
if eps <= 0:
np.testing.assert_allclose(dL(X, Y, c_g), 0, atol=tol)
np.testing.assert_allclose(dL(X, Y, c_g), dL(X, Y, c_g, 1))
np.testing.assert_allclose(dL(X, Y, c_g), dL(X, Y, c_g, 2))
t_start = time.process_time()
Xh = U.T @ X
Yh = U.T @ Y
c_o = (Xh * Yh).sum(axis=1) / (Xh * Xh).sum(axis=1)
print('Execution time: {:1.0f}ms'.format((time.process_time() - t_start) * 1000))
assert L(c_o) < L(c_g) + tol
assert np.linalg.norm(dL(X, Y, c_o)) < np.linalg.norm(dL(X, Y, c_g))
np.testing.assert_allclose(dL(X, Y, c_o), 0, atol=tol)
if eps <= 0:
np.testing.assert_allclose(c_o, c_g, atol=tol)
np.testing.assert_allclose(L(c_o), L(c_g), atol=tol)
def sgd(c0, L, dL, learning_rate=.1, batch_size=100, crit=1e-3, maxit=100, window=10):
"""Stochastic (mini-batch) gradient descent."""
indices = []
c = c0
loss = [L(c)]
conv = []
def stop(loss):
"""Stop after convergence of the loss."""
if len(loss) > maxit:
return True
#elif np.linalg.norm(dL(X, Y, c)) < crit:
#return True
elif len(loss) >= 2 * window:
avg1 = np.mean(loss[-window:])
avg2 = np.mean(loss[-2*window:-window])
return True if avg2 - avg1 < crit else False
else:
return False
while not stop(loss):
# Be sure to have used all the samples before using one a second time.
if len(indices) < batch_size:
new_indices = np.arange(N)
np.random.shuffle(new_indices)
indices.extend(new_indices)
idx = indices[:batch_size]
del indices[:batch_size]
c_last = c.copy()
n = len(loss) # Used when evaluating learning_rate
c -= eval(learning_rate) * dL(X[:,idx], Y[:,idx], c)
loss.append(L(c))
conv.append(np.linalg.norm(c - c_last) / np.linalg.norm(c))
return c, loss, conv
def rate(rate0, a, k):
"""Decaying learning rate w.r.t. iteration k."""
return rate0 / (1 + a * rate0 * k)
def sgd_plot_convergence(c0, L, dL, params, crit, maxit):
fig_loss = plt.figure(figsize=(15,5))
fig_conv = plt.figure(figsize=(15,5))
fig_rate = plt.figure(figsize=(15,5))
ax_loss = fig_loss.add_subplot(1,1,1)
ax_conv = fig_conv.add_subplot(1,1,1)
ax_rate = fig_rate.add_subplot(1,1,1)
c_sgd = []
mlen = 0
for param in params:
t_start = time.process_time()
c, loss, conv = sgd(c0.copy(), L, dL, param[0], param[1], crit, maxit)
t = (time.process_time() - t_start) * 1000
label = 'rate={}, size={}, L(c)={:1.2e}, |dL(c)|={:1.2e}, conv={:1.2e},time={:1.0f}ms'.format(
param[0], param[1], L(c), np.linalg.norm(dL(X, Y, c)), conv[-1], t)
n = np.arange(1, len(conv)+1)
ax_loss.plot(loss, label=label)
ax_conv.loglog(n, conv, label=label)
learning_rate = eval(param[0])
if not isinstance(learning_rate, np.ndarray):
learning_rate *= np.ones(len(n))
ax_rate.plot(n, learning_rate, label=label)
c_sgd.append(c)
mlen = max(mlen, len(loss))
n = np.arange(1,mlen)
ax_conv.semilogy(n, 1/n**2, label='O(n^-2)')
for ax in ax_loss, ax_conv, ax_rate:
ax.set_title('Convergence, M={}, N={}, eps={}'.format(M, N, eps))
ax.set_xlabel('iteration n')
ax.legend(loc='best')
ax_loss.set_ylabel('loss L(c^n)')
ax_conv.set_ylabel('coefficients convergence ||c^n - c^n-1|| / ||c^n||')
ax_rate.set_ylabel('learning rate')
ax_loss.set_xlim(0, mlen-1)
ax_conv.set_xlim(1, mlen-1)
ax_rate.set_xlim(1, mlen-1)
plt.show()
return c_sgd
params = []
params.append(['0.2', 1])
params.append(['0.2', 5])
params.append(['0.2', 50])
params.append(['0.2', 100])
params.append(['0.6', 100])
params.append(['(1*n)**-1.', 100])
params.append(['rate(.9,.1,n)', 100])
c0 = np.random.uniform(0, 1, M)
c_s = sgd_plot_convergence(c0, L, dL, params, crit=1e-3, maxit=100)
def plot_filters(coeffs):
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,1,1)
for coeff in coeffs:
c = eval(coeff)
label = '{}: L={:1.2e}, |dL|={:1.2e}'.format(coeff, L(c), np.linalg.norm(dL(X,Y,c)))
ax.plot(lamb, c, '.-', label=label)
# np.testing.assert_allclose(np.linalg.norm(c)**2, E, rtol=1e-2)
ax.set_xlim(lamb[0], lamb[-1])
ax.set_title('Filter coefficients, M={}, N={}, eps={}'.format(M, N, eps))
ax.set_xlabel('frequency')
ax.set_ylabel('amplitude')
ax.legend(loc='best')
plot_filters(['c_s[4]', 'c_s[0]', 'c_o', 'c_g'])
K = 5
def cheby_basis(K, x):
"""Return the Chebyshev basis of order K (composed of the first K polynomials) evaluated at x. Polynomials are generated by their recursive formulation."""
T = np.empty((x.size, K))
T[:,0] = np.ones(x.size)
if K >= 2:
T[:,1] = x
for k in range(2, K):
T[:,k] = 2 * x * T[:,k-1] - T[:,k-2]
# T /= np.linalg.norm(T, axis=0) # Scaling to unit norm.
return T
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,1,1)
x = np.linspace(-1,1,100)
T = cheby_basis(K, x)
for k in range(K):
ax.plot(x, T[:,k], label='T_{}, E={:.1f}'.format(k, np.linalg.norm(T[:,k])**2))
ax.set_title('Chebyshev polynomials of the first kind')
ax.set_xlabel('x')
ax.set_ylabel('T_n(x)')
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1.1)
ax.legend(loc='best')
plt.show()
K = 10
def rescale(x, reverse=False):
"""Rescale the spectral domain to [-1,1]."""
if reverse:
return x / lmax * 2 - 1
else:
return (x + 1) / 2 * lmax
np.testing.assert_allclose(lamb, rescale(rescale(lamb, True)), atol=tol)
def cheby_nodes(K):
"""Return the K Chebyshev nodes in [-1,1]."""
return np.cos(np.pi * (np.arange(K) + 1/2) / K)
def cheby_coeff(K, f):
"""Compute the coefficients of the Chebyshev polynomial approximation."""
# Coefficients from discrete orthogonality condition.
# It can be done faster via the discrete cosine transform.
c = np.empty(K)
x = cheby_nodes(K)
T = cheby_basis(K, x)
for k in range(K):
c[k] = 2 / K * np.sum(f(x) * T[:,k])
c[0] /= 2
return c
# Domain is [-1, 1].
x = np.linspace(-1,1,100)
x = rescale(lamb, True)
f = lambda x: g(rescale(x))
np.testing.assert_allclose(f(x), c_g, atol=tol)
c_cg = cheby_coeff(K, f)
np.testing.assert_allclose(f(cheby_nodes(K)), cheby_basis(K, cheby_nodes(K)) @ c_cg)
T = cheby_basis(K, x)
c_co = np.linalg.lstsq(T, c_g)[0]
plot_filters(['T @ c_co', 'T @ c_cg', 'c_g'])
plt.plot(rescale(cheby_nodes(K)), f(cheby_nodes(K)), 'k.', markersize=15, label='Chebyshev nodes');
def polynomial_order(K):
loss_cg = np.empty((K))
loss_co = np.empty((K))
approx_inf_cg = np.empty((K))
approx_2_cg = np.empty((K))
approx_inf_co = np.empty((K))
approx_2_co = np.empty((K))
kk = np.arange(1, K+1)
for k in kk:
T = cheby_basis(k, x)
c_cg = cheby_coeff(k, f)
loss_cg[k-1] = L(T @ c_cg)
approx_inf_cg[k-1] = np.linalg.norm(c_g - T @ c_cg, ord=np.inf)
approx_2_cg[k-1] = np.linalg.norm(c_g - T @ c_cg, ord=2)
c_co = np.linalg.lstsq(T, f(x))[0]
loss_co[k-1] = L(T @ c_co)
approx_inf_co[k-1] = np.linalg.norm(c_g - T @ c_co, ord=np.inf)
approx_2_co[k-1] = np.linalg.norm(c_g - T @ c_co, ord=2)
fig_loss = plt.figure(figsize=(15,5))
fig_approx = plt.figure(figsize=(15,5))
ax_loss = fig_loss.add_subplot(1,1,1)
ax_approx = fig_approx.add_subplot(1,1,1)
ax_loss.semilogy(kk, loss_cg, label='L(T @ c_cg)')
ax_loss.semilogy(kk, loss_co, label='L(T @ c_co)')
ax_loss.semilogy(kk, L(c_g) * np.ones(K), label='L(c_g)')
ax_approx.semilogy(kk, approx_inf_cg, label='||c_g - T @ c_cg||_inf')
ax_approx.semilogy(kk, approx_2_cg, label='||c_g - T @ c_cg||_2')
ax_approx.semilogy(kk, approx_inf_co, label='||c_g - T @ c_co||_inf')
ax_approx.semilogy(kk, approx_2_co, label='||c_g - T @ c_co||_2')
ax_loss.set_title('Loss due to Chebyshev approximation')
ax_loss.set_ylabel('Loss L')
ax_approx.set_title('Chebyshev approximation error')
ax_approx.set_ylabel('Approximation error')
for ax in ax_loss, ax_approx:
ax.set_xlabel('Polynomial order')
ax.set_xlim(1, K)
ax.legend(loc='best')
plt.show()
polynomial_order(60)
K = 15
t_start = time.process_time()
c_cg = cheby_coeff(K, f)
T = cheby_basis(K, x)
print('Execution time: {:1.0f}ms'.format((time.process_time() - t_start) * 1000))
# If the order is sufficient for a perfect (as good as c_gt) reconstruction (test only).
pol_order_is_sufficient = False
def filter_chebyshev(X, c):
    '''Filter X with the Chebyshev coefficients of the full filter.'''
return filter_full(X, T @ c)
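# Sanity check: with only c[0] = 1 the expansion reduces to T_0 = 1, i.e. the identity filter, so the output must equal X.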
c = np.zeros(K)
c[0] = 1
np.testing.assert_allclose(filter_chebyshev(X, c), X, atol=tol)
def Lc(c):
M, N = X.shape
return np.linalg.norm(filter_chebyshev(X, c) - Y, ord='fro')**2 / N
np.testing.assert_allclose(Lc(c_cg), L(T @ c_cg), atol=tol)
if pol_order_is_sufficient:
np.testing.assert_allclose(Lc(c_cg), M * eps**2, rtol=1e-2, atol=tol)
np.testing.assert_allclose(Lc(c_cg), L(c_g), atol=tol)
def dLc(X, Y, c):
M, N = X.shape
Xh = U.T @ X
Yh = U.T @ Y
return 2 / N * T.T @ (Xh * ((T @ c)[:,np.newaxis] * Xh - Yh)).sum(axis=1)
# Gradient should be null at the global minimum. With noise, c_cg is not necessarily the optimum.
if eps <= 0 and pol_order_is_sufficient:
np.testing.assert_allclose(dLc(X, Y, c_cg), 0, atol=tol)
t_start = time.process_time()
c_co = np.linalg.lstsq(T, c_o)[0]
print('Execution time: {:1.0f}ms'.format((time.process_time() - t_start) * 1000))
assert Lc(c_co) < Lc(c_cg) + tol
assert np.linalg.norm(dLc(X, Y, c_co)) < np.linalg.norm(dLc(X, Y, c_cg))
#np.testing.assert_allclose(dLc(X, Y, c_co), 0, atol=tol)
if eps <= 0 and pol_order_is_sufficient:
np.testing.assert_allclose(Lc(c_co), Lc(c_cg), atol=tol)
c0 = np.random.uniform(0, 1, K)
c_cs = sgd_plot_convergence(c0, Lc, dLc, [['0.005', 100]], crit=1e-3, maxit=100)[0]
def plot_coefficients(coeffs):
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,1,1)
for coeff in coeffs:
c = eval(coeff)
label = '{}: L={:1.2e}, |dL|={:1.2e}'.format(coeff, Lc(c), np.linalg.norm(dLc(X,Y,c)))
ax.plot(c, 'o', label=label)
# np.testing.assert_allclose(np.linalg.norm(T @ c)**2, E, rtol=1e-2)
ax.set_xlim(-1, K)
ax.set_title('Chebyshev expansion coefficients, M={}, N={}, K={}, eps={}'.format(M, N, K, eps))
ax.set_xlabel('number')
ax.set_ylabel('value')
ax.legend(loc='best')
plot_coefficients(['c_cs', 'c_co', 'c_cg'])
plot_filters(['T @ c_cs', 'T @ c_co', 'T @ c_cg', 'c_o'])
def rescale_L(L):
    '''Rescale the Laplacian eigenvalues in [-1,1].'''
M, M = L.shape
I = scipy.sparse.identity(M, format='csr')
return L / lmax * 2 - I
def cheby_basis_eval(L, X, K):
    '''
    Return T_k X where T_k are the Chebyshev polynomials of order up to K.
    Complexity is O(KMN).
    '''
M, N = X.shape
L = rescale_L(L)
# Xt = T @ X: MxM @ MxN.
Xt = np.empty((K, M, N))
# Xt_0 = T_0 X = I X = X.
Xt[0,...] = X
# Xt_1 = T_1 X = L X.
if K > 1:
Xt[1,...] = L.dot(X)
# Xt_k = 2 L Xt_k-1 - Xt_k-2.
for k in range(2, K):
Xt[k,...] = 2 * L.dot(Xt[k-1,...]) - Xt[k-2,...]
return Xt
np.testing.assert_allclose(cheby_basis_eval(LL, X, 1)[0,...], X)
def filter_basis(Xt, c):
K, M, N = Xt.shape
Y = c @ Xt.reshape((K, M*N))
Y.shape = (M, N)
return Y
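# Clenshaw's recurrence evaluates a Chebyshev series at x without explicitly building the full basis matrix.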
def eval_clenshaw(x, c):
K = len(c)
b2 = 0
b1 = c[K-1] * np.ones(x.shape) if K >= 2 else 0
for k in range(K-2, 0, -1):
b = c[k] + 2 * x * b1 - b2
b2, b1 = b1, b
return c[0] + x * b1 - b2
def test(K, N=100):
x = np.linspace(-1, 1, N)
T = np.empty((N, K))
for k in range(K):
c = np.zeros(k+1)
c[k] = 1
T[:,k] = eval_clenshaw(x, c)
np.testing.assert_allclose(T, cheby_basis(K, x))
test(50)
def filter_clenshaw(L, X, c):
K = len(c)
L = rescale_L(L)
B2 = 0
B1 = c[K-1] * X if K >= 2 else np.zeros(X.shape)
for k in range(K-2, 0, -1):
B = c[k] * X + 2 * L.dot(B1) - B2
B2, B1 = B1, B
return c[0] * X + L.dot(B1) - B2
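# Check that the full spectral filter, the explicit Chebyshev basis, and Clenshaw's recurrence all give the same output, and compare their run times.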
def test(c):
t_start = time.process_time()
T = cheby_basis(len(c), x)
Y1 = filter_full(X, T @ c)
t_full = (time.process_time() - t_start) * 1000
t_start = time.process_time()
Xt = cheby_basis_eval(LL, X, len(c))
Y2 = filter_basis(Xt, c)
t_cheby = (time.process_time() - t_start) * 1000
np.testing.assert_allclose(Y1, Y2, atol=tol)
t_start = time.process_time()
Y2 = filter_clenshaw(LL, X, c)
t_clenshaw = (time.process_time() - t_start) * 1000
np.testing.assert_allclose(Y1, Y2, atol=tol)
print('Execution times: {:1.0f}ms {:1.0f}ms {:1.0f}ms'.format(t_full, t_cheby, t_clenshaw))
test(np.array([1]))
test(np.array([1,0,0,0]))
test(np.array([0,1,0,0]))
test(np.array([0,0,1,0]))
test(np.array([0,0,0,1]))
test(np.random.uniform(0, 5, size=100))
test(c_cg)
test(c_co)
def vectorize(Xt, Y):
K, M, N = Xt.shape
return Xt.reshape((K, M*N)), Y.reshape((M*N))
def Lcr(c):
Xt = cheby_basis_eval(LL, X, len(c))
return np.linalg.norm(filter_basis(Xt, c) - Y, ord='fro')**2 / N
def dLcr(X, Y, c):
Xt = cheby_basis_eval(LL, X, len(c))
Xb, yb = vectorize(Xt, Y)
return 2 / N * (c @ Xb - yb) @ Xb.T
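# Fit the expansion coefficients by least squares directly in the vertex domain (no eigendecomposition of the Laplacian is needed here).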
def basis_coeff_opt(basis_eval, X, Y, K):
Xt = basis_eval(LL, X, K)
Xb, yb = vectorize(Xt, Y)
return np.linalg.lstsq(Xb.T, yb)[0]
t_start = time.process_time()
c_cro = basis_coeff_opt(cheby_basis_eval, X, Y, K)
print('Execution time: {:1.0f}ms'.format((time.process_time() - t_start) * 1000))
np.testing.assert_allclose(Lcr(c_cro), L(T @ c_cro), atol=tol)
assert Lcr(c_cro) < Lcr(c_cg) + tol
assert Lcr(c_cro) < Lcr(c_co) + tol
if pol_order_is_sufficient:
np.testing.assert_allclose(Lcr(c_cro), M * eps**2, rtol=2e-2, atol=tol)
if eps <= 0 and pol_order_is_sufficient:
np.testing.assert_allclose(Lcr(c_cro), Lcr(c_co), atol=tol)
np.testing.assert_allclose(dLcr(X, Y, c_cro), 0, atol=1e-8)
assert np.linalg.norm(dLcr(X, Y, c_cro)) < np.linalg.norm(dLcr(X, Y, c_cg)) + tol
assert np.linalg.norm(dLcr(X, Y, c_cro)) < np.linalg.norm(dLcr(X, Y, c_co)) + tol
def cheby_coeff_direct(X, Y, K, svd=False):
Xt = cheby_basis_eval(LL, X, K)
Xb, yb = vectorize(Xt, Y)
if svd:
Xpinv = np.linalg.pinv(Xb.T)
else:
A = Xb @ Xb.T
Ainv = np.linalg.inv(A)
#np.testing.assert_allclose(Ainv @ A, np.identity(K), atol=1e-5)
Xpinv = Ainv @ Xb
#np.testing.assert_allclose(Xpinva, Xpinv, atol=tol)
return Xpinv @ yb
c_crd = cheby_coeff_direct(X, Y, K)
params = []
params.append(['0.01', 100])
params.append(['(80*n)**-1.', 100])
params.append(['rate(.01,.9,n)', 100])
c0 = np.random.uniform(0, 1, K)
c_crs = sgd_plot_convergence(c0, Lcr, dLcr, params, crit=1e-3, maxit=100)[0]
plot_coefficients(['c_crs', 'c_crd', 'c_cro', 'c_cs', 'c_co', 'c_cg'])
plot_filters(['T @ c_crs', 'T @ c_crd', 'T @ c_cro', 'c_o'])
def lanczos(L, X, K):
M, N = X.shape
a = np.empty((K, N))
b = np.zeros((K, N))
V = np.empty((K, M, N))
V[0,...] = X / np.linalg.norm(X, axis=0)
for k in range(K-1):
W = L.dot(V[k,...])
a[k,:] = np.sum(W * V[k,...], axis=0)
W = W - a[k,:] * V[k,...] - (b[k,:] * V[k-1,...] if k>0 else 0)
b[k+1,:] = np.linalg.norm(W, axis=0)
V[k+1,...] = W / b[k+1,:]
a[K-1,:] = np.sum(L.dot(V[K-1,...]) * V[K-1,...], axis=0)
return V, a, b
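# lanczos_H_diag diagonalizes the K x K tridiagonal matrix H (a on the diagonal, b off the diagonal) for every signal; the impl flag selects between equivalent implementations that are benchmarked below.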
def lanczos_H_diag(a, b, impl=5):
K, N = a.shape
H = np.empty((K, K, N))
Q = np.empty((K, K, N))
Sigma = np.empty((K, N))
if impl is 1:
for n in range(N):
H[...,n] = np.diag(a[:,n]) + np.diag(b[1:,n], +1) + np.diag(b[1:,n], -1)
#H[...,n] = V[...,n] @ LL.dot(V[...,n].T)
#Sigma[:,n], Q[...,n] = np.linalg.eig(H[...,n])
Sigma[:,n], Q[...,n] = np.linalg.eigh(H[...,n])
elif impl is 2:
for n in range(N):
H[...,n] = np.diag(a[:,n]) + np.diag(b[1:,n], -1)
Sigma[:,n], Q[...,n] = np.linalg.eigh(H[...,n], UPLO='L')
elif impl is 3:
for n in range(N):
H[...,n] = np.diag(a[:,n]) + np.diag(b[1:,n], +1) + np.diag(b[1:,n], -1)
Sigma, Q = np.linalg.eigh(H.T)
Q = np.swapaxes(Q,1,2).T
Sigma = Sigma.T
elif impl is 4:
for n in range(N):
H[...,n] = np.diag(a[:,n]) + np.diag(b[1:,n], +1)
Sigma, Q = np.linalg.eigh(H.T, UPLO='L')
Q = np.swapaxes(Q,1,2).T
Sigma = Sigma.T
elif impl is 5:
H = np.zeros((K*K, N))
H[:K**2:K+1, :] = a
H[1:(K-1)*K:K+1, :] = b[1:,:]
H.shape = (K, K, N)
Sigma, Q = np.linalg.eigh(H.T, UPLO='L')
Q = np.swapaxes(Q,1,2).T
Sigma = Sigma.T
elif impl is 6:
H = np.zeros((K, K, N))
h = np.diagonal(H, 0, 0, 1)
h.setflags(write=True)
h[:] = a.T
h = np.diagonal(H, 1, 0, 1)
h.setflags(write=True)
h[:] = b[1:,:].T
Sigma, Q = np.linalg.eigh(H.T, UPLO='L')
Q = np.swapaxes(Q,1,2).T
Sigma = Sigma.T
elif impl is 7:
H_band = np.empty((2, K, N))
H_band[0,...] = b
H_band[1,...] = a
for n in range(N):
Sigma[:,n], Q[...,n] = scipy.linalg.eig_banded(H_band[...,n])
return H, Q, Sigma
def test():
V, a, b = lanczos(LL, X, K)
H, Q, Sigma = lanczos_H_diag(a, b, impl=1)
for impl in range(1, 8):
tstart = time.process_time()
h, q, sigma = lanczos_H_diag(a, b, impl)
t = (time.process_time() - tstart) * 1000
print('Implementation {}: {:.2f} ms'.format(impl, t))
np.testing.assert_allclose(sigma, Sigma)
np.testing.assert_allclose(q, Q)
# Spectrum is preserved.
assert np.all(Sigma > -tol)
assert np.all(Sigma < lmax -tol)
# Orthonormal bases V and Q.
def test_orthonormality(A):
K, _, N = A.shape
A = A.reshape(K, -1)
np.testing.assert_allclose(A @ A.T, N * np.identity(K), atol=1e-6)
test_orthonormality(V)
test_orthonormality(Q)
# Verify H = Q Sigma Q.T
Hh = np.empty((K, K, N))
for n in range(N):
Hh[...,n] = Q[...,n] @ np.diag(Sigma[:,n]) @ Q[...,n].T
np.testing.assert_allclose(Hh, H, atol=tol)
# Verify L approx V H V.T
err = 0
for n in range(N):
Lh = V[...,n].T @ H[...,n] @ V[...,n]
err += np.linalg.norm(LL - Lh) / scipy.sparse.linalg.norm(LL)
print('Laplacian approximation error: {}'.format(err / N))
# Verify H = V L V.T
for n in range(N):
np.testing.assert_allclose(V[...,n] @ LL.dot(V[...,n].T), H[...,n], atol=1e-5)
# Test the Lanczos coefficients for a linear LL.dot(x).
for n in range(N):
y = LL.dot(X[:,n])
e1 = np.zeros(K)
e1[0] = 1
c_lg = np.linalg.norm(X[:,n]) * H[...,n] @ e1
c_lo = np.linalg.lstsq(V[...,n].T, y)[0]
np.testing.assert_allclose(c_lo, c_lg, atol=tol)
np.testing.assert_allclose(c_lg @ V[...,n], LL.dot(X[:,n]), atol=tol)
test()
def lanczos_basis_eval_f(L, X, K):
V, a, b = lanczos(L, X, K)
H, Q, Sigma = lanczos_H_diag(a, b)
return V, Q
def lanczos_representation_f(X, Y, K, impl=2):
V, Q = lanczos_basis_eval_f(LL, X, K)
if impl is 1:
Xh = np.empty((K, N))
Yh = np.empty((K, N))
for n in range(N):
Xh[:,n] = Q[...,n].T @ V[...,n] @ X[:,n]
Yh[:,n] = Q[...,n].T @ V[...,n] @ Y[:,n]
elif impl is 2:
Xh = np.empty((K, N))
Yh = np.empty((K, N))
for n in range(N):
QV = Q[...,n].T @ V[...,n]
Xh[:,n] = QV @ X[:,n]
Yh[:,n] = QV @ Y[:,n]
elif impl is 3:
QV = np.empty((K, M, N))
for n in range(N):
QV[...,n] = Q[...,n].T @ V[...,n]
Xh = np.einsum('ijl,jl->il', QV, X)
Yh = np.einsum('ijl,jl->il', QV, Y)
elif impl is 4:
QV = np.einsum('ijl,ikl->jkl', Q, V)
Xh = np.einsum('ijl,jl->il', QV, X)
Yh = np.einsum('ijl,jl->il', QV, Y)
return Xh, Yh
def test():
Xh, Yh = lanczos_representation_f(X, Y, K, impl=1)
for impl in range(1, 5):
tstart = time.process_time()
xh, yh = lanczos_representation_f(X, Y, K, impl=impl)
t = (time.process_time() - tstart) * 1000
print('Implementation {}: {:.2f} ms'.format(impl, t))
np.testing.assert_allclose(xh, Xh)
np.testing.assert_allclose(yh, Yh)
test()
def lanczos_coeff_direct_f(X, Y, K, impl=2):
Xh, Yh = lanczos_representation_f(X, Y, K)
return (Xh * Yh).sum(axis=1) / (Xh * Xh).sum(axis=1)
def filter_lanczos_f(X, c):
V, Q = lanczos_basis_eval_f(LL, X, len(c))
Y = np.empty((M, N))
for n in range(N):
QV = Q[...,n].T @ V[...,n]
Y[:,n] = QV.T @ np.diag(c) @ QV @ X[:,n]
return Y
def Lc_f(c):
return np.linalg.norm(filter_lanczos_f(X, c) - Y, ord='fro')**2 / N
def dLc_f(X, Y, c):
M, N = X.shape
Xh, Yh = lanczos_representation_f(X, Y, K)
return 2 / N * ((c[:,np.newaxis] * Xh - Yh) * Xh).sum(axis=1)
c_lf = lanczos_coeff_direct_f(X, Y, K)
np.testing.assert_allclose(dLc_f(X, Y, c_lf), 0, atol=tol)
def lanczos_basis_eval(L, X, K, ret_q=False, impl=2):
V, a, b = lanczos(L, X, K)
H, Q, Sigma = lanczos_H_diag(a, b)
M, N = X.shape
if impl is 1:
Xt = np.empty((K, M, N))
for n in range(N):
Xt[...,n] = np.linalg.norm(X[:,n]) * np.diag(Q[0,:,n]) @ Q[...,n].T @ V[...,n]
#Xt[...,n] = np.linalg.norm(X[:,n]) * Q[...,n].T @ V[...,n]
#Xt[...,n] = np.linalg.norm(X[:,n]) * V[...,n]
# This is the fastest.
elif impl is 2:
Xt = np.empty((K, M, N))
for n in range(N):
Xt[...,n] = Q[...,n].T @ V[...,n]
Xt *= Q[0,:,np.newaxis,:]
Xt *= np.linalg.norm(X, axis=0)
# Too bad that einsum is so slow...
elif impl is 3:
Xt = np.einsum('ijl,ikl->jkl', Q, V)
Xt *= Q[0,:,np.newaxis,:]
Xt *= np.linalg.norm(X, axis=0)
if ret_q:
return Xt, Q
else:
return Xt
def test():
Xt = lanczos_basis_eval(LL, X, K, impl=1)
for impl in range(1, 4):
tstart = time.process_time()
xt = lanczos_basis_eval(LL, X, K, impl=impl)
t = (time.process_time() - tstart) * 1000
print('Implementation {}: {:.2f} ms'.format(impl, t))
np.testing.assert_allclose(xt, Xt)
test()
def lanczos_coeff_direct(X, Y, K):
Xt, Q = lanczos_basis_eval(LL, X, K, True)
Xb, yb = vectorize(Xt, Y)
return Xb @ yb / np.sum((np.linalg.norm(X, axis=0) * Q[0,...])**2, axis=1)
#return Xb @ yb / np.linalg.norm(X, ord='fro')**2
#return Xb @ yb / N
def Lc(c):
Xt = lanczos_basis_eval(LL, X, len(c))
return np.linalg.norm(filter_basis(Xt, c) - Y, ord='fro')**2 / N
def dLc(X, Y, c):
Xt = lanczos_basis_eval(LL, X, len(c))
Xb, yb = vectorize(Xt, Y)
return 2 / N * (c @ Xb - yb) @ Xb.T
c_ld = lanczos_coeff_direct(X, Y, K)
np.testing.assert_allclose(c_ld, c_lf, atol=tol)
np.testing.assert_allclose(dLc(X, Y, c_ld), 0, atol=tol)
c_lo = basis_coeff_opt(lanczos_basis_eval, X, Y, K)
np.testing.assert_allclose(c_lo, c_lf, atol=tol)
np.testing.assert_allclose(dLc(X, Y, c_lo), 0, atol=tol)
print('Coefficients energy E = {}'.format(np.linalg.norm(c_lo)**2))
def test():
c = np.random.uniform(0, 1, K)
np.testing.assert_allclose(Lc_f(c), Lc(c))
np.testing.assert_allclose(dLc_f(X, Y, c), dLc(X, Y, c))
c0 = np.random.uniform(0, 1, K)
c_ls = sgd_plot_convergence(c0, Lc, dLc, [['0.3', 100]], crit=1e-3, maxit=100)[0]
plot_coefficients(['c_ls', 'c_ld', 'c_lo', 'c_lf'])
def c_l(n):
Xt = lanczos_basis_eval(LL, X, K)
return Xt[...,n] @ c_o
plot_coefficients(['c_lo', 'c_l(0)', 'c_l(20)', 'c_l(100)'])
def c_l(n):
x = X[:,n:n+1]
Xt = lanczos_basis_eval(LL, x, K)
yh = U.T @ filter_basis(Xt, c_lo)
xh = U.T @ x
return (yh / xh).squeeze()
return (U.T @ (c_lo @ V[...,n])) / (U.T @ X[:,n])
plot_filters(['c_o', 'c_l(0)', 'c_l(20)', 'c_l(100)'])
def polynomial_order(K, step=1):
kk = np.arange(1, K+1, step)
loss_lo = np.empty(len(kk))
loss_ld = np.empty(len(kk))
loss_lf = np.empty(len(kk))
loss_cro = np.empty(len(kk))
loss_crd = np.empty(len(kk))
time_lo = np.empty(len(kk))
time_ld = np.empty(len(kk))
time_lf = np.empty(len(kk))
time_cro = np.empty(len(kk))
time_crd = np.empty(len(kk))
for i, k in enumerate(kk):
tstart = time.process_time()
c_lo = basis_coeff_opt(lanczos_basis_eval, X, Y, k)
loss_lo[i] = Lc(c_lo)
time_lo[i] = time.process_time() - tstart
tstart = time.process_time()
c_ld = lanczos_coeff_direct(X, Y, k)
loss_ld[i] = Lc(c_ld)
time_ld[i] = time.process_time() - tstart
tstart = time.process_time()
c_lf = lanczos_coeff_direct_f(X, Y, k)
loss_lf[i] = Lc_f(c_lf)
time_lf[i] = time.process_time() - tstart
tstart = time.process_time()
c_cro = basis_coeff_opt(cheby_basis_eval, X, Y, k)
loss_cro[i] = Lcr(c_cro)
time_cro[i] = time.process_time() - tstart
tstart = time.process_time()
c_crd = cheby_coeff_direct(X, Y, k)
loss_crd[i] = Lcr(c_crd)
time_crd[i] = time.process_time() - tstart
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,1,1)
ax.plot(kk, loss_lo, label='Lanczos (fast SVD)')
ax.plot(kk, loss_ld, label='Lanczos (fast closed-form)')
ax.plot(kk, loss_lf, label='Lanczos (closed-form)')
ax.plot(kk, loss_cro, label='Chebyshev (SVD)')
ax.plot(kk, loss_crd, label='Chebyshev (closed-form)')
ax.plot(kk, L(c_o) * np.ones(len(kk)), label='Optimal filter')
ax.set_title('Loss due to approximation')
ax.set_ylabel('Loss L')
ax.set_xlabel('Polynomial order')
ax.set_xlim(1, K)
ax.legend(loc='best')
plt.show()
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,1,1)
ax.plot(kk, time_lo * 1000, label='Lanczos (fast SVD)')
ax.plot(kk, time_ld * 1000, label='Lanczos (fast closed-form)')
ax.plot(kk, time_lf * 1000, label='Lanczos (closed-form)')
ax.plot(kk, time_cro * 1000, label='Chebyshev (SVD)')
ax.plot(kk, time_crd * 1000, label='Chebyshev (closed-form)')
ax.set_title('Execution time')
ax.set_ylabel('Time [ms]')
ax.set_xlabel('Polynomial order')
ax.set_xlim(1, K)
ax.legend(loc='best')
plt.show()
polynomial_order(30, 1)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Read the data
Step2: Let's look at the first five rows
Step3: What is the size of the table?
Step4: What are the types of the data?
Step5: What are the values of the type column?
Step6: How many different ids are there?
Step7: What is the type of the timestamps?
Step8: They are strings; it would be better if they were dates, so let's fix that
Step9: Now the time column contains datetime objects
Step10: We will now create a simple scatter plot with all the X and Y values in our subsample
Step11: It looks very similar to the paths in the map
Step12: Let's look at the range of the location data
Step13: Now let's create a 2D histogram to see which locations are more popular. We will use the hist2d function
Step14: We can increase the number of bins
Step15: 3. Single guest
Step16: Now let's try to use the time information
Step17: At what time did he arrive?
Step18: At what time did he leave?
Step19: So how long did he stay?
Step20: 4. Single time frame
Step21: Let's add some jitter
Step22: 5. Time analysis
Step23: Now for the question of park occupancy, we need to group the dataframe by time
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import seaborn as sns
from matplotlib import pyplot as plt
import numpy as np
sns.set_style("darkgrid")
%cd C:\Users\Profesor\Documents\curso_va_2015\va_course_2015
df = pd.read_csv("../MC1 2015 Data/park-movement-Fri.csv")
df.head()
df.shape
df.dtypes
df["type"].unique()
df.groupby("type")["type"].count()
df["id"].unique().shape
pd.pivot_table(df,columns="type", values="X", index="id", aggfunc=len).head()
pd.pivot_table(df,columns="type", values="X", index="id", aggfunc=len).mean()
type(df.Timestamp[0])
df["time"] = pd.to_datetime(df.Timestamp, format="%Y-%m-%d %H:%M:%S")
df.tail()
df.dtypes
df_small = df.sample(10000)
df_small.shape
df_small.plot("X","Y","scatter")
df_small.loc[df_small["type"]=="check-in"].plot("X","Y","scatter")
df["X"].min()
df["X"].max()
df["Y"].min()
df["Y"].max()
cnts, xe, ye, img = plt.hist2d(df_small["X"], df_small["Y"],range=((0,100),(0,100)),normed=True)
cnts, xe, ye, img = plt.hist2d(df_small["X"], df_small["Y"],range=((0,100),(0,100)),normed=True, bins=20)
df_small.plot("X","Y","hexbin")
guest_id = np.random.choice(df["id"])
guest_df = df.loc[df["id"]==guest_id]
guest_df.shape
guest_df.plot("X","Y","scatter")
plt.scatter(guest_df["X"],guest_df["Y"],c=guest_df["time"])
guest_df["time"].min()
guest_df["time"].max()
guest_df["time"].max() - guest_df["time"].min()
noon_dates = (df["time"] < '2014-06-06 12:05:00') & (df["time"] >= '2014-06-06 12:00:00')
noon_df = df.loc[noon_dates]
noon_df.shape
plt.scatter(noon_df["X"], noon_df["Y"], alpha=0.01, marker="o", s=30)
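# Add a little uniform jitter (up to 5 units) to both coordinates so that overlapping points spread out and the density becomes visible.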
plt.scatter(noon_df["X"] +5*np.random.random(len(noon_df))
,noon_df["Y"]+5*np.random.random(len(noon_df)),
alpha=0.01, marker="o", s=30)
grouped_times = df.groupby("id")["time"]
arrivals = grouped_times.min()
departures = grouped_times.max()
duration = departures - arrivals
sns.distplot(arrivals.dt.hour+arrivals.dt.minute/60)
sns.distplot(departures.dt.hour+departures.dt.minute/60)
h_duration = duration.dt.seconds/60/60
sns.distplot(h_duration)
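# Park occupancy over the day: group records by hour and count the number of unique visitor ids present in each hour.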
time_groups = df.groupby(df.time.dt.hour)
occupancy = time_groups["id"].aggregate(lambda x:len(np.unique(x)))
occupancy.plot()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step3: Details of the "Happy" dataset
Step4: You have now built a function to describe your model. To train and test this model, there are four steps in Keras
Step5: Exercise
Step6: Exercise
Step7: Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
Step8: If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.
Step9: 5 - Other useful functions in Keras (Optional)
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
    '''
    Implementation of the HappyModel.

    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    '''
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and also try out other
# network architectures as well.
X_input = Input(input_shape)
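    # Architecture used here: ZeroPadding -> Conv2D -> BatchNorm -> ReLU -> MaxPool -> Flatten -> Dense(sigmoid)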
X = ZeroPadding2D((3, 3))(X_input)
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
X = MaxPooling2D((2, 2), name = 'max_pool')(X)
X = Flatten()(X)
X = Dense(1, activation = 'sigmoid', name = 'fc')(X)
model = Model(inputs = X_input, outputs = X, name = 'HappyModel')
### END CODE HERE ###
return model
### START CODE HERE ### (1 line)
happyModel = HappyModel((64, 64, 3))
### END CODE HERE ###
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
### END CODE HERE ###
### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs = 40, batch_size = 16)
### END CODE HERE ###
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x=X_test, y=Y_test, batch_size=16, verbose=1, sample_weight=None)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
### START CODE HERE ###
img_path = 'images/64.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Functions of the class
Step2: add_constraints
Step3: update
Step4: optimize
Step5: Test example #1
Step6: Test example #2
Step7: Test example #3
Step8: Test example #4
Step9: Test example #5
Step10: Submitting the results
| <ASSISTANT_TASK:>
Python Code:
from gachon_lp_solver import GachonLPSolver # import the GachonLPSolver class from the gachon_lp_solver file (module)
lpsover = GachonLPSolver("test_example") # assign "test_example" to model_name, the first argument of the GachonLPSolver class
lpsover.model_name
import numpy as np
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
lpsolver = GachonLPSolver("test_example")
lpsolver.model_name
objective_coefficient_vector = np.array([30, 40])
lpsolver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lpsolver.objective_variables
import numpy as np
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
lpsolver = GachonLPSolver("test_example")
lpsolver.model_name
objective_coefficient_vector = np.array([30, 40])
lpsolver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lpsolver.add_constraints([2, 1], GachonLPSolver.LESSEQUAL, 8)
lpsolver.add_constraints([1, 3], GachonLPSolver.LESSEQUAL, 8)
lpsolver.constraints_coefficient_matrix
lpsolver.constraints_sign_list
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
lpsolver = GachonLPSolver("test_example")
lpsolver.model_name
objective_coefficient_vector = np.array([30, 40])
lpsolver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lpsolver.add_constraints([2, 1], GachonLPSolver.LESSEQUAL, 8)
lpsolver.add_constraints([1, 3], GachonLPSolver.LESSEQUAL, 8)
lpsolver.update()
lpsolver.standard_form_matrix
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
lpsolver = GachonLPSolver("test_example")
lpsolver.model_name
objective_coefficient_vector = [30, 40]
lpsolver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lpsolver.add_constraints([2, 1], GachonLPSolver.LESSEQUAL, 8)
lpsolver.add_constraints([1, 3], GachonLPSolver.LESSEQUAL, 8)
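# Problem being solved in this test: maximize 30*x1 + 40*x2 subject to 2*x1 + x2 <= 8 and x1 + 3*x2 <= 8.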
lpsolver.update()
lpsolver.optimize()
lpsolver.standard_form_matrix
lpsolver.get_z_value()
lpsolver.get_objective_variables()
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
lpsolver = GachonLPSolver("test_example")
objective_coefficient_vector = [60, 30, 20]
lpsolver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lpsolver.add_constraints([8, 6, 1], GachonLPSolver.LESSEQUAL, 48)
lpsolver.add_constraints([4, 2, 1.5], GachonLPSolver.LESSEQUAL, 20)
lpsolver.add_constraints([2, 1.5, 0.5], GachonLPSolver.LESSEQUAL, 8)
lpsolver.add_constraints([0, 1, 0], GachonLPSolver.LESSEQUAL, 5)
lpsolver.update()
lpsolver.standard_form_matrix
# [[ 1. -60. -30. -20. 0. 0. 0. 0. 0. ]
# [ 0. 8. 6. 1. 1. 0. 0. 0. 48. ]
# [ 0. 4. 2. 1.5 0. 1. 0. 0. 20. ]
# [ 0. 2. 1.5 0.5 0. 0. 1. 0. 8. ]
# [ 0. 0. 1. 0. 0. 0. 0. 1. 5. ]]
lpsolver.optimize()
print(lpsolver.standard_form_matrix)
# [[ 1. 0. 5. 0. 0. 10. 10. 0. 280. ]
# [ 0. 0. -2. 0. 1. 2. -8. 0. 24. ]
# [ 0. 0. -2. 1. 0. 2. -4. 0. 8. ]
# [ 0. 1. 1.25 0. 0. -0.5 1.5 0. 2. ]
# [ 0. 0. 1. 0. 0. 0. 0. 1. 5. ]]
lpsolver.get_z_value() # 280.0
lpsolver.get_objective_variables() # [2.0, 0, 8.0]
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
model_name = "example_model"
lp_solver = GachonLPSolver(model_name)
objective_coefficient_vector = np.array([3, 2])
lp_solver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lp_solver.add_constraints([2, 1], GachonLPSolver.LESSEQUAL, 100)
lp_solver.add_constraints([1, 1], GachonLPSolver.LESSEQUAL, 80)
lp_solver.add_constraints([1, 0], GachonLPSolver.LESSEQUAL, 40)
lp_solver.update()
print(lp_solver.standard_form_matrix)
# [[ 1. -3. -2. 0. 0. 0. 0.]
# [ 0. 2. 1. 1. 0. 0. 100.]
# [ 0. 1. 1. 0. 1. 0. 80.]
# [ 0. 1. 0. 0. 0. 1. 40.]]
lp_solver.optimize()
print(lp_solver.standard_form_matrix)
# [[ 1. 0. 0. 1. 1. 0. 180.]
# [ 0. 0. 1. -1. 2. 0. 60.]
# [ 0. 0. 0. -1. 1. 1. 20.]
# [ 0. 1. 0. 1. -1. 0. 20.]]
print(lp_solver.get_z_value())
print(lp_solver.get_objective_variables())
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
model_name = "example_model"
lp_solver = GachonLPSolver(model_name)
objective_coefficient_vector = np.array([20, 15])
lp_solver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lp_solver.add_constraints([1, 0], GachonLPSolver.LESSEQUAL, 100)
lp_solver.add_constraints([0, 1], GachonLPSolver.LESSEQUAL, 100)
lp_solver.add_constraints([50, 35], GachonLPSolver.LESSEQUAL, 6000)
lp_solver.add_constraints([20, 15], GachonLPSolver.GRATERQUAL, 2000)
lp_solver.update()
lp_solver.standard_form_matrix
# array([[ 1.00000000e+00, -2.00000000e+01, -1.50000000e+01,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00],
# [ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
# 1.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# -0.00000000e+00, 1.00000000e+02],
# [ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
# 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
# -0.00000000e+00, 1.00000000e+02],
# [ 0.00000000e+00, 5.00000000e+01, 3.50000000e+01,
# 0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
# -0.00000000e+00, 6.00000000e+03],
# [ 0.00000000e+00, 2.00000000e+01, 1.50000000e+01,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# -1.00000000e+00, 2.00000000e+03]])
lp_solver.optimize()
print(lp_solver.standard_form_matrix)
# [[ 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 1.00000000e+00 4.00000000e-01 0.00000000e+00 2.50000000e+03]
# [ 0.00000000e+00 1.00000000e+00 0.00000000e+00 0.00000000e+00
# -7.00000000e-01 2.00000000e-02 -0.00000000e+00 5.00000000e+01]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00
# 7.00000000e-01 -2.00000000e-02 0.00000000e+00 5.00000000e+01]
# [ 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
# 1.00000000e+00 -3.46944695e-18 0.00000000e+00 1.00000000e+02]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# -1.00000000e+00 -4.00000000e-01 -1.00000000e+00 -5.00000000e+02]]
lp_solver.get_z_value()
# 2500.0
lp_solver.get_objective_variables()
# [50.0, 100.0]
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
model_name = "example_model"
lp_solver = GachonLPSolver(model_name)
objective_coefficient_vector = np.array([13, 16, 16, 14, 39])
lp_solver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lp_solver.add_constraints([11, 53, 5, 5, 29], GachonLPSolver.LESSEQUAL, 40)
lp_solver.add_constraints([3, 6, 5, 1, 34], GachonLPSolver.LESSEQUAL, 20)
lp_solver.add_constraints([1, 0, 0, 0, 0], GachonLPSolver.LESSEQUAL, 1)
lp_solver.add_constraints([0, 1, 0, 0, 0], GachonLPSolver.LESSEQUAL, 1)
lp_solver.add_constraints([0, 0, 1, 0, 0], GachonLPSolver.LESSEQUAL, 1)
lp_solver.add_constraints([0, 0, 0, 1, 0], GachonLPSolver.LESSEQUAL, 1)
lp_solver.add_constraints([0, 0, 0, 0, 1], GachonLPSolver.LESSEQUAL, 1)
lp_solver.update()
lp_solver.standard_form_matrix
# array([[ 1., -13., -16., -16., -14., -39., 0., 0., 0., 0., 0.,
# 0., 0., 0.],
# [ 0., 11., 53., 5., 5., 29., 1., 0., 0., 0., 0.,
# 0., 0., 40.],
# [ 0., 3., 6., 5., 1., 34., 0., 1., 0., 0., 0.,
# 0., 0., 20.],
# [ 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
# 0., 0., 1.],
# [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0.,
# 0., 0., 1.],
# [ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.,
# 0., 0., 1.],
# [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
# 1., 0., 1.],
# [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
# 0., 1., 1.]])
lp_solver.optimize()
print(lp_solver.standard_form_matrix)
# [[ 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 1.90417690e-01 9.84643735e-01
# 7.95147420e+00 0.00000000e+00 1.01246929e+01 1.20632678e+01
# 0.00000000e+00 5.74490172e+01]
# [ 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 2.08845209e-02 -1.78132678e-02
# -1.76289926e-01 0.00000000e+00 -1.53562654e-02 -8.66093366e-02
# 0.00000000e+00 2.00859951e-01]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 1.00000000e+00 -3.68550369e-03 3.25552826e-02
# -5.71253071e-02 0.00000000e+00 -1.44348894e-01 -1.41277641e-02
# 0.00000000e+00 2.88083538e-01]
# [ 0.00000000e+00 1.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 1.00000000e+00]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 -2.08845209e-02 1.78132678e-02
# 1.76289926e-01 1.00000000e+00 1.53562654e-02 8.66093366e-02
# 0.00000000e+00 7.99140049e-01]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00
# 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
# 0.00000000e+00 1.00000000e+00]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00
# 0.00000000e+00 1.00000000e+00]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 3.68550369e-03 -3.25552826e-02
# 5.71253071e-02 0.00000000e+00 1.44348894e-01 1.41277641e-02
# 1.00000000e+00 7.11916462e-01]]
lp_solver.get_z_value()
# 57.449017199017206
lp_solver.get_objective_variables()
# [1.0, 0.20085995085995084, 1.0, 1.0, 0.28808353808353804]
import importlib
import gachon_lp_solver
from gachon_lp_solver import GachonLPSolver
importlib.reload(gachon_lp_solver) # reload the module whenever gachon_lp_solver.py is modified
model_name = "example_model"
lp_solver = GachonLPSolver(model_name)
objective_coefficient_vector = np.array([5000, 8500, 2400, 2800])
lp_solver.set_objective_variables(objective_coefficient_vector, GachonLPSolver.MAXIMIZE)
lp_solver.add_constraints([1, 0, 0, 0], GachonLPSolver.LESSEQUAL, 12)
lp_solver.add_constraints([0, 1, 0, 0], GachonLPSolver.LESSEQUAL, 5)
lp_solver.add_constraints([0, 0, 1, 0], GachonLPSolver.LESSEQUAL, 25)
lp_solver.add_constraints([0, 0, 0, 1], GachonLPSolver.LESSEQUAL, 20)
lp_solver.add_constraints([800, 925, 290, 380], GachonLPSolver.LESSEQUAL, 8000)
lp_solver.add_constraints([0, 0, 1, 1], GachonLPSolver.GRATERQUAL, 5)
lp_solver.add_constraints([0, 0, 290, 380], GachonLPSolver.LESSEQUAL, 1800)
lp_solver.update()
lp_solver.standard_form_matrix
# array([[ 1.00000000e+00, -5.00000000e+03, -8.50000000e+03,
# -2.40000000e+03, -2.80000000e+03, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00],
# [ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, -0.00000000e+00, 0.00000000e+00,
# 1.20000000e+01],
# [ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 1.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, -0.00000000e+00, 0.00000000e+00,
# 5.00000000e+00],
# [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 1.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, -0.00000000e+00, 0.00000000e+00,
# 2.50000000e+01],
# [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
# 0.00000000e+00, -0.00000000e+00, 0.00000000e+00,
# 2.00000000e+01],
# [ 0.00000000e+00, 8.00000000e+02, 9.25000000e+02,
# 2.90000000e+02, 3.80000000e+02, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 1.00000000e+00, -0.00000000e+00, 0.00000000e+00,
# 8.00000000e+03],
# [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 1.00000000e+00, 1.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, -1.00000000e+00, 0.00000000e+00,
# 5.00000000e+00],
# [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 2.90000000e+02, 3.80000000e+02, 0.00000000e+00,
# 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
# 0.00000000e+00, -0.00000000e+00, 1.00000000e+00,
# 1.80000000e+03]])
lp_solver.optimize()
print(lp_solver.standard_form_matrix)
# [[ 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 3.44827586e+02 0.00000000e+00 2.71875000e+03 0.00000000e+00
# 0.00000000e+00 6.25000000e+00 0.00000000e+00 2.02586207e+00
# 6.72403017e+04]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 1.38777878e-17 1.00000000e+00 1.15625000e+00 0.00000000e+00
# 0.00000000e+00 -1.25000000e-03 0.00000000e+00 1.25000000e-03
# 1.00312500e+01]
# [ 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 5.00000000e+00]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# -1.31034483e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00
# 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.44827586e-03
# 1.87931034e+01]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 2.00000000e+01]
# [ 0.00000000e+00 1.00000000e+00 0.00000000e+00 0.00000000e+00
# -1.38777878e-17 0.00000000e+00 -1.15625000e+00 0.00000000e+00
# 0.00000000e+00 1.25000000e-03 0.00000000e+00 -1.25000000e-03
# 1.96875000e+00]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00
# 1.31034483e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 0.00000000e+00 3.44827586e-03
# 6.20689655e+00]
# [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 3.10344828e-01 0.00000000e+00 0.00000000e+00 0.00000000e+00
# 0.00000000e+00 0.00000000e+00 1.00000000e+00 3.44827586e-03
# 1.20689655e+00]]
lp_solver.get_z_value()
# 67240.301724137928
lp_solver.get_objective_variables()
# [1.96875, 5.0, 6.2068965517241379, 0]
import gachon_autograder_client as g_autograder
THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "gachon_lp_solver.py"
g_autograder.submit_assignment(THE_TEMLABIO_ID , PASSWORD, ASSIGNMENT_FILE_NAME)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1 Smoothing operator
Step2: 1.2 Interpolation Operator
Step3: 1.3 Restriction
Step4: 1.4 Bottom Solver
Step5: That's it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up an RHS for a problem with homogeneous Dirichlet BCs on the unit square.
Step6: Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two determined by the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.
Step7: Now we can call the solver
Step8: The true error is the difference between the approximation and the analytical solution. It is largely the discretization error. This is what would be present when we solve the discrete equation with a direct/exact method like Gaussian elimination. We see that the true error stops reducing at the 5th cycle; the approximation is not getting any better after this point, so we can stop after 5 cycles. But in general we don't know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress, the floating-point round-off limit is reached and the residual also stops decreasing.
Step9: Let's call the FMG solver for the same problem
Step10: It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$, where $N$ is the number of unknowns. Exact methods (Gaussian elimination, LU decomposition) are typically $\mathcal{O}(N^3)$.
Step11: Let us define the Poisson matrix also as a LinearOperator
Step12: The nested function is required because "matvec" in LinearOperator takes only one argument (the vector), but we require the grid details and boundary condition information to create the Poisson matrix. Now we will use these to solve a problem. Unlike earlier, where we used an analytical solution and RHS, we will start with a random vector as our exact solution and multiply it with the Poisson matrix to get the RHS vector for the problem. There is no analytical equation associated with the matrix equation.
Step13: Let's look at what happens with and without the preconditioner.
Step14: Without the preconditioner ~150 iterations were needed, whereas with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
def Jacrelax(nx,ny,u,f,iters=1):
'''
under-relaxed Jacobi iteration
'''
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2; Ay=1.0/dy**2
Ap=1.0/(2.0*(Ax+Ay))
#Dirichlet BC
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
for it in range(iters):
u[1:nx+1,1:ny+1] = 0.8*Ap*(Ax*(u[2:nx+2,1:ny+1] + u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2] + u[1:nx+1,0:ny])
- f[1:nx+1,1:ny+1])+0.2*u[1:nx+1,1:ny+1]
#Dirichlet BC
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
res=np.zeros([nx+2,ny+2])
res[1:nx+1,1:ny+1]=f[1:nx+1,1:ny+1]-(( Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
- 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1]))
return u,res
def prolong(nx,ny,v):
'''
interpolate 'v' to the fine grid
'''
v_f=np.zeros([2*nx+2,2*ny+2])
v_f[1:2*nx:2 ,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[0:nx ,0:ny ]
v_f[2:2*nx+1:2,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[2:nx+2,0:ny ]
v_f[1:2*nx:2 ,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[0:nx ,2:ny+2]
v_f[2:2*nx+1:2,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[2:nx+2,2:ny+2]
return v_f
def restrict(nx,ny,v):
'''
restrict 'v' to the coarser grid
'''
v_c=np.zeros([nx+2,ny+2])
v_c[1:nx+1,1:ny+1]=0.25*(v[1:2*nx:2,1:2*ny:2]+v[1:2*nx:2,2:2*ny+1:2]+v[2:2*nx+1:2,1:2*ny:2]+v[2:2*nx+1:2,2:2*ny+1:2])
return v_c
def V_cycle(nx,ny,num_levels,u,f,level=1):
if(level==num_levels):#bottom solve
u,res=Jacrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Relax Au=f on this grid
u,res=Jacrelax(nx,ny,u,f,iters=1)
#Step 2: Restrict residual to coarse grid
res_c=restrict(nx//2,ny//2,res)
#Step 3:Solve A e_c=res_c on the coarse grid. (Recursively)
e_c=np.zeros_like(res_c)
e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1)
#Step 4: Interpolate(prolong) e_c to fine grid and add to u
u+=prolong(nx//2,ny//2,e_c)
#Step 5: Relax Au=f on this grid
u,res=Jacrelax(nx,ny,u,f,iters=1)
return u,res
#analytical solution
def Uann(x,y):
return (x**3-x)*(y**3-y)
#RHS corresponding to above
def source(x,y):
return 6*x*y*(x**2+ y**2 - 2)
#input
max_cycles = 30
nlevels = 6
NX = 2*2**(nlevels-1)
NY = 2*2**(nlevels-1)
tol = 1e-15
#the grid has one layer of ghost cells
uann=np.zeros([NX+2,NY+2])#analytical solution
u =np.zeros([NX+2,NY+2])#approximation
f =np.zeros([NX+2,NY+2])#RHS
#calculate the RHS and exact solution
DX=1.0/NX
DY=1.0/NY
xc=np.linspace(0.5*DX,1-0.5*DX,NX)
yc=np.linspace(0.5*DY,1-0.5*DY,NY)
XX,YY=np.meshgrid(xc,yc,indexing='ij')
uann[1:NX+1,1:NY+1]=Uann(XX,YY)
f[1:NX+1,1:NY+1] =source(XX,YY)
print('mgd2d.py solver:')
print('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels)
for it in range(1,max_cycles+1):
u,res=V_cycle(NX,NY,nlevels,u,f)
rtol=np.max(np.max(np.abs(res)))
if(rtol<tol):
break
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print(' cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error))))
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
def FMG(nx,ny,num_levels,f,nv=1,level=1):
if(level==num_levels):#bottom solve
u=np.zeros([nx+2,ny+2])
u,res=Jacrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Restrict the rhs to a coarse grid
f_c=restrict(nx//2,ny//2,f)
#Step 2: Solve the coarse grid problem using FMG
u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1)
#Step 3: Interpolate u_c to the fine grid
u=prolong(nx//2,ny//2,u_c)
#step 4: Execute 'nv' V-cycles
for _ in range(nv):
u,res=V_cycle(nx,ny,num_levels-level,u,f)
return u,res
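# Run a single FMG pass (with nv=1 V-cycle per level) on the same problem and report the residual and true error.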
print('mgd2d.py FMG solver:')
print('NX:',NX,', NY:',NY,', levels: ',nlevels)
u,res=FMG(NX,NY,nlevels,f,nv=1)
rtol=np.max(np.max(np.abs(res)))
print(' FMG L_inf(res.)= ',rtol)
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
from scipy.sparse.linalg import LinearOperator,bicgstab,cg
def MGVP(nx,ny,num_levels):
'''
Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can
be passed to Krylov solvers as a preconditioner.
'''
def pc_fn(v):
u =np.zeros([nx+2,ny+2])
f =np.zeros([nx+2,ny+2])
        f[1:nx+1,1:ny+1] =v.reshape([nx,ny]) #in practice this copying can be avoided
#perform one V cycle
u,res=V_cycle(nx,ny,num_levels,u,f)
return u[1:nx+1,1:ny+1].reshape(v.shape)
M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn)
return M
def Laplace(nx,ny):
'''
Action of the Laplace matrix on a vector v
'''
def mv(v):
u =np.zeros([nx+2,ny+2])
u[1:nx+1,1:ny+1]=v.reshape([nx,ny])
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2; Ay=1.0/dy**2
#BCs. Needs to be generalized!
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
ut = (Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
- 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1])
return ut.reshape(v.shape)
A = LinearOperator((nx*ny,nx*ny), matvec=mv)
return A
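# Helper that wraps a scipy Krylov solver (cg or bicgstab below) and counts its iterations through the callback.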
def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None):
num_iters = 0
def callback(xk):
nonlocal num_iters
num_iters+=1
x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M)
return x,status,num_iters
A = Laplace(NX,NY)
#Exact solution and RHS
uex=np.random.rand(NX*NY,1)
b=A*uex
#Multigrid Preconditioner
M=MGVP(NX,NY,nlevels)
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Querying for potential hosts
Step2: The first question is
Step3: That looks right to me. I think this is RA and DEC, but I don't think I need to care as long as they're in the right order — the query server is pretty flexible with formats.
Step4: This seems pretty good! We can even get the fluxes for these and have a look at them.
Step5: Now, I want to apply this over the whole ATLAS dataset. I've frozen the ATLAS consensuses in a database, but we'll also need the relevant catalogues - I can't figure out how to download the whole catalogues, so I'll only run this over a hundred or so subjects so I don't hammer the server too hard.
| <ASSISTANT_TASK:>
Python Code:
import collections
import io
from pprint import pprint
import sqlite3
import sys
import warnings
import astropy.io.votable
import astropy.wcs
import matplotlib.pyplot
import numpy
import requests
import requests_cache
%matplotlib inline
sys.path.insert(1, '..')
import crowdastro.data
import crowdastro.labels
import crowdastro.rgz_analysis.consensus
import crowdastro.show
warnings.simplefilter('ignore', UserWarning) # astropy always raises warnings on Windows.
requests_cache.install_cache(cache_name='gator_cache', backend='sqlite', expire_after=None)
subject = crowdastro.data.db.radio_subjects.find_one({'metadata.survey': 'atlas', 'state': 'complete',
'zooniverse_id': 'ARG0003r18'})
crowdastro.show.subject(subject)
matplotlib.pyplot.show()
pprint(subject['coords'])
query = {
'catalog': 'chandra_cat_f05',
'spatial': 'box',
'objstr': '{} {}'.format(*subject['coords']),
'size': '120',
'outfmt': '3',
}
url = 'http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-query'
r = requests.get(url, params=query)
votable = astropy.io.votable.parse_single_table(io.BytesIO(r.content), pedantic=False)
# Pull out the RAs and DECs
ras = votable.array['ra']
decs = votable.array['dec']
# We need to convert to pixels. We can do this with astropy.wcs.
fits = crowdastro.data.get_ir_fits(subject)
wcs = astropy.wcs.WCS(fits.header)
xs, ys = wcs.all_world2pix(ras, decs, 0)
# Let's also find the consensus hosts to compare.
consensus_xs = []
consensus_ys = []
consensus = crowdastro.rgz_analysis.consensus.consensus(subject['zooniverse_id'])
for answer in consensus['answer'].values():
consensus_xs.append(answer['ir_peak'][0] * 201 / 500 )
consensus_ys.append(201 - answer['ir_peak'][1] * 201 / 500)
# Plot them!
crowdastro.show.ir(subject)
matplotlib.pyplot.scatter(xs, ys, c='r', marker='+')
matplotlib.pyplot.scatter(consensus_xs, consensus_ys, c='cyan', marker='x')
matplotlib.pyplot.show()
matplotlib.pyplot.figure(figsize=(10, 10))
matplotlib.rcParams.update({'font.size': 22})
xs = votable.array['flux_ap2_36']
ys = votable.array['flux_ap2_58']
matplotlib.pyplot.scatter(xs, ys)
matplotlib.pyplot.xlabel('$S_{3.6 \mathrm{μm}}$')
matplotlib.pyplot.ylabel('$S_{5.8 \mathrm{μm}}$')
matplotlib.pyplot.show()
fluxes = []
all_fluxes = []
conn = sqlite3.connect('../crowdastro-data/processed.db')
def append_fluxes(subject):
if subject['metadata']['source'].startswith('C'):
# CDFS
catalog = 'chandra_cat_f05'
else:
# ELAIS-S1
catalog = 'elaiss1_cat_f05'
query = {
'catalog': catalog,
'spatial': 'box',
'objstr': '{} {}'.format(*subject['coords']),
'size': '120',
'outfmt': '3',
}
url = 'http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-query'
r = requests.get(url, params=query)
votable = astropy.io.votable.parse_single_table(io.BytesIO(r.content), pedantic=False)
ras = votable.array['ra']
decs = votable.array['dec']
# Convert to px.
fits = crowdastro.data.get_ir_fits(subject)
wcs = astropy.wcs.WCS(fits.header)
xs, ys = wcs.all_world2pix(ras, decs, 0)
consensus_xs = []
consensus_ys = []
consensus = crowdastro.labels.get_subject_consensus(subject, conn, 'classifications')
for x, y in consensus.values():
consensus_xs.append(x)
consensus_ys.append(y)
for cx, cy in zip(consensus_xs, consensus_ys):
if cx is None or cy is None:
continue
closest = None
min_distance = float('inf')
for i, x, y in zip(range(len(xs)), xs, ys):
dist = numpy.hypot(x - cx, y - cy)
if dist < min_distance:
closest = (x, y)
min_distance = dist
flux_36 = votable.array['flux_ap2_36'][i]
flux_58 = votable.array['flux_ap2_58'][i]
fluxes.append((flux_36, flux_58))
for flux_36, flux_58 in zip(votable.array['flux_ap2_36'], votable.array['flux_ap2_58']):
all_fluxes.append((flux_36, flux_58))
for subject in crowdastro.data.get_all_subjects(atlas=True).limit(100):
append_fluxes(subject)
all_fluxes = numpy.array(all_fluxes)
fluxes = numpy.array(fluxes)
matplotlib.pyplot.loglog(all_fluxes[:, 0], all_fluxes[:, 1], c='r', marker='+', linestyle='None')
matplotlib.pyplot.loglog(fluxes[:, 0], fluxes[:, 1], c='g', marker='*', linestyle='None')
matplotlib.pyplot.show()
fluxes
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction for the two-level system
Step2: The emission can be decomposed into a so-called coherent and incoherent portion. The coherent portion is simply due to the classical mean of the dipole moment, i.e. $I_\mathrm{c} = \langle \sigma^\dagger \rangle \langle \sigma \rangle$.
Step3: Visualize the incoherent and coherent emissions
Step4: Introduction for the Jaynes-Cummings system
Step5: Effective polaritonic two-level system
Step6: Define reference system for homodyne interference
Step7: Calculate JC emission
Step8: Visualize the emitted flux with and without interference
Step9: Calculate coherent/incoherent portions of emission from JC system and its $g^{(2)}(0)$
Step10: Visualize the results
Step11: Calculate homodyned JC emission
Step12: Calculate the results
Step13: Second-order coherence with delay
Step14: Visualize the comparison to TLS correlations
Step15: Versions
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
# define system operators
gamma = 1 # decay rate
sm_TLS = destroy(2) # dipole operator
c_op_TLS = [np.sqrt(gamma)*sm_TLS] # represents spontaneous emission
# choose range of driving strengths to simulate
Om_list_TLS = gamma*np.logspace(-2, 1, 300)
# calculate steady-state density matricies for the driving strengths
rho_ss_TLS = []
for Om in Om_list_TLS:
H_TLS = Om * (sm_TLS + sm_TLS.dag())
rho_ss_TLS.append(steadystate(H_TLS, c_op_TLS))
# decompose the emitted light into the coherent and incoherent
# portions
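# I_c = |<sigma>|^2 and I_inc = <sigma^dagger sigma> - I_c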
I_c_TLS = expect(sm_TLS.dag(), rho_ss_TLS)*expect(sm_TLS, rho_ss_TLS)
I_inc_TLS = expect(sm_TLS.dag()*sm_TLS, rho_ss_TLS) - I_c_TLS
plt.semilogx(Om_list_TLS, abs(I_c_TLS),
label='TLS $I_\mathrm{c}$')
plt.semilogx(Om_list_TLS, abs(I_inc_TLS),
'r', label='TLS $I_\mathrm{inc}$')
plt.xlabel('Driving strength [$\Gamma$]')
plt.ylabel('Normalized flux [$\Gamma$]')
plt.legend(loc=2);
# truncate size of cavity's Fock space
N = 15
# setup system operators
sm = tensor(destroy(2), qeye(N))
a = tensor(qeye(2), destroy(N))
# define system parameters, barely into strong coupling regime
kappa = 1
g = 0.6 * kappa
detuning = 3 * g # cavity-atom detuning
delta_s = detuning/2 + np.sqrt(detuning ** 2 / 4 + g ** 2)
# we only consider cavities in the good-emitter limit, where
# the atomic decay is irrelevant
c_op = [np.sqrt(kappa)*a]
effective_gamma = kappa / 2 + 2 * np.imag(
np.sqrt(g ** 2 - (kappa / 4 + 1j * detuning / 2) ** 2))
# set driving strength based on the effective polariton's
# emission rate (driving strength goes as sqrt{gamma})
Om = 0.4 * np.sqrt(effective_gamma)
# reference cavity operator
a_r = destroy(N)
c_op_r = [np.sqrt(kappa)*a_r]
# reference cavity Hamiltonian, no atom coupling
H_c = Om * (a_r + a_r.dag()) + delta_s * a_r.dag() * a_r
# solve for coherent state amplitude at driving strength Om
rho_ss_c = steadystate(H_c, c_op_r)
alpha = -expect(rho_ss_c, a_r)
alpha_c = alpha.conjugate()
def calculate_rho_ss(delta_scan):
H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \
delta_scan * (
sm.dag() * sm + a.dag() * a) - detuning * sm.dag() * sm
return steadystate(H, c_op)
delta_list = np.linspace(-6 * g, 9 * g, 200)
rho_ss = parfor(calculate_rho_ss, delta_list)
# calculate JC emission
I = expect(a.dag()*a, rho_ss)
# calculate JC emission homodyned with optimal state beta
I_int = expect((a.dag() + alpha_c) * (a + alpha), rho_ss)
plt.figure(figsize=(8,5))
plt.plot(delta_list/g, I/effective_gamma,
'k', linestyle='dashed', label='JC')
plt.plot(delta_list/g, I_int/effective_gamma,
'blueviolet', label='JC w/ interference')
plt.vlines(delta_s/g, 0, 0.7, 'gray')
plt.xlim(-6, 9)
plt.ylim(0, 0.7)
plt.xlabel('Detuning [g]')
plt.ylabel('Normalized flux [$\Gamma_\mathrm{eff}$]')
plt.legend(loc=1);
Om_list = kappa*np.logspace(-2, 1, 300)*np.sqrt(effective_gamma)
def calculate_rho_ss(Om):
H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \
delta_s*(sm.dag()*sm + a.dag()*a) - detuning*sm.dag()*sm
return steadystate(H, c_op)
rho_ss = parfor(calculate_rho_ss, Om_list)
# decompose emission again into incoherent and coherent portions
I_c = expect(a.dag(), rho_ss)*expect(a, rho_ss)
I_inc = expect(a.dag()*a, rho_ss) - I_c
# additionally calculate g^(2)(0)
g20 = expect(a.dag()*a.dag()*a*a, rho_ss)/expect(a.dag()*a, rho_ss)**2
plt.figure(figsize=(8,8))
plt.subplot(211)
plt.semilogx(Om_list/np.sqrt(effective_gamma), abs(I_c)/kappa,
'k', linestyle='dashed', label='JC $I_\mathrm{c}$')
plt.semilogx(Om_list/np.sqrt(effective_gamma), abs(I_inc)/kappa,
'r', linestyle='dashed', label='JC $I_\mathrm{inc}$')
plt.xlabel(r'Driving strength [$\Gamma_\mathrm{eff}$]')
plt.ylabel('Normalized Flux [$\kappa$]')
plt.legend(loc=2)
plt.subplot(212)
plt.loglog(Om_list/np.sqrt(effective_gamma), g20,
'k', linestyle='dashed')
lim = (1e-4, 2e0)
plt.ylim(lim)
plt.xlabel(r'Driving strength [$\Gamma_\mathrm{eff}$]')
plt.ylabel('$g^{(2)}(0)$');
def calculate_rho_ss_c(Om):
H_c = Om * (a_r + a_r.dag()) + delta_s * a_r.dag() * a_r
return steadystate(H_c, c_op_r)
rho_ss_c = parfor(calculate_rho_ss_c, Om_list)
# calculate list of interference values for all driving strengths
alpha_list = -expect(rho_ss_c, a_r)
alpha_c_list = alpha_list.conjugate()
# decompose emission for all driving strengths
g20_int = []
I_c_int = []
I_inc_int = []
for i, rho in enumerate(rho_ss):
g20_int.append(
expect((a.dag() + alpha_c_list[i]) *
(a.dag() + alpha_c_list[i]) *
(a + alpha_list[i]) *
(a + alpha_list[i]),
rho) /
expect((a.dag() + alpha_c_list[i]) *
(a + alpha_list[i]),
rho)**2
)
I_c_int.append(expect(a.dag() + alpha_c_list[i], rho) *
expect(a + alpha_list[i], rho))
I_inc_int.append(expect(
(a.dag() + alpha_c_list[i]) *
(a + alpha_list[i]), rho) - I_c_int[-1])
plt.figure(figsize=(8,8))
plt.subplot(211)
plt.semilogx(Om_list_TLS, abs(I_c_TLS),
linestyle='dashed', label='TLS $I_\mathrm{c}$')
plt.semilogx(Om_list_TLS, abs(I_inc_TLS), 'r',
linestyle='dashed', label='TLS $I_\mathrm{inc}$')
plt.semilogx(Om_list/np.sqrt(effective_gamma),
abs(I_c/effective_gamma), 'k', linestyle='dashed',
label='JC $I_\mathrm{c}$')
plt.semilogx(Om_list/np.sqrt(effective_gamma),
abs(I_inc/effective_gamma),
'r', label='JC $I_\mathrm{inc}$')
plt.semilogx(Om_list/np.sqrt(effective_gamma),
abs(I_c_int/effective_gamma),
'b', label='JC w/ homodyne $I_\mathrm{c}$')
plt.semilogx(Om_list/np.sqrt(effective_gamma),
abs(I_inc_int/effective_gamma),
'r')
plt.ylim(5e-4, 0.6)
plt.xlabel(r'Driving strength [$\Gamma_\mathrm{eff}$]')
plt.ylabel('Normalized flux [$\Gamma_\mathrm{eff}$]')
plt.legend(loc=2)
plt.subplot(212)
plt.loglog(Om_list/np.sqrt(effective_gamma), g20,
'k', linestyle='dashed', label='JC')
plt.loglog(Om_list/np.sqrt(effective_gamma), g20_int,
'blueviolet', label='JC w/ interference')
plt.ylim(lim)
plt.xlabel(r'Driving strength [$\Gamma_\mathrm{eff}$]')
plt.ylabel(r'$g^{(2)}(0)$')
plt.legend(loc=4);
# first calculate the steady state
H = Om * (a + a.dag()) + g * (sm.dag() * a + sm * a.dag()) + \
delta_s * (sm.dag() * sm + a.dag() * a) - \
detuning * sm.dag() * sm
rho0 = steadystate(H, c_op)
taulist = np.linspace(0, 5/effective_gamma, 1000)
# next evolve the states according the quantum regression theorem
# ...with the b operator
corr_vec_int = expect(
(a.dag() + alpha.conjugate()) * (a + alpha),
mesolve(
H, (a + alpha) * rho0 * (a.dag() + alpha.conjugate()),
taulist, c_op, [],
options=Options(atol=1e-13, rtol=1e-11)
).states
)
n_int = expect(rho0, (a.dag() + alpha.conjugate()) * (a + alpha))
# ...with the a operator
corr_vec = expect(
a.dag() * a ,
mesolve(
H, a * rho0 * a.dag(),
taulist, c_op, [],
options=Options(atol=1e-12, rtol=1e-10)
).states
)
n = expect(rho0, a.dag() * a)
# ...perform the same for the TLS comparison
H_TLS = Om*(sm_TLS + sm_TLS.dag())*np.sqrt(effective_gamma)
c_ops_TLS = [sm_TLS*np.sqrt(effective_gamma)]
rho0_TLS = steadystate(H_TLS, c_ops_TLS)
corr_vec_TLS = expect(
sm_TLS.dag() * sm_TLS,
mesolve(
H_TLS, sm_TLS * rho0_TLS * sm_TLS.dag(),
taulist, c_ops_TLS, []
).states
)
n_TLS = expect(rho0_TLS, sm_TLS.dag() * sm_TLS)
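# Optional sanity check (sketch, not in the original notebook): the zero-delay value
# of each normalised correlation should reproduce the corresponding g2(0).
print("g2(0) JC:               ", corr_vec[0] / n**2)
print("g2(0) JC w/ interference:", corr_vec_int[0] / n_int**2)
print("g2(0) TLS:               ", corr_vec_TLS[0] / n_TLS**2)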
plt.figure(figsize=(8,5))
l1, = plt.plot(taulist*effective_gamma, corr_vec_TLS/n_TLS**2,
'blueviolet', linestyle='dotted', label='TLS')
plt.plot(taulist*effective_gamma, corr_vec/n**2,
'k', linestyle='dashed', label='JC')
plt.plot(taulist*effective_gamma, corr_vec_int/n_int**2,
'blueviolet', label='JC w/ interference')
plt.xlabel('$\\tau$ [$1/\Gamma_\mathrm{eff}$]')
plt.ylabel('$g^{(2)}(\\tau)$')
plt.legend(loc=2);
from qutip.ipynbtools import version_table
version_table()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The numbers show a good correlation coefficient between the true values and the predicted ones
Step2: The error distribution is roughly Gaussian, but the test dataset is rather small
Step3: The conclusion is that even if CPU power is independently correlated with price, the overall effect on price seems to be driven by the manufacturer and by standout features such as an iris scanner or a Tango sensor, which allows augmented reality to work better
| <ASSISTANT_TASK:>
Python Code:
def getModel(alpha):
return Ridge(alpha=alpha, fit_intercept=True, normalize=False, copy_X=True, random_state=random_state)
model = getModel(alpha=0.01)
cvs = cross_val_score(estimator=model, X=XX, y=yy, cv=10)
cvs
cv_score = np.mean(cvs)
cv_score
def gpOptimization(n_jobs=n_jobs, cv=10, verbose=True):
def objective(params): # Here we define the metric we want to minimise
(alpha) = params
if verbose:
print "current L2: {}".format(alpha)
start = time()
model = getModel(alpha=alpha)
cv_results = cross_val_score(estimator=model, X=XX, y=yy, cv=cv)
cv_score = np.mean(cv_results)
if verbose:
print "current CV score: {}".format(cv_score)
print "time elapsed: {}".format(time() - start)
print
# We want to maximise accuracy score, i.e. minimise minus average accuracy
return -cv_score
alpha_space = Real(1e-4, 1e1, prior='log-uniform') #uniform or log-uniform
return gp_minimize(
func=objective, # function that we wish to minimise
dimensions=[alpha_space], #the search space for the hyper-parameters
#x0=x0, #inital values for the hyper-parameters
n_calls=30, #number of times the function will be evaluated
random_state=random_state, #random seed
n_random_starts=5, #before we start modelling the optimised function with a GP Regression
#model, we want to try a few random choices for the hyper-parameters.
kappa=1.9, #trade-off between exploration vs. exploitation.
n_jobs=n_jobs
)
res_gp = gpOptimization()
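# Quick look at the optimiser output (sketch; assumes scikit-optimize's OptimizeResult
# interface): the best L2 strength found and the corresponding CV score.
print("best alpha:", res_gp.x[0])
print("best CV score:", -res_gp.fun)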
fig = plt.figure(figsize=(12,6))
plot_convergence(res_gp)
plt.grid()
plt.show()
plt.figure(figsize=(12,6))
plt.plot(res_gp.func_vals)
plt.scatter(range(len(res_gp.func_vals)), res_gp.func_vals)
plt.ylabel(r'$f(x)$')
plt.xlabel('Number of calls $n$')
plt.xlim([0, len(res_gp.func_vals)])
plt.show()
best_alpha = res_gp.x[0]
best_alpha
model = getModel(alpha=best_alpha)
model.fit(XX, yy)
model.score(XX, yy)
model.score(XX_test, yy_test)
#r2_score(yy_test, model.predict(XX_test)) #exactly same as above
def fit_scatter(y_true, y_pred, x_label = 'Measured', y_label = 'Predicted'):
assert y_true.shape == y_pred.shape
fig, ax = plt.subplots()
ax.scatter(y_true, y_pred)
ax.plot([y_true.min(), y_true.max()], [y_true.min(), y_true.max()], 'k--', lw=4)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
return fig, ax
preds_test = model.predict(XX_test)
fig = plt.figure()
#plt.figure(figsize=(17,7))
fig, ax = fit_scatter(y_true=yy_test, y_pred=preds_test,
x_label='Measured Price', y_label='Predicted Price')
#ax.set_figsize((17,7))
fig.set_figheight(12)
fig.set_figwidth(12)
plt.show()
#fig, ax = plt.subplots(figsize=(12,6))
#ax.set_xlabel('price error')
plt.figure(figsize=(12,6))
sns.distplot( yy_test - preds_test, rug=True, axlabel='Price Error')
plt.show()
coefdic = dict(zip(XX.columns, np.absolute(model.coef_) ))
from collections import OrderedDict
#weights_sorted = sorted(coefdic, key=coefdic.get)[::-1]
weights_sorted = OrderedDict(sorted(coefdic.items(), key=lambda x: x[1]))
weights_sorted.keys()[::-1]
plt.rc('ytick', labelsize=20)
def weight_plot(weights_dic, step=1):
fig = plt.figure(figsize=(14, 17))
xvalues = np.arange(0, len(weights_dic)) + 1
#plt.bar(left = weights_dic.keys(), height=weights_dic.keys()) #, align='center')
#plt.xticks(xvalues[::step], weights_dic.keys()[::step])
sns.barplot(weights_dic.values(), weights_dic.keys())
plt.show()
weight_plot(weights_dic=weights_sorted, step=15)
strong_weights = OrderedDict(weights_sorted.items()[::-1][:40])
weight_plot(weights_dic=strong_weights)
df.shape
df_all = pd.concat((df, df_test), axis=0)
XX_all = df_all.drop(labels=SkroutzMobile.PRICE_COLS, axis=1)
yy_all = df_all[SkroutzMobile.TARGET_COL]
XX_all.shape, yy_all.shape
preds_all = model.predict(XX_all)
len(preds_all)
deal = preds_all - yy_all
deal.sample(5, random_state=random_state)
sorted_deal = deal.sort_values(ascending=False)
sorted_deal[:5]
orig_df = pd.read_csv('../mobiles_02_no_outliers.csv', index_col=0, encoding='utf-8', quoting=csv.QUOTE_ALL)
deals_display = pd.concat( (sorted_deal, orig_df['display_name']), axis=1).sort_values(
by='price_min', ascending=False) #concat here works as expected
#rename column
deals_display = deals_display.rename_axis({"price_min": "deal_metric"}, axis="columns")
deals_display
deals_display_to_render = deals_display[:10].append(deals_display[-10:])
fig = plt.figure(figsize=(14, 17))
sns.barplot(deals_display_to_render['deal_metric'], deals_display_to_render['display_name'])
plt.show()
#https://awwapp.com/b/uh8hfmo8g/
#deals_display.to_csv('../mobiles_04_deals_display.csv', encoding='utf-8', quoting=csv.QUOTE_ALL)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The linear model above is as follows.
Step2: This time we generate sample data with n_features, i.e. two independent variables, and draw a scatter plot as follows. The value of the dependent variable is shown by the shading of the points.
Step3: If only one independent variable actually affects the y value, it is used as follows.
Step4: If the two independent variables are correlated, the data are generated as follows, and this can also be seen in the scatter plot.
| <ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import make_regression
X, y, c = make_regression(n_samples=10, n_features=1, bias=0, noise=0, coef=True, random_state=0)
print("X\n", X)
print("y\n", y)
print("c\n", c)
plt.scatter(X, y, s=100)
plt.show()
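# Small check (not in the original text): with bias=0 and noise=0 the targets are
# exactly the linear model X @ c returned via coef=True.
print(abs(X.dot(c) - y).max())  # ~0.0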
X, y, c = make_regression(n_samples=50, n_features=1, bias=100, noise=10, coef=True, random_state=0)
plt.scatter(X, y, s=100)
plt.show()
X, y, c = make_regression(n_samples=300, n_features=2, noise=10, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
X, y, c = make_regression(n_samples=300, n_features=2, n_informative=1, noise=0, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
X, y, c = make_regression(n_samples=300, n_features=2, effective_rank=1, noise=0, tail_strength=0, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
X, y, c = make_regression(n_samples=300, n_features=2, effective_rank=1, noise=0, tail_strength=1, coef=True, random_state=0)
plt.scatter(X[:,0], X[:,1], c=y, s=100)
plt.xlabel("x1")
plt.ylabel("x2")
plt.axis("equal")
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Live Predictions
Step3: TensorFlow.js
Step4: Convert Model
Step5: Predict in JS
Step6: 2. A static web server
Step7: 3. Port forwarding
| <ASSISTANT_TASK:>
Python Code:
# In Jupyter, you would need to install TF 2 via !pip.
%tensorflow_version 2.x
## Load models from Drive (Colab only).
models_path = '/content/gdrive/My Drive/amld_data/models'
data_path = '/content/gdrive/My Drive/amld_data/zoo_img'
## Or load models from local machine.
# models_path = './amld_models'
# data_path = './amld_data'
## Or load models from GCS (Colab only).
# models_path = 'gs://amld-datasets/models'
# data_path = 'gs://amld-datasets/zoo_img_small'
if models_path.startswith('/content/gdrive/'):
from google.colab import drive
drive.mount('/content/gdrive')
if models_path.startswith('gs://'):
# Keras doesn't read directly from GCS -> download.
from google.colab import auth
import os
os.makedirs('./amld_models', exist_ok=True)
auth.authenticate_user()
!gsutil cp -r "$models_path"/\* ./amld_models
models_path = './amld_models'
!ls -lh "$models_path"
import json, os
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
# Tested with TensorFlow 2.1.0
print('version={}, CUDA={}, GPU={}, TPU={}'.format(
tf.__version__, tf.test.is_built_with_cuda(),
# GPU attached? Note that you can "Runtime/Change runtime type..." in Colab.
len(tf.config.list_physical_devices('GPU')) > 0,
# TPU accessible? (only works on Colab)
'COLAB_TPU_ADDR' in os.environ))
# Load the label names from the dataset.
labels = [label.strip() for label in
tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('\n'.join(['%2d: %s' % (i, label) for i, label in enumerate(labels)]))
# Load model from 2_keras.ipynb
model = tf.keras.models.load_model(os.path.join(models_path, 'linear.h5'))
model.summary()
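# Quick sanity check (sketch): the loaded classifier should accept one blank 64x64
# drawing and return one probability per label.
blank = np.zeros((1, 64, 64))
print(model.predict(blank).shape, 'vs', len(labels), 'labels')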
from google.colab import output
import IPython
def predict(img_64):
    """Get Predictions for provided image.

    Args:
      img_64: Raw image data (dtype int).
    Returns:
      A JSON object with the value for `result` being a text representation of the
      top predictions.
    """
# Reshape image into batch with single image (extra dimension "1").
preds = model.predict(np.array(img_64, float).reshape([1, 64, 64]))
# Get top three predictions (reverse argsort).
top3 = (-preds[0]).argsort()[:3]
# Return both probability and prediction label name.
result = '\n'.join(['%.3f: %s' % (preds[0, i], labels[i]) for i in top3])
return IPython.display.JSON(dict(result=result))
output.register_callback('amld.predict', predict)
%%html
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<canvas width="256" height="256" id="canvas" style="border:1px solid black"></canvas><br />
<button id="clear">clear</button><br />
<pre id="output"></pre>
<script>
let upscaleFactor = 4, halfPenSize = 2
let canvas = document.getElementById('canvas')
let output = document.getElementById('output')
let ctx = canvas.getContext('2d')
let img_64 = new Uint8Array(64*64)
let dragging = false
let timeout
let predict = () => {
google.colab.kernel.invokeFunction('amld.predict', [Array.from(img_64)], {}).then(
obj => output.textContent = obj.data['application/json'].result)
}
const getPos = e => {
let x = e.offsetX, y = e.offsetY
if (e.touches) {
const rect = canvas.getBoundingClientRect()
x = e.touches[0].clientX - rect.left
y = e.touches[0].clientY - rect.left
}
return {
x: Math.floor((x - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
y: Math.floor((y - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
}
}
const handler = e => {
const { x, y } = getPos(e)
ctx.fillStyle = 'black'
ctx.fillRect(x*upscaleFactor, y*upscaleFactor,
2*halfPenSize*upscaleFactor, 2*halfPenSize*upscaleFactor)
for (let yy = y - halfPenSize; yy < y + halfPenSize; yy++)
for (let xx = x - halfPenSize; xx < x + halfPenSize; xx++)
img_64[64*Math.min(63, Math.max(0, yy)) + Math.min(63, Math.max(0, xx))] = 1
clearTimeout(timeout)
timeout = setTimeout(predict, 500)
}
canvas.addEventListener('touchstart', e => {dragging=true; handler(e)})
canvas.addEventListener('touchmove', e => {e.preventDefault(); dragging && handler(e)})
canvas.addEventListener('touchend', () => dragging=false)
canvas.addEventListener('mousedown', e => {dragging=true; handler(e)})
canvas.addEventListener('mousemove', e => {dragging && handler(e)})
canvas.addEventListener('mouseup', () => dragging=false)
canvas.addEventListener('mouseleave', () => dragging=false)
document.getElementById('clear').addEventListener('click', () => {
ctx.fillStyle = 'white'
ctx.fillRect(0, 0, 64*upscaleFactor, 64*upscaleFactor)
output.textContent = ''
img_64 = new Uint8Array(64*64)
})
</script>
# YOUR ACTION REQUIRED:
# Load another model from 2_keras.ipynb and observe:
# - Do you get better/worse predictions?
# - Do you feel a difference in latency?
# - Can you figure out by how the model "thinks" by providing similar images
# that yield different predictions, or different images that yield the same
# picture?
#--snip
model = tf.keras.models.load_model(os.path.join(models_path, 'conv.h5'))
model.summary()
# Getting the data of a tensor in TensorFlow.js: Use the async .data() method
# to show the output in the "output" element.
# See output in javascript console (e.g. Chrome developer tools).
# For convenience, you can also use the following Codepen:
# https://codepen.io/amld-tensorflow-basics/pen/OJPagyN
%%html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
<pre id="output"></pre>
<script>
let output = document.getElementById('output')
let t = tf.tensor([1, 2, 3])
output.textContent = t
// YOUR ACTION REQUIRED:
// Use "t.data()" to append the tensor's data values to "output.textContent".
//--snip
t.data().then(t_data => t_data.forEach(
(value, idx) => output.textContent += `\n${idx}: ${value}`
))
# Get top 3 predictions using TensorFlow Eager.
preds = tf.constant([0.1, 0.5, 0.2, 0.0])
topk = tf.math.top_k(preds, 3)
for idx, value in zip(topk.indices.numpy(), topk.values.numpy()):
print('idx', idx, 'value', value)
# Implement the same top 3 functionality in TensorFlow.js, showing the output
# in the "output" element.
# See https://js.tensorflow.org/api/latest/index.html#topk
%%html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
<pre id="output"></pre>
<script>
let output = document.getElementById('output')
let preds = tf.tensor([0.1, 0.5, 0.2, 0.0])
// YOUR ACTION REQUIRED:
// Use tf.topk() to get top 3 predictions in "preds" and append both the
// index and the value of these predictions to "output".
//--snip
const { indices, values } = tf.topk(preds, 3)
Promise.all([indices.data(), values.data()]).then(indices_values => {
const [ indices, values ] = indices_values
indices.forEach((idx, i) => {
output.textContent += `idx ${idx} value ${values[i]}\n`
})
})
# (Never mind the incompatible package complaints - it just works fine.)
!pip install -q tensorflowjs
# Specify directory where to store model.
tfjs_model_path = './tfjs/model'
!mkdir -p "$tfjs_model_path"
import tensorflowjs as tfjs
# Convert model
tf.keras.backend.clear_session() # Clean up variable names before exporting.
# (You can safely ignore the H5pyDeprecationWarning here...)
model = tf.keras.models.load_model(os.path.join(models_path, 'linear.h5'))
tfjs.converters.save_keras_model(model, tfjs_model_path)
!ls -lh "$tfjs_model_path"
import json
# You can copy this into the JavaScript code in the next cell if you load a
# model trained on a custom dataset (code below assumes dataset="zoo").
print(json.dumps(labels))
with open('./tfjs/index.html', 'w') as f:
f.write('''
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<canvas width="256" height="256" id="canvas" style="border:1px solid black"></canvas><br />
<button id="clear">clear</button><br />
<pre id="output"></pre>
<script>
let upscaleFactor = 4, halfPenSize = 2
let canvas = document.getElementById('canvas')
let output = document.getElementById('output')
let ctx = canvas.getContext('2d')
let img_64 = new Uint8Array(64*64)
let dragging = false
let timeout
let predict = () => {
google.colab.kernel.invokeFunction('amld.predict', [Array.from(img_64)], {}).then(
obj => output.textContent = obj.data['application/json'].result)
}
const getPos = e => {
let x = e.offsetX, y = e.offsetY
if (e.touches) {
const rect = canvas.getBoundingClientRect()
x = e.touches[0].clientX - rect.left
y = e.touches[0].clientY - rect.left
}
return {
x: Math.floor((x - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
y: Math.floor((y - 2*halfPenSize*upscaleFactor/2)/upscaleFactor),
}
}
const handler = e => {
const { x, y } = getPos(e)
ctx.fillStyle = 'black'
ctx.fillRect(x*upscaleFactor, y*upscaleFactor,
2*halfPenSize*upscaleFactor, 2*halfPenSize*upscaleFactor)
for (let yy = y - halfPenSize; yy < y + halfPenSize; yy++)
for (let xx = x - halfPenSize; xx < x + halfPenSize; xx++)
img_64[64*Math.min(63, Math.max(0, yy)) + Math.min(63, Math.max(0, xx))] = 1
clearTimeout(timeout)
timeout = setTimeout(predict, 500)
}
canvas.addEventListener('touchstart', e => {dragging=true; handler(e)})
canvas.addEventListener('touchmove', e => {e.preventDefault(); dragging && handler(e)})
canvas.addEventListener('touchend', () => dragging=false)
canvas.addEventListener('mousedown', e => {dragging=true; handler(e)})
canvas.addEventListener('mousemove', e => {dragging && handler(e)})
canvas.addEventListener('mouseup', () => dragging=false)
canvas.addEventListener('mouseleave', () => dragging=false)
document.getElementById('clear').addEventListener('click', () => {
ctx.fillStyle = 'white'
ctx.fillRect(0, 0, 64*upscaleFactor, 64*upscaleFactor)
output.textContent = ''
img_64 = new Uint8Array(64*64)
})
</script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
<script>
const labels = %s
const modelPath = './model/model.json'
let model = null
tf.loadLayersModel(modelPath)
.then(response => model = response)
.catch(error => output.textContent = 'ERROR : ' + error.message)
predict = () => {
const preds = model.predict(tf.tensor(img_64).reshape([1, 64, -1]))
const { values, indices } = tf.topk(preds, 3)
Promise.all([values.data(), indices.data()]).then(data => {
const [ values, indices ] = data
output.textContent = ''
values.forEach((v, i) => output.textContent += `${labels[indices[i]]} : ${v.toFixed(3)}\n`)
})
}
</script>''' % json.dumps(labels))
# Download ngrok for tunneling.
!if [ ! -f ./ngrok ]; then \
wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip; \
unzip -o ngrok-stable-linux-amd64.zip; \
fi
# Then start a mini web server at a random port.
import random
port = random.randint(1000, 2**16)
!pkill ngrok
!kill $(ps x | grep -v grep | grep http.server | awk '{print $1}') 2>/dev/null
get_ipython().system_raw(
'cd ./tfjs && python3 -m http.server {} &'
.format(port)
)
# And, forward the port using ngrok.
get_ipython().system_raw('./ngrok http {} &'.format(port))
# Get the public address from localhost:4040 (ngrok's web interface).
import time, urllib
time.sleep(1) # Give ngrok time to startup.
ngrok_data = json.load(
urllib.request.urlopen('http://localhost:4040/api/tunnels'))
ngrok_data['tunnels'][0]['public_url']
# You can connect to this external address using your mobile phone!
# Once the page is loaded you can turn on flight modus and verify that
# predictions are really generated on-device. :-)
!pip install -q qrcode
import qrcode
qrcode.make(ngrok_data['tunnels'][0]['public_url'])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SHO
Step2: The bound states (below the cutoff) are clearly linear in energy (as expected), then above that we see the ∞-well solutions.
Step3: Periodic wells
Step4: Bandgaps!
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.linalg import eigh, inv
import matplotlib.pyplot as plt
%matplotlib inline
N = 1000
x, dx = np.linspace(-1,1,N,retstep=True)
#dx = dx*0.1
# Finite square well
V_0 = np.zeros(N)
V_0[:] = 450
V_0[int(N/2 - N/6):int(N/2+N/6)] = 0
plt.plot(x,V_0)
plt.ylim(V_0.min() - 0.1*V_0.max(), V_0.max()*1.1)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m)*inv(B)*A + V
energy, evecs = eigh(H,eigvals=(0,20))
E0 = energy[0] # ground state energy
states = [evecs[:,i] for i in range(20)]
plt.plot(energy,".")
plt.fill_between(range(21),E0,E0+V_0.max(), color='c', alpha=0.25) # Shade the bound states
for i,state in enumerate(states[0:17]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*2000 + energy[i])
plt.title("Finite square well")
#plt.fill_between(x,0,V,color='k',alpha=0.1) # shade in the potential well
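# Rough bound-state count (sketch): eigenvalues below the well depth V_0.max()
# correspond to the shaded bound levels above.
print("bound states:", int(np.sum(energy < V_0.max())))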
# Harmonic oscillator potential
V_0 = 250*x**2
plt.plot(x,V_0)
plt.ylim(-50,400)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m)*inv(B)*A + V
energy, evecs = eigh(H,eigvals=(0,30))
E0 = energy[0]
states = [evecs[:,i] for i in range(30)]
plt.plot(energy,".")
plt.fill_between(range(31),E0,E0+V_0.max(), color='c', alpha=0.25)
for i,state in enumerate(states[0:8]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*1000 + energy[i])
plt.title("Harmonic oscillator")
plt.ylim(E0,E0+100)
plt.fill_between(x,E0,E0+V_0,color='k',alpha=0.1)
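# The harmonic-oscillator levels should be roughly equally spaced (sketch):
print(np.diff(energy[:8]))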
N = 1000
x, dx = np.linspace(-1,1,N,retstep=True)
V_0 = np.zeros(N)
# periodic wells
V_0[:] = 1000
L = N/12 # width
S = N/10 # spacing (period) between wells
a = N/4  # offset of the first well
for i in range(5):
V_0[int(i*S+a):int(i*S+a+L)] = 0
plt.plot(x,V_0)
plt.ylim(-50,3050)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m)*inv(B)*A + V
energy, evecs = eigh(H,eigvals=(0,30))
E0 = energy[0]
states = [evecs[:,i] for i in range(30)]
plt.plot(energy,".")
plt.fill_between(range(31),E0,E0+V_0.max(), color='c', alpha=0.25)
plt.figure(figsize=(16,6))
for i,state in enumerate(states[0:15]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*3000 + energy[i])
plt.fill_between(x,E0,E0+V_0,color='k',alpha=0.1)
#plt.plot(E0+V_0) TODO
plt.title("Bandgaps in periodic structure")
for i,state in enumerate(states[0:5]):
plt.subplot(5,1,i+1)
plt.plot(x, state**2)
for i,state in enumerate(states[20:25]):
plt.subplot(5,1,i+1)
plt.plot(x, state**2)
plt.figure(figsize=(10,3))
plt.plot(x,states[24]**2)
plt.plot(x,states[20]**2)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Clearly, the OM10 catalog is extended in MAGI/z space well beyond the CFHT reference.
| <ASSISTANT_TASK:>
Python Code:
plt.scatter(db.lenses['ZLENS'],db.lenses['APMAG_I'],color='Orange',marker='.',label='OM10')
plt.scatter(data[:,2],data[:,6],color='Blue',marker='.',label='CFHTLS')
plt.scatter(matched['ZLENS'],matched['APMAG_I'],color='Lime',marker='.',label='Matched OM10',alpha=.05)
plt.title('CFHTLS vs. OM10 Catalogs')
plt.xlabel('Deflector redshift z')
plt.ylabel('Deflector i-band magnitude (AB)')
plt.legend(loc=4)
print(len(matched)/len(db.lenses))
db.select_random(maglim=21.4,area=100000.0,IQ=1.0)
ps1matched = db.sample[db.sample['DEC'] > -99]
plt.scatter(db.sample['ZLENS'],db.sample['APMAG_I'],color='Orange',marker='.',label='OM10--PS1')
plt.scatter(data[:,2],data[:,6],color='Blue',marker='.',label='CFHTLS')
plt.scatter(ps1matched['ZLENS'],ps1matched['APMAG_I'],color='Fuchsia',marker='.',label='Matched OM10--PS1',alpha=.5)
plt.title('CFHTLS vs. OM10 Catalogs')
plt.xlabel('Deflector redshift z')
plt.ylabel('Deflector i-band magnitude (AB)')
plt.legend(loc=4)
print(len(ps1matched)/len(db.sample))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Skewed split train test
Step2: The train/test split is far from satisfactory when there is an under-represented class.
Step3: A trick to avoid duplicates before performing a pivot.
Step4: The AdaBoost model builds 10 trees just like the random forest, except that the weight associated with each tree is different rather than uniform.
| <ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import numpy, numpy.random
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import confusion_matrix
N = 1000
res = []
for n in [1, 2, 5, 10, 20, 50, 80, 90, 100, 110]:
print("n=", n)
for k in range(10):
X = numpy.zeros((N, 2))
X[:, 0] = numpy.random.randint(0, 2, (N,))
X[:, 1] = numpy.random.randint(0, n+1, (N,))
Y = X[:, 0] + X[:, 1] + numpy.random.normal(size=(N,)) / 2
Y[Y < 1.5] = 0
Y[Y >= 1.5] = 1
X_train, X_test, y_train, y_test = train_test_split(X, Y)
stat = dict(N=N, n=n, ratio_train=y_train.sum()/y_train.shape[0],
k=k, ratio_test=y_test.sum()/y_test.shape[0])
for model in [LogisticRegression(solver="liblinear"),
MLPClassifier(max_iter=500),
RandomForestClassifier(n_estimators=10),
AdaBoostClassifier(DecisionTreeClassifier(), n_estimators=10)]:
obs = stat.copy()
obs["model"] = model.__class__.__name__
if obs["model"] == "AdaBoostClassifier":
obs["model"] = "AdaB-" + model.base_estimator.__class__.__name__
try:
model.fit(X_train, y_train)
except ValueError as e:
obs["erreur"] = str(e)
res.append(obs)
continue
sc = model.score(X_test, y_test)
obs["accuracy"] = sc
conf = confusion_matrix(y_test, model.predict(X_test))
try:
obs["Error-0|1"] = conf[0, 1] / conf[0, :].sum()
obs["Error-1|0"] = conf[1, 0] / conf[1, :].sum()
except Exception:
pass
res.append(obs)
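# Illustration of Step 4 (sketch, separate from the experiment above): AdaBoost keeps a
# per-tree weight for its weighted vote, unlike the random forest's uniform vote.
# Depth-1 stumps are used here (an assumption for the demo) so the weights differ visibly;
# X_train / y_train still hold the data from the last loop iteration.
ada_demo = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=10)
ada_demo.fit(X_train, y_train)
print(ada_demo.estimator_weights_)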
from pandas import DataFrame
df = DataFrame(res)
df = df.sort_values(['n', 'model', 'model', "k"]).reset_index(drop=True)
df["diff_ratio"] = (df["ratio_test"] - df["ratio_train"]).abs()
df.head(n=5)
df.tail(n=5)
df[df.n==100][["n", "ratio_test", "ratio_train"]].head(n=10)
#df.to_excel("data.xlsx")
columns = ["n", "N", "model"]
agg = df.groupby(columns, as_index=False).mean().sort_values(["n", "model"]).reset_index(drop=True)
agg.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(10,4))
agg.plot(x="n", y="diff_ratio", ax=ax[0])
agg.plot(x="n", y="ratio_train", ax=ax[1])
agg.plot(x="n", y="ratio_test", ax=ax[1])
ax[0].set_title("Maximum difference between\nratio of first class on train and test")
ax[1].set_title("Ratio of first class on train and test")
ax[0].legend();
agg2 = agg.copy()
agg2["ratio_test2"] = agg2["ratio_test"] + agg2["n"] / 100000
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 3, figsize=(14,4))
agg2.pivot("ratio_test2", "model", "accuracy").plot(ax=ax[0])
agg2.pivot("ratio_test2", "model", "Error-0|1").plot(ax=ax[1])
agg2.pivot("ratio_test2", "model", "Error-1|0").plot(ax=ax[2])
ax[0].plot([0.5, 1.0], [0.5, 1.0], '--', label="constant")
ax[0].set_title("Accuracy")
ax[1].set_title("Error-0|1")
ax[2].set_title("Error-1|0")
ax[0].legend();
agg2.pivot("ratio_test2", "model", "Error-0|1")
from sklearn.datasets import load_diabetes
boston = load_diabetes()
X, y = boston.data, boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.ensemble import RandomForestRegressor
model = None
res = []
for i in range(0, 20):
if model is None:
model = RandomForestRegressor(n_estimators=1, warm_start=True)
else:
model.set_params(**dict(n_estimators=model.n_estimators+1))
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
res.append(dict(n_estimators=model.n_estimators, score=score))
df = DataFrame(res)
df.head()
ax = df.plot(x="n_estimators", y="score")
ax.set_title("Apprentissage continu\nmesure de la performance à chaque itération");
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Cross correlation
Step3: Edge detection
Step4: Now we apply a vertical edge detector. It fires on the 1-0 and 0-1 boundaries.
Step5: It fails to detect horizontal edges.
Step6: Convolution as matrix multiplication
Step7: Optimizing the kernel parameters
Step9: Multiple input channels
Step10: Multiple output channels
Step11: 1x1 convolution
Step12: Pooling
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(seed=1)
import math
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
from torch import nn
from torch.nn import functional as F
!mkdir figures # for saving plots
import warnings
warnings.filterwarnings("ignore")
# For reproducibility on different runs
torch.backends.cudnn.deterministic = True
torch.manual_seed(hash("by removing stochasticity") % 2**32 - 1)
torch.cuda.manual_seed_all(hash("so runs are repeatable") % 2**32 - 1)
# Cross correlation
def corr2d(X, K):
    """Compute 2D cross-correlation."""
h, w = K.shape
Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
Y[i, j] = (X[i : i + h, j : j + w] * K).sum()
return Y
X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
print(corr2d(X, K))
X = torch.ones((6, 8))
X[:, 2:6] = 0
X
K = torch.tensor([[1.0, -1.0]])
Y = corr2d(X, K)
print(Y)
corr2d(X.t(), K)
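# Follow-up sketch: transposing the kernel as well lets it fire on the horizontal
# edges of the transposed image.
corr2d(X.t(), K.t())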
# K = torch.tensor([[0, 1], [2, 3]])
K = torch.tensor([[1, 2], [3, 4]])
print(K)
def kernel2matrix(K):
k, W = torch.zeros(5), torch.zeros((4, 9))
k[:2], k[3:5] = K[0, :], K[1, :]
W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k
return W
W = kernel2matrix(K)
print(W)
X = torch.arange(9.0).reshape(3, 3)
Y = corr2d(X, K)
print(Y)
Y2 = torch.mv(W, X.reshape(-1)).reshape(2, 2)
assert np.allclose(Y, Y2)
# Construct a two-dimensional convolutional layer with 1 output channel and a
# kernel of shape (1, 2). For the sake of simplicity, we ignore the bias here
conv2d = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)
# The two-dimensional convolutional layer uses four-dimensional input and
# output in the format of (example channel, height, width), where the batch
# size (number of examples in the batch) and the number of channels are both 1
# Defining X and Y again.
X = torch.ones((6, 8))
X[:, 2:6] = 0
K = torch.tensor([[1.0, -1.0]])
Y = corr2d(X, K)
X = X.reshape((1, 1, 6, 8))
Y = Y.reshape((1, 1, 6, 7))
for i in range(10):
Y_hat = conv2d(X)
l = (Y_hat - Y) ** 2
conv2d.zero_grad()
l.sum().backward()
# Update the kernel
conv2d.weight.data[:] -= 3e-2 * conv2d.weight.grad
if (i + 1) % 2 == 0:
print(f"batch {i + 1}, loss {l.sum():.3f}")
print(conv2d.weight.data.reshape((1, 2)))
def corr2d(X, K):
    """Compute 2D cross-correlation."""
h, w = K.shape
Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
Y[i, j] = torch.sum((X[i : i + h, j : j + w] * K))
return Y
def corr2d_multi_in(X, K):
# First, iterate through the 0th dimension (channel dimension) of `X` and
# `K`. Then, add them together
return sum(corr2d(x, k) for x, k in zip(X, K))
X = torch.tensor(
[[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]], [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]
)
K = torch.tensor([[[0.0, 1.0], [2.0, 3.0]], [[1.0, 2.0], [3.0, 4.0]]])
print(X.shape) # 2 channels, each 3x3
print(K.shape) # 2 sets of 2x2 filters
out = corr2d_multi_in(X, K)
print(out.shape)
print(out)
def corr2d_multi_in_out(X, K):
# Iterate through the 0th dimension of `K`, and each time, perform
# cross-correlation operations with input `X`. All of the results are
# stacked together
return torch.stack([corr2d_multi_in(X, k) for k in K], 0)
K = torch.stack((K, K + 1, K + 2), 0)
print(K.shape)
out = corr2d_multi_in_out(X, K)
print(out.shape)
# 1x1 conv is same as multiplying each feature column at each pixel
# by a fully connected matrix
def corr2d_multi_in_out_1x1(X, K):
c_i, h, w = X.shape
c_o = K.shape[0]
X = X.reshape((c_i, h * w))
K = K.reshape((c_o, c_i))
Y = torch.matmul(K, X) # Matrix multiplication in the fully-connected layer
return Y.reshape((c_o, h, w))
X = torch.normal(0, 1, (3, 3, 3)) # 3 channels per pixel
K = torch.normal(0, 1, (2, 3, 1, 1)) # map from 3 channels to 2
Y1 = corr2d_multi_in_out_1x1(X, K)
Y2 = corr2d_multi_in_out(X, K)
print(Y2.shape)
assert float(torch.abs(Y1 - Y2).sum()) < 1e-6
def pool2d(X, pool_size, mode="max"):
p_h, p_w = pool_size
Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
if mode == "max":
Y[i, j] = X[i : i + p_h, j : j + p_w].max()
elif mode == "avg":
Y[i, j] = X[i : i + p_h, j : j + p_w].mean()
return Y
# X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))
X = torch.arange(16, dtype=torch.float32).reshape((4, 4))
print(X)
print(X.shape)
print(pool2d(X, (3, 3), "max"))
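# The same helper also supports average pooling (sketch):
print(pool2d(X, (3, 3), "avg"))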
X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))
pool2d = nn.MaxPool2d(3, padding=0, stride=1)
print(pool2d(X))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing modules
Step2: Some Editing tricks
Step3: Very quick plotting (just for export really)
Step4: Try export > html, > pdf (requires pandoc or even a LaTeX installation).
Step5: Some magic and multiple outputs
Step6: Jupyter has a kind of meta-command starting with the percent character. Some of these are useful for displaying information, such as writing formulas with LaTeX.
Step7: Useful to know that we can also set environment variables (also useful for Theano)
Step8: %store to pass variables between notebooks!
Step9: No clue what kendalltau is?
| <ASSISTANT_TASK:>
Python Code:
!ls
!pip install --user pandas matplotlib sklearn seaborn
!pip install version_information
%load_ext version_information
%version_information pandas, sklearn
!pip install watermark
%load_ext watermark
%watermark -a "Gerrit Gruben" -d -t -v -p numpy,pandas -g
from somemodule import hello
hello()
hello()
del hello
%load_ext autoreload
%autoreload 2
%aimport somemodule
somemodule.hello()
somemodule.hello()
from IPython.display import FileLink
FileLink("Simple.ipynb")
import sklearn
from sklearn.datasets import load_boston
df = load_boston()
X, y = df.data, df.target
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
rf_reg = RandomForestRegressor(2)
rf_reg.fit(X_train, y_train)
print(mean_squared_error(y_test, rf_reg.predict(X_test)))
# Just need a df
from sklearn.datasets import california_housing
cal = california_housing.fetch_california_housing()
df = pd.DataFrame(data=cal.data, columns=cal.feature_names, index=cal.target)
df.head(10)
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(df.MedInc, df.index)
import seaborn as sns
sns.jointplot(df.MedInc, df.index)
%config InlineBackend.figure_format = "retina"
x, y = 5, 3
x
y
# Show all output values
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
x, y = 5, 3
x
y
%lsmagic
%whos
%%latex
$$ x^3 + C = \int{\frac{1}{3} x^2 \; dx} \quad (C \in \mathbb{R})$$
%%system
ls -laH
du -sh .
%env OMP_NUM_THREADS=8
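# %store (mentioned above) in a minimal sketch: persist a variable, then restore it --
# the same works from another notebook using the same IPython profile.
shared_value = 42
%store shared_value
%store -r shared_value
shared_value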
%%writefile some_code.py
import numpy as np
from scipy.stats import kendalltau
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="ticks")
rs = np.random.RandomState(11)
x = rs.gamma(2, size=1000)
y = -.5 * x + rs.normal(size=1000)
sns.jointplot(x, y, kind="hex", stat_func=kendalltau, color="#4CB391")
plt.show()
kendalltau?
%pycat some_code.py
%run some_code.py
%matplotlib inline
%run some_code.py
InteractiveShell.ast_node_interactivity = "last"
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get TFIDF scores for corpus without pivoted document length normalisation
Step2: Get TFIDF scores for corpus with pivoted document length normalisation testing on various values of alpha.
Step3: Visualizing the pivoted normalization
| <ASSISTANT_TASK:>
Python Code:
#
# Download our dataset
#
import gensim.downloader as api
nws = api.load("20-newsgroups")
#
# Pick texts from relevant newsgroups, split into training and test set.
#
cat1, cat2 = ('sci.electronics', 'sci.space')
#
# X_* contain the actual texts as strings.
# Y_* contain labels, 0 for cat1 (sci.electronics) and 1 for cat2 (sci.space)
#
X_train = []
X_test = []
y_train = []
y_test = []
for i in nws:
if i["set"] == "train" and i["topic"] == cat1:
X_train.append(i["data"])
y_train.append(0)
elif i["set"] == "train" and i["topic"] == cat2:
X_train.append(i["data"])
y_train.append(1)
elif i["set"] == "test" and i["topic"] == cat1:
X_test.append(i["data"])
y_test.append(0)
elif i["set"] == "test" and i["topic"] == cat2:
X_test.append(i["data"])
y_test.append(1)
from gensim.parsing.preprocessing import preprocess_string
from gensim.corpora import Dictionary
id2word = Dictionary([preprocess_string(doc) for doc in X_train])
train_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_train]
test_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_test]
print(len(X_train), len(X_test))
# We perform our analysis on the top k documents, which is roughly the top 10% highest-scored documents
k = len(X_test) // 10
from gensim.sklearn_api.tfidf import TfIdfTransformer
from sklearn.linear_model import LogisticRegression
from gensim.matutils import corpus2csc
# This function returns the model accuracy and individual document prob values using
# gensim's TfIdfTransformer and sklearn's LogisticRegression
def get_tfidf_scores(kwargs):
tfidf_transformer = TfIdfTransformer(**kwargs).fit(train_corpus)
X_train_tfidf = corpus2csc(tfidf_transformer.transform(train_corpus), num_terms=len(id2word)).T
X_test_tfidf = corpus2csc(tfidf_transformer.transform(test_corpus), num_terms=len(id2word)).T
clf = LogisticRegression().fit(X_train_tfidf, y_train)
model_accuracy = clf.score(X_test_tfidf, y_test)
doc_scores = clf.decision_function(X_test_tfidf)
return model_accuracy, doc_scores
params = {}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
import numpy as np
# Sort the document scores by their scores and return a sorted list
# of document score and corresponding document lengths.
def sort_length_by_score(doc_scores, X_test):
doc_scores = sorted(enumerate(doc_scores), key=lambda x: x[1])
doc_leng = np.empty(len(doc_scores))
ds = np.empty(len(doc_scores))
for i, _ in enumerate(doc_scores):
doc_leng[i] = len(X_test[_[0]])
ds[i] = _[1]
return ds, doc_leng
print(
"Normal cosine normalisation favors short documents as our top {} "
"docs have a smaller mean doc length of {:.3f} compared to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
best_model_accuracy = 0
optimum_slope = 0
for slope in np.arange(0, 1.1, 0.1):
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
print("Score for slope {} is {}".format(slope, model_accuracy))
print("We get best score of {} at slope {}".format(best_model_accuracy, optimum_slope))
params = {"pivot": 10, "slope": optimum_slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
print(
"With pivoted normalisation top {} docs have mean length of {:.3f} "
"which is much closer to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
%matplotlib inline
import matplotlib.pyplot as py
best_model_accuracy = 0
optimum_slope = 0
w = 2
h = 2
f, axarr = py.subplots(h, w, figsize=(15, 7))
it = 0
for slope in [1, 0.2]:
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
doc_scores, doc_leng = sort_length_by_score(doc_scores, X_test)
y = abs(doc_scores[:k, np.newaxis])
x = doc_leng[:k, np.newaxis]
py.subplot(1, 2, it+1).bar(x, y, width=20, linewidth=0)
py.title("slope = " + str(slope) + " Model accuracy = " + str(model_accuracy))
py.ylim([0, 4.5])
py.xlim([0, 3200])
py.xlabel("document length")
py.ylabel("confidence score")
it += 1
py.tight_layout()
py.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To fix this problem, you can modify the pattern string to add support for newlines. For example:
Step2: In this pattern, (?
| <ASSISTANT_TASK:>
Python Code:
import re
comment = re.compile(r"/\*(.*?)\*/")
text1 = '/* this is a comment */'
text2 = '''/* this is a
multiline comment */
'''
comment.findall(text1)
comment.findall(text2)
comment = re.compile(r'/\*((?:.|\n)*?)\*/')
comment.findall(text2)
comment = re.compile(r'/\*(.*?)\*/', re.DOTALL)
comment.findall(text2)
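# The DOTALL-based pattern still handles ordinary single-line comments (sketch):
comment.findall(text1)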
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data import
Step2: Convenience function for reading the data in
Step3: Getting a list of all user_ids in the sample.
Step4: Pick a subset of users for testing and validation
Step5: Create new test and training sets, using bookings from 2013 as training data and 2014 as test data.
Step6: Remove click events from cv_test as in original test data.
Step7: Model 0
Step8: Predicting the most common clusters in groups of srch_destination_id, hotel_country, hotel_market.
Step9: This dictionary has a key of srch_destination_id, hotel_country, hotel_market and each value is another dictionary, with hotel clusters as keys and scores as values.
Step10: Making predictions based on destination
Step11: srch_destination_id, is_booking
Step12: Make dummy variables from categorical features. Pandas has get_dummies(), but currently this returns only float64s, which tends to be rather memory hungry and slow. See #8725.
Step13: I would say that's not bad at all (compared to the random forest classifier in the Dataquest tutorial).
Step14: => the destination latent features don't seem to be of much use?!
Step15: The best single predictor of a hotel cluster seems to be hotel_market.
| <ASSISTANT_TASK:>
Python Code:
import collections
import itertools
import operator
import random
import heapq
import matplotlib.pyplot as plt
import ml_metrics as metrics
import numpy as np
import pandas as pd
import sklearn
import sklearn.decomposition
import sklearn.linear_model
import sklearn.preprocessing
%matplotlib notebook
traincols = ['date_time', 'site_name', 'posa_continent', 'user_location_country',
'user_location_region', 'user_location_city', 'orig_destination_distance',
'user_id', 'is_mobile', 'is_package', 'channel', 'srch_ci', 'srch_co',
'srch_adults_cnt', 'srch_children_cnt', 'srch_rm_cnt', 'srch_destination_id',
'srch_destination_type_id', 'is_booking', 'cnt', 'hotel_continent',
'hotel_country', 'hotel_market', 'hotel_cluster']
testcols = ['id', 'date_time', 'site_name', 'posa_continent', 'user_location_country',
'user_location_region', 'user_location_city', 'orig_destination_distance',
'user_id', 'is_mobile', 'is_package', 'channel', 'srch_ci', 'srch_co',
'srch_adults_cnt', 'srch_children_cnt', 'srch_rm_cnt', 'srch_destination_id',
'srch_destination_type_id', 'hotel_continent', 'hotel_country', 'hotel_market']
def read_csv(filename, cols, nrows=None):
datecols = ['date_time', 'srch_ci', 'srch_co']
dateparser = lambda x: pd.to_datetime(x, format='%Y-%m-%d %H:%M:%S', errors='coerce')
dtypes = {
'id': np.uint32,
'site_name': np.uint8,
'posa_continent': np.uint8,
'user_location_country': np.uint16,
'user_location_region': np.uint16,
'user_location_city': np.uint16,
'orig_destination_distance': np.float32,
'user_id': np.uint32,
'is_mobile': bool,
'is_package': bool,
'channel': np.uint8,
'srch_adults_cnt': np.uint8,
'srch_children_cnt': np.uint8,
'srch_rm_cnt': np.uint8,
'srch_destination_id': np.uint32,
'srch_destination_type_id': np.uint8,
'is_booking': bool,
'cnt': np.uint64,
'hotel_continent': np.uint8,
'hotel_country': np.uint16,
'hotel_market': np.uint16,
'hotel_cluster': np.uint8,
}
df = pd.read_csv(
filename,
nrows=nrows,
usecols=cols,
dtype=dtypes,
parse_dates=[col for col in datecols if col in cols],
date_parser=dateparser,
)
if 'date_time' in df.columns:
df['month'] = df['date_time'].dt.month.astype(np.uint8)
df['year'] = df['date_time'].dt.year.astype(np.uint16)
if 'srch_ci' and 'srch_co' in df.columns:
df['srch_ngt'] = (df['srch_co'] - df['srch_ci']).astype('timedelta64[h]')
if 'srch_children_cnt' in df.columns:
df['is_family'] = np.array(df['srch_children_cnt'] > 0)
return df
train = read_csv('data/train.csv.gz', nrows=None, cols=traincols)
train_ids = set(train.user_id.unique())
len(train_ids)
sel_user_ids = sorted(random.sample(train_ids, 12000))
sel_train = train[train.user_id.isin(sel_user_ids)]
cv_train = sel_train[sel_train.year == 2013]
cv_test = sel_train[sel_train.year == 2014]
cv_test = cv_test[cv_test.is_booking == True]
most_common_clusters = list(cv_train.hotel_cluster.value_counts().head().index)
match_cols = ['srch_destination_id']
match_cols = ['srch_destination_id', 'hotel_country', 'hotel_market']
groups = cv_train.groupby(match_cols + ['hotel_cluster'])
top_clusters = {}
for name, group in groups:
bookings = group['is_booking'].sum()
clicks = len(group) - bookings
score = bookings + .15*clicks
clus_name = name[:len(match_cols)]
if clus_name not in top_clusters:
top_clusters[clus_name] = {}
top_clusters[clus_name][name[-1]] = score
cluster_dict = {}
for n in top_clusters:
tc = top_clusters[n]
top = [l[0] for l in sorted(tc.items(), key=operator.itemgetter(1), reverse=True)[:5]]
cluster_dict[n] = top
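# Peek at one entry of the structure described above (sketch): the key is the
# (srch_destination_id, hotel_country, hotel_market) tuple, the value the top clusters.
example_key = next(iter(cluster_dict))
print(example_key, cluster_dict[example_key])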
preds = []
for index, row in cv_test.iterrows():
key = tuple([row[m] for m in match_cols])
pred = cluster_dict.get(key, most_common_clusters)
preds.append(pred)
cv_target = [[l] for l in cv_test['hotel_cluster']]
metrics.mapk(cv_target, preds, k=5)
clf = sklearn.linear_model.SGDClassifier(loss='log', n_jobs=4)
cv_train_data = pd.DataFrame()
for elem in cv_train['srch_destination_id'].unique():
cv_train_data[str(elem)] = cv_train['srch_destination_id'] == elem
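# Quick check of the get_dummies remark in the description (sketch): depending on the
# pandas version, the dummy columns may come back as float64 (see pandas GH #8725).
print(pd.get_dummies(cv_train['srch_destination_id'].head()).dtypes.unique())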
cv_test_data = pd.DataFrame()
for elem in cv_train_data.columns:
cv_test_data[elem] = cv_test['srch_destination_id'] == int(elem)
# cv_train_data['is_booking'] = cv_train['is_booking']
# cv_test_data['is_booking'] = np.ones(len(cv_test_data), dtype=bool)
clf.fit(cv_train_data, cv_train['hotel_cluster'])
result = clf.predict_proba(cv_test_data)
preds = [heapq.nlargest(5, clf.classes_, row.take) for row in result]
metrics.mapk(cv_target, preds, k=5)
dest = pd.read_csv(
'data/destinations.csv.gz',
index_col = 'srch_destination_id',
)
pca = sklearn.decomposition.PCA(n_components=10)
dest_small = pca.fit_transform(dest[['d{}'.format(i) for i in range(1,150)]])
dest_small = pd.DataFrame(dest_small, index=dest.index)
cv_train_data = pd.DataFrame({key: cv_train[key] for key in ['srch_destination_id']})
cv_train_data = cv_train_data.join(dest_small, on=['srch_destination_id'], how='left')
cv_train_data = cv_train_data.fillna(dest_small.mean())
cv_test_data = pd.DataFrame({key: cv_test[key] for key in ['srch_destination_id']})
cv_test_data = cv_test_data.join(dest_small, on='srch_destination_id', how='left', rsuffix='dest')
cv_test_data = cv_test_data.fillna(dest_small.mean())
clf = sklearn.linear_model.SGDClassifier(loss='log', n_jobs=4)
clf.fit(cv_train_data, cv_train['hotel_cluster'])
result = clf.predict_proba(cv_test_data)
preds = [heapq.nlargest(5, clf.classes_, row.take) for row in result]
metrics.mapk(cv_target, preds, k=5)
features = [
'site_name', 'posa_continent', 'user_location_country',
'user_location_region', 'user_location_city',
'is_mobile', 'is_package',
'channel', 'srch_adults_cnt', 'srch_destination_id',
'srch_destination_type_id', 'is_booking', 'cnt',
'hotel_continent', 'hotel_country', 'hotel_market',
'month', 'year', 'is_family',
]
def fit_features(features, train, test):
# Data manipulation - split categorical features
train_data = pd.DataFrame()
test_data = pd.DataFrame()
for feature in features:
if train[feature].dtype == np.dtype('bool'):
train_data[feature] = train[feature]
test_data[feature] = test[feature]
else:
for elem in train[feature].unique():
train_data['{}_{}'.format(feature, elem)] = train[feature] == elem
test_data['{}_{}'.format(feature, elem)] = test[feature] == elem
# Fitting
clf = sklearn.linear_model.SGDClassifier(loss='log', n_jobs=4)
clf.fit(train_data, train['hotel_cluster'])
# Cross-validate the fit
result = clf.predict_proba(test_data)
preds = [heapq.nlargest(5, clf.classes_, row.take) for row in result]
target = [[l] for l in test['hotel_cluster']]
return metrics.mapk(target, preds, k=5)
cv_results = {}
for feature in features:
cv_results[feature] = fit_features([feature], cv_train, cv_test)
print('{}: {}'.format(feature, cv_results[feature]))
sorted(cv_results.items(), key=operator.itemgetter(1), reverse=True)
features2 = [['hotel_market'] + [f] for f in features if f not in ['hotel_market']]
cv_results2 = {}
for feature in features2:
cv_results2[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results2[tuple(feature)]))
sorted(cv_results2.items(), key=operator.itemgetter(1), reverse=True)[:3]
features3 = [['hotel_market', 'srch_destination_id'] + [f] for f in features if f not in ['hotel_market', 'srch_destination_id']]
cv_results3 = {}
for feature in features3:
cv_results3[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results3[tuple(feature)]))
sorted(cv_results3.items(), key=operator.itemgetter(1), reverse=True)[:3]
features4 = [['hotel_market', 'srch_destination_id', 'hotel_country'] + [f] for f in features if f not in ['hotel_market', 'srch_destination_id', 'hotel_country']]
cv_results4 = {}
for feature in features4:
cv_results4[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results4[tuple(feature)]))
sorted(cv_results4.items(), key=operator.itemgetter(1), reverse=True)[:3]
sel_features = ['hotel_market', 'srch_destination_id', 'hotel_country', 'is_package']
features5 = [sel_features + [f] for f in features if f not in sel_features]
cv_results5 = {}
for feature in features5:
cv_results5[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results5[tuple(feature)]))
sorted(cv_results5.items(), key=operator.itemgetter(1), reverse=True)[:3]
sel_features = ['hotel_market', 'srch_destination_id', 'hotel_country', 'is_package', 'is_booking']
features6 = [sel_features + [f] for f in features if f not in sel_features]
cv_results6 = {}
for feature in features6:
cv_results6[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results6[tuple(feature)]))
sorted(cv_results6.items(), key=operator.itemgetter(1), reverse=True)[:3]
sel_features = ['hotel_market', 'srch_destination_id', 'hotel_country', 'is_package', 'is_booking', 'posa_continent']
features7 = [sel_features + [f] for f in features if f not in sel_features]
cv_results7 = {}
for feature in features7:
cv_results7[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results7[tuple(feature)]))
sorted(cv_results7.items(), key=operator.itemgetter(1), reverse=True)[:3]
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Half Adder
Step4: myHDL Testing
Step5: Verilog Code
Step7: Verilog Testbench
Step9: Full Adder From Expression
Step11: myHDL Testing
Step12: Verilog Code
Step14: Verilog Testbench
Step16: Full Adder via HalfAdders
Step18: myHDL Testing
Step19: Verilog Code
Step21: Verilog Testbench
Step23: 4 Bit Ripple Carry
Step24: myHDL Testing
Step26: Verilog Testbench
Step28: N Bit Ripple Carry Adder
Step29: myHDL Testing
Step30: Verilog Code
Step32: myHDL Testbench (ToDo)
Step34: myHDL Testing
Step35: Verilog Code
Step37: Verilog Testbench
Step39: Stage Three
Step41: myHDL Testing
Step42: Verilog Code
Step44: Verilog Testbench
Step45: Stage Two
| <ASSISTANT_TASK:>
Python Code:
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import itertools
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random, itertools
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
BitVals=[0,1]
@block
def HalfAdder(x1, x2, s, c):
Half Adder
Input:
x1(bool): bit signal to be added
x2(bool): bit signal to be added
Output:
s(bool): Half Adder Sum
c(bool): Half Adder Carry
@always_comb
def logic():
s.next=x1 ^ x2
c.next=x1 & x2
return instances()
Peeker.clear()
x1=Signal(bool(0)); Peeker(x1, 'x1')
x2=Signal(bool(0)); Peeker(x2, 'x2')
s=Signal(bool(0)); Peeker(s, 's')
c=Signal(bool(0)); Peeker(c, 'c')
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals))]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals))]
DUT=HalfAdder(x1, x2, s, c)
def HalfAdder_TB():
myHDL only Testbench for module "HalfAdder"
@instance
def stimules():
for i, j in zip(x1TVals, x2TVals):
x1.next=i; x2.next=j
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, HalfAdder_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x1', 'x2', 's', 'c')
HalfAdderData=Peeker.to_dataframe()
HalfAdderData=HalfAdderData[['x1', 'x2', 's', 'c']]
HalfAdderData
HalfAdderData['sCheck']=HalfAdderData.apply(lambda row: row['x1']^row['x2'], axis=1)
HalfAdderData['cCheck']=HalfAdderData.apply(lambda row: row['x1']&row['x2'], axis=1)
HalfAdderData
SumCheck=(HalfAdderData['s']==HalfAdderData['sCheck']).all()
CarryCheck=(HalfAdderData['c']==HalfAdderData['cCheck']).all()
print(f'Sum Result Check: {SumCheck}; Carry Result Check: {CarryCheck}')
DUT.convert()
VerilogTextReader('HalfAdder');
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals))]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals))]
#create single value representation of Test Inputs
x1TVal=intbv(int(''.join([str(i) for i in x1TVals]), 2))[len(x1TVals):]
x2TVal=intbv(int(''.join([str(i) for i in x2TVals]), 2))[len(x2TVals):]
print(f'x1Test: {x1TVals}, {x1TVal}, {[int(i) for i in x1TVal]}')
print(f'x2Test: {x2TVals}, {x2TVal}, {[int(i) for i in x2TVal]}')
@block
def HalfAdder_TBV():
myHDL -> Verilog Testbench for module "HalfAdder"
x1=Signal(bool(0))
x2=Signal(bool(0))
s=Signal(bool(0))
c=Signal(bool(0))
#test stimuli
x1TVals=Signal(x1TVal)
x2TVals=Signal(x2TVal)
@always_comb
def print_data():
print(x1, x2, s, c)
DUT=HalfAdder(x1, x2, s, c)
@instance
def stimules():
for i in range(len(x1TVal)):
x1.next=x1TVals[i]; x2.next=x2TVals[i]
yield delay(1)
raise StopSimulation()
return instances()
TB=HalfAdder_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('HalfAdder_TBV');
@block
def FullAdder(x1, x2, cin, s, c):
    Full Adder 2 bit + carry-in input, from the Boolean expression
Input:
x1(bool): bit signal to be added
x2(bool): bit signal to be added
cin(bool): carry in bit signal
Output:
s(bool): Full Adder Sum
c(bool): Full Adder Carry
@always_comb
def logic():
s.next=x1^x2^cin
c.next=(x1&x2) | (x1&cin) | (x2&cin)
return instances()
Peeker.clear()
x1=Signal(bool(0)); Peeker(x1, 'x1')
x2=Signal(bool(0)); Peeker(x2, 'x2')
cin=Signal(bool(0)); Peeker(cin, 'cin')
s=Signal(bool(0)); Peeker(s, 's')
c=Signal(bool(0)); Peeker(c, 'c')
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
cinTVals=[i[2] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
DUT=FullAdder(x1, x2, cin, s, c)
def FullAdder_TB():
myHDL only Testbench for module "FullAdder"
@instance
def stimules():
for i, j, k in zip(x1TVals, x2TVals, cinTVals):
x1.next=i; x2.next=j; cin.next=k
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, FullAdder_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x1', 'x2', 'cin', 's', 'c')
FullAdderData=Peeker.to_dataframe()
FullAdderData=FullAdderData[['x1', 'x2', 'cin', 's', 'c']]
FullAdderData
FullAdderData['sCheck']=FullAdderData.apply(lambda row: row['x1']^row['x2']^row['cin'], axis=1)
FullAdderData['cCheck']=FullAdderData.apply(lambda row: (row['x1']&row['x2'])|(row['cin']*(row['x1']^row['x2'])), axis=1)
FullAdderData
SumCheck=(FullAdderData['s']==FullAdderData['sCheck']).all()
CarryCheck=(FullAdderData['c']==FullAdderData['cCheck']).all()
print(f'Sum Result Check: {SumCheck}; Carry Result Check: {CarryCheck}')
DUT.convert()
VerilogTextReader('FullAdder');
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
cinTVals=[i[2] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
#create single value representation of Test Inputs
x1TVal=intbv(int(''.join([str(i) for i in x1TVals]), 2))[len(x1TVals):]
x2TVal=intbv(int(''.join([str(i) for i in x2TVals]), 2))[len(x2TVals):]
cinTVal=intbv(int(''.join([str(i) for i in cinTVals]), 2))[len(cinTVals):]
print(f'x1Test: {x1TVals}, {x1TVal}, {[int(i) for i in x1TVal]}')
print(f'x2Test: {x2TVals}, {x2TVal}, {[int(i) for i in x2TVal]}')
print(f'cinTest: {cinTVals}, {cinTVal}, {[int(i) for i in cinTVal]}')
@block
def FullAdder_TBV():
myHDL -> Verilog Testbench for module "FullAdder"
x1=Signal(bool(0))
x2=Signal(bool(0))
cin=Signal(bool(0))
s=Signal(bool(0))
c=Signal(bool(0))
#test stimuli
x1TVals=Signal(x1TVal)
x2TVals=Signal(x2TVal)
cinTVals=Signal(cinTVal)
@always_comb
def print_data():
print(x1, x2, cin, s, c)
DUT=FullAdder(x1, x2, cin, s, c)
@instance
def stimules():
for i in range(len(x1TVal)):
x1.next=x1TVals[i]; x2.next=x2TVals[i]; cin.next=cinTVals[i]
yield delay(1)
raise StopSimulation()
return instances()
TB=FullAdder_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('FullAdder_TBV');
@block
def FullAdderViaHAs(x1, x2, cin, s, c):
Full Adder 2bit+1 input from Half Adders
Input:
x1(bool): bit signal to be added
x2(bool): bit signal to be added
cin(bool): carry in bit signal
Output:
s(bool): Full Adder Sum
c(bool): Full Adder Carry
    #create output wires from first HA (HA1) to next HA and
#`CarryLogic`
s_HA1HA2=Signal(bool(0)); c_HA1CL=Signal(bool(0))
#create first HA and wire it
HA1_x1x2=HalfAdder(x1=x1, x2=x2, s=s_HA1HA2, c=c_HA1CL)
    #create output wire for second HA (HA2) to `CarryLogic`
c_HA2CL=Signal(bool(0))
HA2_HA1cin=HalfAdder(x1=cin, x2=s_HA1HA2, s=s, c=c_HA2CL)
@always_comb
def CarryLogic():
c.next= c_HA1CL|c_HA2CL
return instances()
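# Quick derivation of why this works: HA1 gives s1 = x1 ^ x2 and c1 = x1 & x2,
# HA2 gives s = s1 ^ cin and c2 = s1 & cin, so
# c = c1 | c2 = (x1 & x2) | ((x1 ^ x2) & cin), which matches the expression-based
# FullAdder defined earlier.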
Peeker.clear()
x1=Signal(bool(0)); Peeker(x1, 'x1')
x2=Signal(bool(0)); Peeker(x2, 'x2')
cin=Signal(bool(0)); Peeker(cin, 'cin')
s=Signal(bool(0)); Peeker(s, 's')
c=Signal(bool(0)); Peeker(c, 'c')
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
cinTVals=[i[2] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
DUT=FullAdderViaHAs(x1, x2, cin, s, c)
def FullAdderViaHAs_TB():
myHDL only Testbench for module "FullAdderViaHAs"
@instance
def stimules():
for i, j, k in zip(x1TVals, x2TVals, cinTVals):
x1.next=i; x2.next=j; cin.next=k
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, FullAdder_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x1', 'x2', 'cin', 's', 'c')
FullAdderHAData=Peeker.to_dataframe()
FullAdderHAData=FullAdderHAData[['x1', 'x2', 'cin', 's', 'c']]
FullAdderHAData
FullAdderHAData['sCheck']=FullAdderHAData.apply(lambda row: row['x1']^row['x2']^row['cin'], axis=1)
FullAdderHAData['cCheck']=FullAdderHAData.apply(lambda row: (row['x1']&row['x2'])|(row['cin']*(row['x1']^row['x2'])), axis=1)
FullAdderHAData
SumCheck=(FullAdderHAData['s']==FullAdderHAData['sCheck']).all()
CarryCheck=(FullAdderHAData['c']==FullAdderHAData['cCheck']).all()
print(f'Sum Result Check: {SumCheck}; Carry Result Check: {CarryCheck}')
(FullAdderData==FullAdderHAData).all()
DUT.convert()
VerilogTextReader('FullAdderViaHAs');
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
cinTVals=[i[2] for i in itertools.product(BitVals, repeat=len(BitVals)+1)]
#create single value representation of Test Inputs
x1TVal=intbv(int(''.join([str(i) for i in x1TVals]), 2))[len(x1TVals):]
x2TVal=intbv(int(''.join([str(i) for i in x2TVals]), 2))[len(x2TVals):]
cinTVal=intbv(int(''.join([str(i) for i in cinTVals]), 2))[len(cinTVals):]
print(f'x1Test: {x1TVals}, {x1TVal}, {[int(i) for i in x1TVal]}')
print(f'x2Test: {x2TVals}, {x2TVal}, {[int(i) for i in x2TVal]}')
print(f'cinTest: {cinTVals}, {cinTVal}, {[int(i) for i in cinTVal]}')
@block
def FullAdderViaHAs_TBV():
myHDL -> Verilog Testbench for module "FullAdderViaHAs"
x1=Signal(bool(0))
x2=Signal(bool(0))
cin=Signal(bool(0))
s=Signal(bool(0))
c=Signal(bool(0))
#test stimuli
x1TVals=Signal(x1TVal)
x2TVals=Signal(x2TVal)
cinTVals=Signal(cinTVal)
@always_comb
def print_data():
print(x1, x2, cin, s, c)
DUT=FullAdderViaHAs(x1, x2, cin, s, c)
@instance
def stimules():
for i in range(len(x1TVal)):
x1.next=x1TVals[i]; x2.next=x2TVals[i]; cin.next=cinTVals[i]
yield delay(1)
raise StopSimulation()
return instances()
TB=FullAdderViaHAs_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('FullAdderViaHAs_TBV');
@block
def RippleCarry4Bit(X1, X2, cin, S, c):
    4 Bit Ripple Carry Adder
Input:
Output:
S_i = [Signal(bool(0)) for _ in range(len(S))] # Sum bit for each stage.
    #create interconnect wire between FA0 and FA1, initialize FA0, and wire it up
c_FA0FA1=Signal(bool(0))
FA0=FullAdder(x1=X1(0), x2=X2(0), cin=cin, s=S_i[0], c=c_FA0FA1)
c_FA1FA2=Signal(bool(0))
FA1=FullAdder(x1=X1(1), x2=X2(1), cin=c_FA0FA1, s=S_i[1], c=c_FA1FA2)
c_FA2FA3=Signal(bool(0))
FA2=FullAdder(x1=X1(2), x2=X2(2), cin=c_FA1FA2, s=S_i[2], c=c_FA2FA3)
FA3=FullAdder(x1=X1(3), x2=X2(3), cin=c_FA2FA3, s=S_i[3], c=c)
#concat bus wires to single bitvec wire
@always_comb
def ConCatSum():
S.next=concat(S_i[3], S_i[2], S_i[1], S_i[0])
return instances()
BitSize=4
np.random.seed(12)
X1TVals=np.random.randint(0, 2**BitSize, 2**BitSize)
X2TVals=np.random.randint(0, 2**BitSize, 2**BitSize)
cinTVals=np.random.randint(0,2,2**BitSize)
print(X1TVals)
print(X2TVals)
print(cinTVals)
Peeker.clear()
X1=Signal(intbv(0)[BitSize:]); Peeker(X1, 'X1')
X2=Signal(intbv(0)[BitSize:]); Peeker(X2, 'X2')
cin=Signal(bool(0)); Peeker(cin, 'cin')
S=Signal(intbv(0)[BitSize:]); Peeker(S, 'S')
c=Signal(bool(0)); Peeker(c, 'c')
DUT = RippleCarry4Bit(X1=X1, X2=X2, cin=cin, S=S, c=c)
def RippleCarry4Bit_TB():
@instance
def stimules():
for i in range(len(X1TVals)):
X1.next=int(X1TVals[i])
X2.next=int(X2TVals[i])
cin.next=int(cinTVals[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, RippleCarry4Bit_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('cin', 'X1', 'X2', 'S', 'c')
RC4BData=Peeker.to_dataframe()
RC4BData=RC4BData[['cin', 'X1', 'X2', 'S', 'c']]
RC4BData
def RC4B_SCheck(cin, X1, X2):
S=cin+X1+X2
if S<16:
return S
else:
return S-16
def RC4B_cCheck(cin, X1, X2):
S=cin+X1+X2
if S<16:
return 0
else:
return 1
RC4BData['SCheck']=RC4BData.apply(lambda row: RC4B_SCheck(row['cin'], row['X1'], row['X2']), axis=1)
RC4BData['cCheck']=RC4BData.apply(lambda row: RC4B_cCheck(row['cin'], row['X1'], row['X2']), axis=1)
RC4BData
SumCheck=(RC4BData['S']==RC4BData['SCheck']).all()
CarryCheck=(RC4BData['c']==RC4BData['cCheck']).all()
print(f'Sum Result Check: {SumCheck}; Carry Result Check: {CarryCheck}')
DUT.convert()
VerilogTextReader('RippleCarry4Bit');
X1TVal=[intbv(int(i))[BitSize:] for i in X1TVals]
X2TVal=[intbv(int(i))[BitSize:] for i in X2TVals]
cinTVal=intbv(int(''.join([str(i) for i in cinTVals]), 2))[len(cinTVals):]
print(X1TVal)
print(X2TVal)
print(cinTVal)
@block
def RippleCarry4Bit_TBV():
myHDL -> Verilog Testbench for module "FullAdderViaHAs"
X1=Signal(intbv(0)[BitSize:])
X2=Signal(intbv(0)[BitSize:])
cin=Signal(bool(0))
S=Signal(intbv(0)[BitSize:])
c=Signal(bool(0))
#test stimuli
X1TVals=[Signal(i) for i in X1TVal]
X2TVals=[Signal(i) for i in X2TVal]
cinTVals=Signal(cinTVal)
@always_comb
def print_data():
print(X1, X2, cin, S, c)
DUT=RippleCarry4Bit(X1=X1, X2=X2, cin=cin, S=S, c=c)
@instance
def stimules():
for i in range(len(X1TVals)):
X1.next=X1TVals[i]
X2.next=X2TVals[i]
cin.next=cinTVals[i]
yield delay(1)
raise StopSimulation()
return instances()
TB=RippleCarry4Bit_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('RippleCarry4Bit_TBV');
@block
def RippleCarryNBit(X1, X2, cin, S, c):
    N Bit Ripple Carry Adder
Input:
Output:
S_i = [Signal(bool(0)) for _ in range(len(S))] # Sum bit for each stage.
c_FAiFAj=[Signal(bool(0)) for _ in range(len(S)-1)]
FAStages=[]
for i in range(len(S)):
if i==0:
FAStages.append(FullAdder(x1=X1(i), x2=X2(i), cin=cin, s=S_i[i], c=c_FAiFAj[i]))
elif i<(len(S)-1):
FAStages.append(FullAdder(x1=X1(i), x2=X2(i), cin=c_FAiFAj[i-1], s=S_i[i], c=c_FAiFAj[i]))
else:
FAStages.append(FullAdder(x1=X1(i), x2=X2(i), cin=c_FAiFAj[i-1], s=S_i[i], c=c))
#concat bus wires to single bitvec wire
@always_comb
def ConCatSum():
S.next=concat(*reversed(S_i))
return instances()
BitSize=16
np.random.seed(12)
X1TVals=np.random.randint(0, 2**BitSize, 2**4)
X2TVals=np.random.randint(0, 2**BitSize, 2**4)
cinTVals=np.random.randint(0,2,2**4)
print(X1TVals)
print(X2TVals)
print(cinTVals)
Peeker.clear()
X1=Signal(intbv(0)[BitSize:]); Peeker(X1, 'X1')
X2=Signal(intbv(0)[BitSize:]); Peeker(X2, 'X2')
cin=Signal(bool(0)); Peeker(cin, 'cin')
S=Signal(intbv(0)[BitSize:]); Peeker(S, 'S')
c=Signal(bool(0)); Peeker(c, 'c')
DUT = RippleCarryNBit(X1=X1, X2=X2, cin=cin, S=S, c=c)
def RippleCarryNBit_TB():
@instance
def stimules():
for i in range(len(X1TVals)):
X1.next=int(X1TVals[i])
X2.next=int(X2TVals[i])
cin.next=int(cinTVals[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, RippleCarryNBit_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('cin', 'X1', 'X2', 'S', 'c')
RCNBData=Peeker.to_dataframe()
RCNBData=RCNBData[['cin', 'X1', 'X2', 'S', 'c']]
RCNBData
def RCNB_SCheck(cin, X1, X2):
S=cin+X1+X2
if S<2**BitSize:
return S
else:
return S-(2**BitSize)
def RCNB_cCheck(cin, X1, X2):
S=cin+X1+X2
if S<2**BitSize:
return 0
else:
return 1
RCNBData['SCheck']=RCNBData.apply(lambda row: RCNB_SCheck(row['cin'], row['X1'], row['X2']), axis=1)
RCNBData['cCheck']=RCNBData.apply(lambda row: RCNB_cCheck(row['cin'], row['X1'], row['X2']), axis=1)
RCNBData
SumCheck=(RCNBData['S']==RCNBData['SCheck']).all()
CarryCheck=(RCNBData['c']==RCNBData['cCheck']).all()
print(f'Sum Result Check: {SumCheck}; Carry Result Check: {CarryCheck}')
#DUT.convert()
#VerilogTextReader('RippleCarryNBit');
@block
def CLAS1(x1, x2, p, g):
Carry Look Ahead Adder Stage One
Input:
x1(bool): x1 input
x2(bool): x2 input
Output:
p(bool):carry propagate
g(bool):carry generate
@always_comb
def logic():
p.next=x1^x2
g.next=x1&x2
return instances()
Peeker.clear()
x1=Signal(bool(0)); Peeker(x1, 'x1')
x2=Signal(bool(0)); Peeker(x2, 'x2')
p=Signal(bool(0)); Peeker(p, 'p')
g=Signal(bool(0)); Peeker(g, 'g')
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals))]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals))]
DUT=CLAS1(x1, x2, p, g)
def CLAS1_TB():
    myHDL only Testbench for module "CLAS1"
@instance
def stimules():
for i, j in zip(x1TVals, x2TVals):
x1.next=i; x2.next=j
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, CLAS1_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x1', 'x2', 'p', 'g')
CLAS1Data=Peeker.to_dataframe()
CLAS1Data=CLAS1Data[['x1', 'x2', 'p', 'g']]
CLAS1Data
CLAS1Data['pCheck']=CLAS1Data.apply(lambda row: row['x1']^row['x2'], axis=1)
CLAS1Data['gCheck']=CLAS1Data.apply(lambda row: row['x1']&row['x2'], axis=1)
CLAS1Data
pCheck=(CLAS1Data['p']==CLAS1Data['pCheck']).all()
gCheck=(CLAS1Data['g']==CLAS1Data['gCheck']).all()
print(f'p Result Check: {pCheck}; g Result Check: {gCheck}')
DUT.convert()
VerilogTextReader('CLAS1');
#generate test values
x1TVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals))]
x2TVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals))]
#create single value representation of Test Inputs
x1TVal=intbv(int(''.join([str(i) for i in x1TVals]), 2))[len(x1TVals):]
x2TVal=intbv(int(''.join([str(i) for i in x2TVals]), 2))[len(x2TVals):]
print(f'x1Test: {x1TVals}, {x1TVal}, {[int(i) for i in x1TVal]}')
print(f'x2Test: {x2TVals}, {x2TVal}, {[int(i) for i in x2TVal]}')
@block
def CLAS1_TBV():
myHDL -> Verilog Testbench for module "CLAS1"
x1=Signal(bool(0))
x2=Signal(bool(0))
p=Signal(bool(0))
g=Signal(bool(0))
#test stimuli
x1TVals=Signal(x1TVal)
x2TVals=Signal(x2TVal)
@always_comb
def print_data():
print(x1, x2, p, g)
DUT=CLAS1(x1, x2, p, g)
@instance
def stimules():
for i in range(len(x1TVal)):
x1.next=x1TVals[i]; x2.next=x2TVals[i]
yield delay(1)
raise StopSimulation()
return instances()
TB=CLAS1_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('CLAS1_TBV');
@block
def CLAS3(p, c, s):
    Carry Look Ahead Adder Stage 3
Input:
p(bool):carry propagate
c(bool): carry
Output:
s(bool): sum
@always_comb
def logic():
s.next=p^c
return instances()
Peeker.clear()
p=Signal(bool(0)); Peeker(p, 'p')
c=Signal(bool(0)); Peeker(c, 'c')
s=Signal(bool(0)); Peeker(s, 's')
#generate test values
pTVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals))]
cTVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals))]
DUT=CLAS3(p, c, s)
def CLAS3_TB():
myHDL only Testbench for module "CLAS3"
@instance
def stimules():
for i, j in zip(pTVals, cTVals):
p.next=i; c.next=j
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, CLAS3_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('p', 'c', 's')
CLAS3Data=Peeker.to_dataframe()
CLAS3Data=CLAS3Data[['p', 'c', 's']]
CLAS3Data
CLAS3Data['sCheck']=CLAS3Data.apply(lambda row: row['p']^row['c'], axis=1)
CLAS3Data
sCheck=(CLAS3Data['s']==CLAS3Data['sCheck']).all()
print(f's Result Check: {sCheck}')
DUT.convert()
VerilogTextReader('CLAS3');
#generate test values
pTVals=[i[0] for i in itertools.product(BitVals, repeat=len(BitVals))]
cTVals=[i[1] for i in itertools.product(BitVals, repeat=len(BitVals))]
#create single value representation of Test Inputs
pTVal=intbv(int(''.join([str(i) for i in pTVals]), 2))[len(pTVals):]
cTVal=intbv(int(''.join([str(i) for i in cTVals]), 2))[len(cTVals):]
print(f'pTest: {pTVals}, {pTVal}, {[int(i) for i in pTVal]}')
print(f'cTest: {cTVals}, {cTVal}, {[int(i) for i in cTVal]}')
@block
def CLAS3_TBV():
myHDL -> Verilog Testbench for module "CLAS3"
p=Signal(bool(0))
c=Signal(bool(0))
s=Signal(bool(0))
#test stimuli
pTVals=Signal(pTVal)
cTVals=Signal(cTVal)
@always_comb
def print_data():
print(p, c, s)
DUT=CLAS3(p, c, s)
@instance
def stimules():
for i in range(len(pTVal)):
p.next=pTVals[i]; c.next=cTVals[i]
yield delay(1)
raise StopSimulation()
return instances()
TB=CLAS3_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('CLAS3_TBV');
def CLAS2B4_TestFunc(P, G, cin):
P=[int(i) for i in reversed(bin(P,4))]
G=[int(i) for i in reversed(bin(G,4))]
C=[]
for i in range(4):
if i==0:
C.append(int(G[i]) | (P[i]&cin))
else:
C.append(int(G[i]) | (P[i] &C[i-1]))
C=int(''.join(str(i) for i in C[::-1]),2)
return C
CLAS2B4_TestFunc(8,6,1)
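# Hand-worked check of the call above (assuming myhdl's bin() zero-pads to the given
# width): P=8 -> '1000', G=6 -> '0110', cin=1 gives, LSB first,
# C0 = g0|(p0&cin) = 0, C1 = g1|(p1&C0) = 1, C2 = g2|(p2&C1) = 1, C3 = g3|(p3&C2) = 1,
# so the expected result is 0b1110 = 14.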
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get Numpy on the cluster
Step2: DataFrame --> GraphFrame
Step3: Loading the Data - Edges
Step4: Make the graph
Step5: Graph Analytics
Step6: Put a feature in pandas - put it back into pandas if resorting to sklearn
Step7: Extra Queries - THIS IS WHERE THINGS BREAK
Step8: There are two implementations of PageRank.
Step9: Porting the graphframe (if necessary)
Step10: ## Run the K-Means algorithm -----------------------------------------------------
Step11: PowerIteration Clustering -----------------------------------------------------
Step12: Model Evaluation on Local Machine
Step13: Conditional entropy analyses on clusters
Step14: Supervised learning techniques are rigid and don't take changing patterns into account.
Step15: DOESN'T WORK
Step16: FINISH LATER
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
# from pyspark import SparkContext, SparkConf
# from pyspark.mllib.clustering import KMeans, KMeansModel
# # http://spark.apache.org/docs/2.0.0/api/python/pyspark.mllib.html#pyspark.mllib.evaluation.RankingMetrics
# from pyspark.mllib.evaluation import BinaryClassificationMetrics, MulticlassMetrics, RankingMetrics
import numpy as np
from numpy import array
import pandas as pd
from random import randrange
from math import sqrt
!ls -l
sc
conf = SparkConf()
# sc = SparkContext("local", "Simple App")
# sc.stop(sc)
# sc.getOrCreate("local", "Simple App")
transactions = pd.read_csv('transactions.csv')
transactions['Date'] = pd.to_datetime(transactions['Date'],unit='ms') #coerce date format
transactions[:3]
print('transactions columns: ', list(transactions.columns))
#establish sql context
from pyspark.sql import SQLContext
# Instantiate SQL_SparkContext object
SQL_CONTEXT = SQLContext(sc)
nodes = pd.read_csv('transactions.csv',
usecols=['Source', 'isTainted'],
low_memory=True,
iterator=True,
chunksize=1000)
# Concatenate chunks into list & convert to DataFrame
nodes = pd.DataFrame(pd.concat(list(nodes), ignore_index=True))
# Create a Vertex DataFrame with unique ID column "id"
nodes.columns = ['id', 'isTainted']
print('nodes columns: ', list(nodes.columns))
NODES = SQL_CONTEXT.createDataFrame(nodes)
NODES.take(3)
# Parallelize -----------------------------------------------------
# VERTICES = sc.parallelize(nodes)
# data = array([observation_group_1, observation_group_2, observation_group_3]).reshape(n_in_each_group*3, 5)
# data
# data = sc.parallelize(data)
# data.getNumPartitions()
# # Generate the observations -----------------------------------------------------
# n_in_each_group = 10 # how many observations in each group
# n_of_feature = 5 # how many features we have for each observation
# observation_group_1=[]
# for i in range(n_in_each_group*n_of_feature):
# observation_group_1.append(randrange(5, 8))
# observation_group_2=[]
# for i in range(n_in_each_group*n_of_feature):
# observation_group_2.append(randrange(55, 58))
# observation_group_3=[]
# for i in range(n_in_each_group*n_of_feature):
# observation_group_3.append(randrange(105, 108))
# del GENES_DF_CLEAN, GENES_DF, GENES
edges = pd.read_csv('transactions.csv',
usecols=['Amount $', 'Date', 'Destination', 'Source', 'Transaction ID', 'isTainted'],
low_memory=True,
iterator=True,
chunksize=1000)
# Concatenate chunks into list & convert to DataFrame
edges = pd.DataFrame(pd.concat(list(edges), ignore_index=True))
cols = ['Source', 'Destination', 'isTainted', 'Amount $', 'Date', 'Transaction ID']
edges = edges[cols]
print('edges columns: ', list(edges.columns))
# Create an Edge DataFrame with "src" and "dst" columns
edges.columns = ["src", "dst", "relationship", 'Amount $', 'Date', 'TxID']
print('edges columns: ', list(edges.columns))
EDGES = SQL_CONTEXT.createDataFrame(edges)
EDGES.take(3)
from graphframes import *
type(NODES), type(EDGES)
# Next we finally create the graph:
g = GraphFrame(NODES, EDGES)
# Query: Count the number of "isTainted" connections in the graph.
print(g.vertices.count())
print(g.edges.count())
print(g.degrees.count())
print(g.vertices.filter("isTainted = 5").count())
# Query: Get in-degree of each vertex.
print("Vertex in-Degree -----------------------------------------------------------------------")
df = g.inDegrees.sort('inDegree', ascending=False).toPandas()
transactions = transactions.merge(df,
left_on='Source',
right_on='id',)
transactions.head()
print("Vertex out-Degree ----------------------------------------------------------------------")
df = g.outDegrees.sort('outDegree', ascending=False).toPandas()
transactions = transactions.merge(df,
left_on='Source',
right_on='id')
transactions.head()
print("Vertex degree --------------------------------------------------------------------------")
df = g.degrees.sort('degree', ascending=False).toPandas()
transactions = transactions.merge(df,
left_on='Source',
right_on='id')
transactions.head()
transactions = transactions.drop(['id_x', 'id_y', 'id'], axis = 1)
# hits no space left on device
print("Triangle Count -------------------------------------------------------------------------")
RESULTS = g.triangleCount()
df = RESULTS.select("id", "count").toPandas()
transactions = transactions.merge(df,
left_on='Source',
right_on='id')
transactions.head()
print("Label Propagation ----------------------------------------------------------------------")
# Convergence is not guaranteed
df = g.labelPropagation(maxIter=10).toPandas()
# transactions = transactions.merge(df,
# left_on='Source',
# right_on='id')
df.head()
# # Run the other PageRank implementation, and show results.
# results = g.pageRank(resetProbability=0.01, maxIter=20)
# results.vertices.select("id", "pagerank").show()
# Run PageRank algorithm (takes awhile), and show results.
print("PageRank -------------------------------------------------------------------------------")
df = g.pageRank(resetProbability=0.15, tol=0.01)\
.vertices.sort('pagerank', ascending=False).toPandas()
# transactions = transactions.merge(df,
# left_on='Source',
# right_on='id')
df.head()
print("Find Shortest Paths w.r.t. Tainted Wallets ---------------------------------------------------")
SHORTEST_PATH = g.shortestPaths(landmarks=["5"])
df = SHORTEST_PATH.select("id", "distances").toPandas()
# transactions = transactions.merge(df,
# left_on='Source',
# right_on='id')
df.head()
print('transactions columns: ', list(transactions.columns))
cols = ['inDegree', 'outDegree', 'degree']
tmp_transactions = transactions[cols]
def string_to_int(value):
try:
return int(value)
except ValueError:
return None
for column in tmp_transactions.columns:
tmp_transactions[column] = tmp_transactions[column].apply(string_to_int)
tmp_transactions.info()
data = sc.parallelize(tmp_transactions)
data.getNumPartitions()
# for MultinomialNB classification
tmp_transactions.to_csv('tmp-txns-no-headers.txt', header=False, index = True)
!ls
# Build the K-Means model
# the initializationMode can also be "k-means||" or set by users.
clusters = KMeans.train(data, 2, maxIterations=3, initializationMode="random")
# Collect the clustering result
result=data.map(lambda point: clusters.predict(point)).collect()
print(result)
# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
center = clusters.centers[clusters.predict(point)]
return sqrt(sum([x**2 for x in (point - center)]))
WSSSE = data.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
from pyspark.mllib.clustering import PowerIterationClustering, PowerIterationClusteringModel
# Load and parse the data
data = sc.textFile("tmp-txns-no-headers.txt")
similarities = data.map(lambda line: tuple([float(x) for x in line.split(',')]))
type(similarities)
# Cluster the data into two classes using PowerIterationClustering
model = PowerIterationClustering.train(similarities, k=2, maxIterations=10, initMode='random')
results = []
assignments = model.assignments().collect()
assignments[:3]
for x in assignments:
results.append([x.id, x.cluster])
results[:3]
# Save and load model
# model.save(sc, "PICModel")
# sameModel = PowerIterationClusteringModel.load(sc, "PICModel")
results_df = pd.DataFrame(results, index=None, columns = ['id', 'cluster'])
# DOUBLE BRACKETS CREATE A DF
merged = transactions[['isTainted']].merge(results_df, left_index=True, right_on='id')
import pickle
with open("cluster-results.pkl", 'rb') as picklefile:
results = pickle.load(picklefile)
results.head(3)
def convert_to_true(value):
if value == 5:
return 1
else:
return 0
results['isTainted'] = results['isTainted'].apply(convert_to_true)
results[results['isTainted'] == 1].head(3)
from sklearn.metrics import classification_report, homogeneity_score, completeness_score
y_true = results['isTainted']
y_pred = results['cluster']
target_names = ['TaintedWallet', 'Wallet']
print(classification_report(y_true, y_pred, target_names=target_names))
# homogeneity: each cluster contains only members of a single class.
homogeneity_score(y_true, y_pred)
# completeness: all members of a given class are assigned to the same cluster.
completeness_score(y_true, y_pred)
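# Sketch of one more clustering score: V-measure is the harmonic mean of homogeneity
# and completeness, so it summarises both conditional-entropy views in a single number.
from sklearn.metrics import v_measure_score
v_measure_score(y_true, y_pred)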
# http://spark.apache.org/docs/2.0.0/api/python/pyspark.mllib.html#pyspark.mllib.classification.NaiveBayesModel
from pyspark.mllib.classification import NaiveBayes, NaiveBayesModel
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import SparseVector
data = [LabeledPoint(0.0, [0.0, 0.0]),
LabeledPoint(0.0, [0.0, 1.0]),
LabeledPoint(1.0, [1.0, 0.0])]
model = NaiveBayes.train(sc.parallelize(data))
model.predict(array([0.0, 1.0]))
model.predict(array([1.0, 0.0]))
model.predict(sc.parallelize([[1.0, 0.0]])).collect()
sparse_data = [LabeledPoint(0.0, SparseVector(2, {1: 0.0})),
LabeledPoint(0.0, SparseVector(2, {1: 1.0})),
LabeledPoint(1.0, SparseVector(2, {0: 1.0}))]
model = NaiveBayes.train(sc.parallelize(sparse_data))
model.predict(SparseVector(2, {1: 1.0}))
model.predict(SparseVector(2, {0: 1.0}))
import os, tempfile
path = tempfile.mkdtemp()
model.save(sc, path)
sameModel = NaiveBayesModel.load(sc, path)
sameModel.predict(SparseVector(2, {0: 1.0})) == model.predict(SparseVector(2, {0: 1.0}))
from shutil import rmtree
try:
rmtree(path)
except OSError:
pass
from graphframes.examples import Graphs
g = Graphs(sqlContext).friends() # Get example graph
# Search for pairs of vertices with edges in both directions between them.
motifs = g.find("(a)-[e]->(b); (b)-[e2]->(a)")
motifs.show()
# More complex queries can be expressed by applying filters.
motifs.filter("b.age > 30").show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adding volumes from HathiTrust
Step2: Working with Extracted Features
Step3: Now we'll feed these paths into the FeatureReader method which will create a FeatureReader object
Step4: We can now cycle through properties of the FeatureReader
Step5: Let's try to pull out some more metadata about these titles, using the Volume object in FeatureReader. We'll get the HT URL, year, and page count for each volume.
Step6: The source_institution tells us where the volumes were scanned
Step7: Let's take a closer look at the first volume
Step8: The tokens_per_page method will give us the words in the volume
Step9: We can easily plot the number of tokens across every page of the book
Step10: Now let's look at some specific pages, using the Page object in FeatureReader. We'll take the first 200 pages in this volume
Step11: Then we'll index the 200th page
Step12: We can get a list of the tokens with the tokenlist method
Step13: We can do this for every page and get a huge table!
Step14: Challenge
| <ASSISTANT_TASK:>
Python Code:
%%capture
!pip install htrc-feature-reader
import os
from htrc_features import FeatureReader
from datascience import *
import pandas as pd
%matplotlib inline
!rm -rf local-folder/
!rm -rf local-folder/
!rm -rf data/coo*
!rm -rf data/mdp*
!rm -rf data/uc1*
download_output = !htid2rsync --f data/vol_ids_5.txt | rsync -azv --files-from=- data.sharc.hathitrust.org::features/ data/
download_output
suffix = '.json.bz2'
file_paths = ['data/' + path for path in download_output if path.endswith(suffix)]
file_paths
fr = FeatureReader(file_paths)
for vol in fr.volumes():
print(vol.id, vol.title, vol.author)
print()
for vol in fr.volumes():
print("URL: %s Year: %s Page count: %s " % (vol.handle_url, vol.year, vol.page_count))
for vol in fr.volumes():
print("Source institution: %s " % (vol.source_institution))
vol = fr.first()
vol.title
tokens = vol.tokens_per_page()
tokens.head()
tokens.plot()
pages = [page for page in vol.pages()]
page_200 = pages[199]
print("The body has %s lines, %s empty lines, and %s sentences" % (page_200.line_count(),
page_200.empty_line_count(),
page_200.sentence_count()))
Table.from_df(page_200.tokenlist().reset_index())
all_pages_meta = Table.from_df(pd.concat([p.tokenlist().reset_index() for p in pages]))
all_pages_meta.show(10)
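# Sketch for exploring the big table (assumes the default tokenlist columns 'token' and
# 'count'): the most frequent tokens across these 200 pages.
all_pages_meta.to_df().groupby('token')['count'].sum().sort_values(ascending=False).head(10)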
# Challenge cell (left unfinished in the source). One possible direction, assuming the
# standard Volume API: inspect the volume's token count table.
vol.tokenlist().head()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: Example 2
| <ASSISTANT_TASK:>
Python Code:
def rgb2hsv(rgb_img):
import numpy as np
r = rgb_img[:,:,0].ravel()
g = rgb_img[:,:,1].ravel()
b = rgb_img[:,:,2].ravel()
hsv_map = map(rgb2hsvmap, r, g, b)
hsv_img = np.array(list(hsv_map)).reshape(rgb_img.shape)
return hsv_img
def rgb2hsvmap(r, g, b):
maxc = max(r, g, b)
minc = min(r, g, b)
# HSV: Hue, Saturation, Value
# H: position in the spectrum
# S: color saturation ("purity")
# V: color brightness
v = maxc
if minc == maxc:
return 0.0, 0.0, v
s = (maxc-minc) / maxc
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
if r == maxc:
h = bc-gc
elif g == maxc:
h = 2.0+rc-bc
else:
h = 4.0+gc-rc
h = (h/6.0) % 1.0
    # hue is the angle (0 to 360 degrees) on the colour circle where the tone (H)
    # is defined; saturation and value range from 0.0 to 1.0 (lowest to highest).
    # Note that this function returns h normalized to the [0, 1) interval.
return h, s, v
#adapted from python source code (Lib/colorsys.py - rgb_to_hsv(r, g, b))
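# Quick sanity check (illustrative): the per-pixel conversion above should agree
# with the standard-library colorsys implementation it was adapted from.
import colorsys
for _rgb in [(1.0, 0.0, 0.0), (0.2, 0.4, 0.4), (0.95, 0.5, 0.05)]:
    assert all(abs(a - b) < 1e-12
               for a, b in zip(rgb2hsvmap(*_rgb), colorsys.rgb_to_hsv(*_rgb)))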
testing = (__name__ == "__main__")
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import matplotlib.colors as mcolor
rgb_img = mpimg.imread('/etc/jupyterhub/ia898_1s2017/ia898/data/boat.tif')
print('rgb_img.shape: ' , rgb_img.shape)
plt.figure(1)
plt.title('RGB')
plt.imshow(rgb_img)
r = rgb_img[:,:,0].ravel()
g = rgb_img[:,:,1].ravel()
b = rgb_img[:,:,2].ravel()
print('r: ', r)
print('g: ',g)
print('b: ',b)
hsv_img = rgb2hsv(rgb_img)
print('hsv_img.shape: ' , hsv_img.shape)
h = hsv_img[:,:,0].ravel()
s = hsv_img[:,:,1].ravel()
v = hsv_img[:,:,2].ravel()
plt.figure(2)
plt.title('rgb2hsv')
plt.imshow(hsv_img)
mcolor_hsv = mcolor.rgb_to_hsv(rgb_img)
plt.figure(3)
plt.title('mcolor.rgb_to_hsv')
plt.imshow(mcolor_hsv)
print('h: ', h)
print('s: ',s)
print('v: ',v)
if testing:
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
pixels = np.array([
[[243., 114., 25.],
[111., 19., 115.],
[43., 188., 69.]],
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]
])
print("Shape: ", pixels.shape)
hsv_img = rgb2hsv(pixels)
fig = plt.figure(figsize=(10, 10))
fig.add_subplot(1, 3, 1)
plt.imshow(pixels)
plt.title('Original image RGB')
fig.add_subplot(1, 3, 2)
plt.imshow(hsv_img, cmap='hsv')
plt.title('Image HSV rgb2hsv')
mcolor_hsv = mcolor.rgb_to_hsv(pixels.copy())
fig.add_subplot(1, 3, 3)
plt.imshow(mcolor_hsv, cmap='hsv')
plt.title('Image HSV mcolor')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2nd
Step2: Full batch gradient descent with unnormalized features
Step3: Full batch gradient descent with feature normalization
Step4: Mini-Batch Gradient Descent with unnormalized features
Step5: Mini-Batch Gradient Descent with normalized features
Step6: Mini-Batch Gradient Descent with normalized features and batch_size = 1
Step7: Mini-Batch Gradient Descent with UNnormalized features and batch_size = 1
Step8: Can we get away with not using normalized features and still achieve good performance?
| <ASSISTANT_TASK:>
Python Code:
# Let's try to find the equation y = 2 * x
# We have 6 examples:- (x,y) = (0.1,0.2), (1,2), (2, 4), (3, 6), (-4, -8), (25, 50)
# Let's assume y is a linear combination of the features x, x^2, x^3
# We know that Normal Equation gives us the exact solution so let's first use that
N = 6
x = np.array([0.1, 1, 2, 3, -4, 25])
y = np.array([0.2, 2, 4, 6, -8, 50])
x_2 = x**2
x_3 = x**3
X = np.ones((N, 4))
X[:,1] = x
X[:,2] = x_2
X[:,3] = x_3
_, D = np.shape(X)
regularization_strength = 0.0
XtX = (X.T).dot(X)
I = np.eye(D, dtype=float)
XtX_RI = XtX + regularization_strength*I
XtY = (X.T).dot(y)
w = np.linalg.solve(XtX_RI, XtY)
y_pred = X.dot(w)
loss = np.sqrt(np.mean((y_pred-y)**2))
# As expected w ~ [0 2 0 0]
print("W : ", w)
print("Predicted Y : ", y_pred)
print("RMS loss : ", loss)
# Setup the training and test tensors
# Let's generate 400 examples
N = 400
x = np.random.uniform(low=-75, high=100, size=N)
y = 2*x
X = np.zeros((N, 3))
X[:,0] = x
X[:,1] = x**2
X[:,2] = x**3
X_tensor = Variable(torch.FloatTensor(X), requires_grad=False)
y_tensor = Variable(torch.FloatTensor(y), requires_grad=False)
# Test set initialization
X_test = np.zeros((3, 3))
X_test[:,0] = np.array([-2.5, 0.0, 19])
X_test[:,1] = X_test[:,0]**2
X_test[:,2] = X_test[:,0]**3
X_test_tsr = Variable(torch.FloatTensor(X_test), requires_grad=False)
# Normalized features
X_min = torch.min(X_tensor,0)
X_max = torch.max(X_tensor,0)
X_mean = torch.mean(X_tensor,0)
X_sub_mean = X_tensor-X_mean.expand_as(X_tensor)
X_max_min = X_max[0]-X_min[0] + 1e-7
X_norm_tsr = X_sub_mean/X_max_min.expand_as(X_sub_mean)
X_test_sub_mean = X_test_tsr-X_mean.expand_as(X_test_tsr)
X_test_norm_tsr = X_test_sub_mean/X_max_min.expand_as(X_test_sub_mean)
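# Quick check (illustrative): after centering and scaling, each normalized feature
# should have (near) zero mean and a value range of roughly 1.
print(torch.mean(X_norm_tsr, 0))
print(torch.max(X_norm_tsr, 0)[0] - torch.min(X_norm_tsr, 0)[0])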
# A simple linear Neural Net which is y = w_1*x + w_2*x^2 + w_3*x^3 + b
import math
from time import time
def RunLinearNNTraining(X, y, learning_rate=1e-5, epochs=5000, batch_size=None, X_test=None,
use_optimizer=None, adam_betas=(0.9, 0.999)):
# Neural Net
X_size = X.size()
N = X_size[0]
D_in = X_size[1]
D_out = 1
model = torch.nn.Linear(D_in, D_out)
loss_fn = torch.nn.MSELoss(size_average=True)
# Choose Optimizer
optimizer = None
if use_optimizer:
if use_optimizer == 'SGD':
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
elif use_optimizer == 'Adam':
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=adam_betas)
elif use_optimizer == 'Adadelta':
optimizer = torch.optim.Adadelta(model.parameters(), lr=learning_rate)
elif use_optimizer == 'ASGD':
optimizer = torch.optim.ASGD(model.parameters(), lr=learning_rate)
elif use_optimizer == 'RMSprop':
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
elif use_optimizer == 'Adagrad':
optimizer = torch.optim.Adagrad(model.parameters(), lr=learning_rate)
else:
print("Invalid Optimizer")
use_optimizer=None
losses = []
loss = None
start_time = time()
for t in range(epochs):
num_batches = 1
X_batch = None
y_batch = None
if batch_size:
num_batches = math.ceil(N/batch_size)
else:
batch_size = N
shuffle = torch.randperm(N)
for b in range(num_batches):
lower_index = b*batch_size
upper_index = min(lower_index+batch_size, N)
indices = shuffle[lower_index:upper_index]
X_batch = X[indices]
y_batch = y[indices]
y_pred = model(X_batch)
loss = loss_fn(y_pred, y_batch)
if use_optimizer:
optimizer.zero_grad()
loss.backward()
optimizer.step()
else:
# Zero the gradients before running the backward pass.
model.zero_grad()
loss.backward()
# Update the weights using gradient descent. Each parameter is a Variable, so
# we can access its data and gradients like we did before.
for param in model.parameters():
param.data -= learning_rate * param.grad.data
losses.append(loss.data[0])
end_time = time()
time_taken = end_time - start_time
print("Time Taken = %.2f seconds " % time_taken)
print("Final Loss: ", loss.data[0])
print("Parameters [w_1, w_2, w_3, b]: ")
for param in model.parameters():
print(param.data[0])
# plot Loss vs Iterations
plt.plot(losses)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.show()
# Predictions on Test set
    if X_test is not None:  # explicit None check; truth-testing a multi-element tensor is ambiguous
print("Test:")
print("X_test: ", X_test.data)
print("y_pred: ", model(X_test))
# use_optimizer can be Adam, RMSprop, Adadelta, ASGD, SGD, Adagrad
RunLinearNNTraining(X=X_tensor, y=y_tensor, batch_size=None, epochs=25000, learning_rate=1e-3,
X_test=X_test_tsr, use_optimizer='Adam')
# Using Adam Optimizer
RunLinearNNTraining(X=X_norm_tsr, y=y_tensor, batch_size=None, epochs=25000, learning_rate=1e-1,
X_test=X_test_norm_tsr, use_optimizer='Adam')
# Look SGD is faster than Adam now. It's curve is more steeper.
RunLinearNNTraining(X=X_norm_tsr, y=y_tensor, batch_size=None, epochs=25000, learning_rate=1e-1,
X_test=X_test_norm_tsr, use_optimizer='SGD')
# SGD doesn't work with unnormalized features so using Adam.
RunLinearNNTraining(X=X_tensor, y=y_tensor, batch_size=25, epochs=3000, learning_rate=1e-3,
X_test=X_test_tsr, use_optimizer='Adam')
# SGD is better than Adam in this case so using SGD
RunLinearNNTraining(X=X_norm_tsr, y=y_tensor, batch_size=25, epochs=3000, learning_rate=1e-1,
X_test=X_test_norm_tsr, use_optimizer='SGD')
# SGD is better than Adam in this case so using SGD
RunLinearNNTraining(X=X_norm_tsr, y=y_tensor, batch_size=1, epochs=25, learning_rate=1e-1,
X_test=X_test_norm_tsr, use_optimizer='SGD')
RunLinearNNTraining(X=X_tensor, y=y_tensor, batch_size=1, epochs=150, learning_rate=1e-3,
X_test=X_test_tsr, use_optimizer='Adam')
RunLinearNNTraining(X=X_tensor, y=y_tensor, batch_size=None, epochs=25000, learning_rate=1e-3,
X_test=X_test_tsr, use_optimizer='Adam', adam_betas=(0.99, 0.999))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logging into your account on CGC
Step2: Finding the project
Step3: Listing bam files in the project
Step4: Get the app to run
Step5: Set up the number of files per task
Step6: Set up draft tasks and perform analysis
| <ASSISTANT_TASK:>
Python Code:
import sevenbridges as sbg
from sevenbridges.errors import SbgError
from sevenbridges.http.error_handlers import *
import re
import datetime
import binpacking
print("SBG library imported.")
print sbg.__version__
prof = 'default'
config_file = sbg.Config(profile=prof)
api = sbg.Api(config=config_file,error_handlers=[rate_limit_sleeper,maintenance_sleeper,general_error_sleeper])
print "Api Configured!!"
print "Api Username : ", api.users.me()
my_project = api.projects.get(id='anellor1/omfgene')
for m in my_project.get_members():
print m
print my_project.billing_group
#Listing all files in a project
files = [f for f in api.files.query(project=my_project,limit=100).all() if f.name.endswith(".bam")]
print len(files)
app = api.apps.get(id="anellor1/omfgene/omfgene-wrapper")
print app.name
input_port_app = 'input_file'
import math
inputs = {}
num_files = len(files)
num_hosts = 10 #instances in workflow
jobs_per_host = 36 #threads in per instance
minutes_per_run = 25 #estimated
runs_per_hour = 300 / minutes_per_run # number of runs that fit in a 300-minute window (chosen as a multiple of minutes_per_run)
tasks_per_run = runs_per_hour * jobs_per_host * num_hosts
num_runs = int(math.ceil(num_files*1.0 / tasks_per_run))
print num_files,tasks_per_run,num_runs
for run_index in range(num_runs):
low_bound = run_index * tasks_per_run
high_bound = min((run_index + 1) * tasks_per_run, num_files)
#print low_bound,high_bound
input_files = files[low_bound:high_bound]
task_name = "OMFGene task Run:{}, NumFiles:{}, TimeStamp {}".format(run_index+1, high_bound-low_bound, datetime.datetime.now())
inputs[input_port_app] = input_files
my_task = api.tasks.create(name=task_name, project=my_project,
app=app, inputs=inputs, run=False)
if my_task.errors:
print(my_task.errors())
else:
print('Your task %s is ready to go' % my_task.name)
    # Comment out the following statement to create draft tasks without running them.
my_task.run()
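# A possible way to check on the submitted tasks afterwards (illustrative sketch;
# assumes the sevenbridges tasks.query API with a status attribute on each task).
for queued_task in api.tasks.query(project=my_project, limit=100).all():
    print("%s\t%s" % (queued_task.name, queued_task.status))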
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Functions
Step4: This code starts by splitting the (2D Array) histograms into the pixel values (column 0) and pixel counts (column 1), and normalizes the counts by dividing by the total count (the last value).
Step6: This code runs a reduceRegion() on each image to generate a cumulative histogram, making sure that only pixels that are in both images are included when computing the histograms (just in case there might be a cloud or something else just outside of the high-res image, that might distort the results). It's not important to generate that histogram with a really high fidelity, so the maxPixels argument is set to use less than "4 tiles" of data (256 * 256 * 4) and bestEffort is turned on, to make the computation run fast. When these arguments are set this way, the reduceRegion() function will try to figure out how many pixels it would need to process at the given scale, and if that's greater than the maxPixels value, it computes a lower scale to keep the total number of pixels below maxPixels. That all means you need to specify a scale, but it doesn't matter what it is as it'll be mostly ignored.
Step7: The previous functions are generically useful for performing image histogram matching; they are not specific to any particular image or image collection. They are the building blocks for the procedure.
Step9: Next prepare a Landsat 8 collection by applying a cloud/shadow mask, scaling, and selecting/renaming RGB bands.
Step10: Now find the Landsat images within 32 days of the SkySat image, sort the images by cloud cover and then mosaic them. Use the result as the reference image to histogram-match the SkySat image to.
Step11: Results
Step12: Define a folium map object, add layers, and display it. Until you zoom in really far, it's nearly impossible to tell where the Landsat image ends and the SkySat image begins.
| <ASSISTANT_TASK:>
Python Code:
import ee
ee.Authenticate()
ee.Initialize()
def lookup(source_hist, target_hist):
  """Creates a lookup table to make a source histogram match a target histogram.

  Args:
    source_hist: The histogram to modify. Expects the Nx2 array format produced by ee.Reducer.autoHistogram.
    target_hist: The histogram to match to. Expects the Nx2 array format produced by ee.Reducer.autoHistogram.

  Returns:
    A dictionary with 'x' and 'y' properties that respectively represent the x and y
    array inputs to the ee.Image.interpolate function.
  """
# Split the histograms by column and normalize the counts.
source_values = source_hist.slice(1, 0, 1).project([0])
source_counts = source_hist.slice(1, 1, 2).project([0])
source_counts = source_counts.divide(source_counts.get([-1]))
target_values = target_hist.slice(1, 0, 1).project([0])
target_counts = target_hist.slice(1, 1, 2).project([0])
target_counts = target_counts.divide(target_counts.get([-1]))
# Find first position in target where targetCount >= srcCount[i], for each i.
def make_lookup(n):
return target_values.get(target_counts.gte(n).argmax())
lookup = source_counts.toList().map(make_lookup)
return {'x': source_values.toList(), 'y': lookup}
def histogram_match(source_img, target_img, geometry):
  """Performs histogram matching for 3-band RGB images by forcing the histogram CDF of source_img to match target_img.

  Args:
    source_img: A 3-band ee.Image to be color matched. Must have bands named 'R', 'G', and 'B'.
    target_img: A 3-band ee.Image for color reference. Must have bands named 'R', 'G', and 'B'.
    geometry: An ee.Geometry that defines the region to generate RGB histograms for.
      It should intersect both source_img and target_img inputs.

  Returns:
    A copy of src_img color-matched to target_img.
  """
args = {
'reducer': ee.Reducer.autoHistogram(**{'maxBuckets': 256, 'cumulative': True}),
'geometry': geometry,
'scale': 1, # Need to specify a scale, but it doesn't matter what it is because bestEffort is true.
'maxPixels': 65536 * 4 - 1,
'bestEffort': True
}
# Only use pixels in target that have a value in source (inside the footprint and unmasked).
source = source_img.reduceRegion(**args)
target = target_img.updateMask(source_img.mask()).reduceRegion(**args)
return ee.Image.cat(
source_img.select(['R']).interpolate(**lookup(source.getArray('R'), target.getArray('R'))),
source_img.select(['G']).interpolate(**lookup(source.getArray('G'), target.getArray('G'))),
source_img.select(['B']).interpolate(**lookup(source.getArray('B'), target.getArray('B')))
).copyProperties(source_img, ['system:time_start'])
def find_closest(target_image, image_col, days):
  """Filter images in a collection by date proximity and spatial intersection to a target image.

  Args:
    target_image: An ee.Image whose observation date is used to find near-date images in
      the provided image_col image collection. It must have a 'system:time_start' property.
    image_col: An ee.ImageCollection to filter by date proximity and spatial intersection
      to the target_image. Each image in the collection must have a 'system:time_start'
      property.
    days: A number that defines the maximum number of days difference allowed between
      the target_image and images in the image_col.

  Returns:
    An ee.ImageCollection that has been filtered to include those images that are within the
    given date proximity to target_image and intersect it spatially.
  """
# Compute the timespan for N days (in milliseconds).
range = ee.Number(days).multiply(1000 * 60 * 60 * 24)
filter = ee.Filter.And(
ee.Filter.maxDifference(range, 'system:time_start', None, 'system:time_start'),
ee.Filter.intersects('.geo', None, '.geo'))
closest = (ee.Join.saveAll('matches', 'measure')
.apply(ee.ImageCollection([target_image]), image_col, filter))
return ee.ImageCollection(ee.List(closest.first().get('matches')))
geometry = ee.Geometry.Polygon(
[[[-155.97117211519446, 20.09006980142336],
[-155.97117211519446, 19.7821681268256],
[-155.73256280122962, 19.7821681268256],
[-155.73256280122962, 20.09006980142336]]], None, False)
skysat = (ee.Image('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB/s01_20161020T214047Z')
.clip(geometry))
def prep_landsat(image):
  """Apply cloud/shadow mask and select/rename Landsat 8 bands."""
qa = image.select('pixel_qa')
return (image.updateMask(
qa.bitwiseAnd(1 << 3).eq(0).And(qa.bitwiseAnd(1 << 5).eq(0)))
.divide(10000)
.select(['B4', 'B3', 'B2'], ['R', 'G', 'B'])
.copyProperties(image, ['system:time_start']))
# Get the landsat collection, cloud masked and scaled to surface reflectance.
landsat_col = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
.filterBounds(geometry)
.map(prep_landsat))
reference = find_closest(skysat, landsat_col, 32).sort('CLOUD_COVER').mosaic()
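# Illustrative check: how many Landsat scenes fell within the 32-day window
# (getInfo() makes a small synchronous request to the Earth Engine servers).
print('Landsat scenes within 32 days:', find_closest(skysat, landsat_col, 32).size().getInfo())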
result = histogram_match(skysat, reference, geometry)
import folium
def add_ee_layer(self, ee_image_object, vis_params, name):
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
        attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name=name,
overlay=True,
control=True
).add_to(self)
folium.Map.add_ee_layer = add_ee_layer
lon, lat, zoom = -155.79584, 19.99866, 13
map_matched = folium.Map(location=[lat, lon], zoom_start=zoom)
vis_params_refl = {'min': 0, 'max': 0.25}
vis_params_dn = {'min': 0, 'max': 255}
map_matched.add_ee_layer(reference, vis_params_refl, 'Landsat-8 reference')
map_matched.add_ee_layer(skysat, vis_params_dn, 'SkySat source')
map_matched.add_ee_layer(result, vis_params_refl, 'SkySat matched')
display(map_matched.add_child(folium.LayerControl()))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exception handling with lists
| <ASSISTANT_TASK:>
Python Code:
for num in range(10,20): #to iterate between 10 to 20
for i in range(2,num): #to iterate on the factors of the number
if num%i == 0: #to determine the first factor
j=num/i #to calculate the second factor
print '%d equals %d * %d' % (num,i,j)
break #to move to the next number, the #first FOR
else: # else part of the loop
print num, 'is a prime number'
i = 2
while(i < 100):
j = 2
while(j <= (i/j)):
if not(i%j):
break
j = j + 1
if (j > i/j) :
print i, " is prime"
i = i + 1
print "Good bye!"
count = 0
while (count < 9):
print 'The count is:', count
count = count + 1
print "Good bye!"
count = 0
while count < 5:
print count, " is less than 5"
count = count + 1
else:
print count, " is not less than 5"
for i in range(2):
print(i)
else:
print('completed for-loop')
for i in range(2):
print(i)
break
else:
print('completed for-loop')
i = 0
while i < 2:
print(i)
i += 1
else:
print('in else')
i = 0
while i < 2:
print(i)
i += 1
break
else:
print('completed while-loop')
# don't run this code
# flag = 10
#while (flag): print 'Given flag is really true!'
#print "Good bye!"
a_list=[0,1,2,3,4,5]
try:
print('first element:', a_list[0])
except IndexError:
print('raised IndexError')
else:
print('no error in try-block')
try:
print('third element:', a_list[2])
except IndexError:
print('raised IndexError')
else:
print('no error in try-block')
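# An access that actually goes out of range, to show the except branch firing
# (illustrative addition -- a_list only has six elements).
try:
    print('tenth element:', a_list[9])
except IndexError:
    print('raised IndexError')
else:
    print('no error in try-block')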
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to use Theano so that we can use its auto-differentiation, since I'm too lazy to work out the derivatives of these functions by hand!
Step2: Classification
Step3: Apply
Step4: Probability regression
Step5: Regression
Step6: Regression
Step7: Show the classification logits and probabilities in tables
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import numpy as np
import pandas as pd
import torch, torch.nn as nn, torch.nn.functional as F
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
EPSILON = 1.0e-12
SAVE_PLOTS = True
# Softmax function
def f_softmax(logits, axis=1):
ex = np.exp(logits)
return ex / ex.sum(axis=axis, keepdims=True)
# Classification loss: negative log of softmax
def f_clf_loss(logits, axis=1):
t_logits = torch.tensor(logits, requires_grad=True)
# Compute negative log-softmax
return -F.log_softmax(t_logits, dim=axis).detach().numpy()
# Gadient of classification loss
def f_clf_loss_grad(logits, target, axis=1):
t_logits = torch.tensor(logits, requires_grad=True)
t_targets = torch.tensor(target, dtype=torch.int64)
# Compute cross_entropy loss
loss = F.cross_entropy(t_logits, t_targets, reduction='sum')
# Sum and compute gradient
loss.backward()
return t_logits.grad.detach().numpy()
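# Sanity check (illustrative): with a summed cross-entropy, the gradient w.r.t.
# the logits should equal softmax(logits) - one_hot(target), row by row.
_logits = np.array([[0.0, 2.0], [0.0, -1.0]], dtype=np.float32)
_targets = np.ones(2, dtype=np.int64)
_analytic = f_softmax(_logits) - np.eye(2, dtype=np.float32)[_targets]
print(np.allclose(f_clf_loss_grad(_logits, _targets), _analytic, atol=1e-6))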
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.001, 1.0/128.0).astype(np.float32)
# Build an array of logit vector, where each logit vector is for a 2-class problem with the values [0, x[i]]
logits = np.stack([np.zeros_like(xs), xs], axis=1)
# Use softmax to compute predicted probabilities:
clf_q = f_softmax(logits)
# Compute negative log loss of softmax:
clf_loss = f_clf_loss(logits)
# Compute gradient of negative log loss of softmax with respect to the logits:
clf_loss_grad = f_clf_loss_grad(logits, np.ones_like(xs))
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
line_loss, = plt.plot(xs, clf_loss[:, 1], label=r'loss $c =-ln(q)$')
line_loss_grad, = plt.plot(xs, clf_loss_grad[:, 1], label=r'grad loss $\frac{dc}{dX_1}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$X_1$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
plt.legend(handles=[line_p, line_q])
plt.xlabel(r'$X_1$')
plt.savefig('clf_loss_0.png', dpi=600)
plt.close()
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
line_loss, = plt.plot(xs, clf_loss[:, 1], label=r'loss $c =-ln(q)$')
plt.legend(handles=[line_p, line_q, line_loss])
plt.xlabel(r'$X_1$')
plt.savefig('clf_loss_1.png', dpi=600)
plt.close()
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
line_loss, = plt.plot(xs, clf_loss[:, 1], label=r'loss $c =-ln(q)$')
line_loss_grad, = plt.plot(xs, clf_loss_grad[:, 1], label=r'grad loss $\frac{dc}{dX_1}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$X_1$')
plt.savefig('clf_loss_2.png', dpi=600)
plt.close()
# Sigmoid definition
def f_sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
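# Relationship check (illustrative): sigmoid(x) equals the second softmax output
# for the two-class logit vector [0, x], tying this section to the classifier above.
_x = np.array([-2.0, 0.0, 3.0], dtype=np.float32)
print(np.allclose(f_sigmoid(_x),
                  f_softmax(np.stack([np.zeros_like(_x), _x], axis=1))[:, 1]))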
def f_prob_regr_loss(q, p):
return
# Binary cross-entropy of sigmoid
def f_prob_regr_loss(logits, target):
t_logits = torch.tensor(logits, requires_grad=True)
t_target = torch.tensor(target, requires_grad=True)
loss = -F.binary_cross_entropy_with_logits(t_logits, t_target)
return loss
# Gadient of binary cross-entropy of sigmoid
def f_prob_regr_loss_grad(logits, target, axis=0):
t_logits = torch.tensor(logits, requires_grad=True)
t_target = torch.tensor(target)
# Compute binary cross-entropy of sigmoid
loss = -F.binary_cross_entropy_with_logits(t_logits, t_target)
# Sum and compute gradient
loss.sum().backward()
return t_logits.grad.detach().numpy()
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.0, 0.01).astype(np.float32)
# Use sigmoid to compute predicted probabilities:
prob_regr_q = [f_sigmoid(x) for x in xs]
# Compute binary cross-entropy of sigmoid:
prob_regr_loss = [f_prob_regr_loss(x, 1.0) for x in xs]
# Compute gradient of binary cross-entropy of sigmoid with respect to xs:
prob_regr_loss_grad = [f_prob_regr_loss_grad(x, 1.0) for x in xs]
plt.figure(figsize=(5,5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, prob_regr_q, label=r'$q=sigmoid(X)$')
line_loss, = plt.plot(xs, prob_regr_loss, label=r'loss $c =-ln(q)p-ln(1-q)(1-p)$')
line_loss_grad, = plt.plot(xs, prob_regr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, prob_regr_q, label=r'$q=sigmoid(X)$')
line_loss, = plt.plot(xs, prob_regr_loss, label=r'loss $c =-ln(q)p-ln(1-q)(1-p)$')
line_loss_grad, = plt.plot(xs, prob_regr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.savefig('prob_regr_loss_2.png', dpi=600)
plt.close()
# Function for computing squared error loss
def f_regr_sqr_loss(a, b):
return (a - b)**2
# Gadient of squared error loss
def f_regr_sqr_loss_grad(x_hat, x):
t_x_hat = torch.tensor(x_hat, requires_grad=True)
t_x = torch.tensor(x, requires_grad=True)
# Compute squared error
loss = -(t_x_hat - t_x)**2
# Sum and compute gradient
loss.sum().backward()
return t_x.grad.detach().numpy()
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.0, 0.01).astype(np.float32)
# Use squared error loss:
regr_sqr_loss = [f_regr_sqr_loss(x, 0.0) for x in xs]
# Compute gradient of squared error with respect to x-hat
regr_sqr_loss_grad = [f_regr_sqr_loss_grad(x, 0.0) for x in xs]
plt.figure(figsize=(5,5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = (x - \hat{x})^2$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = (x - \hat{x})^2$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.savefig('regr_sqr_loss_2.png', dpi=600)
plt.close()
# Use PyTorch `smooth_l1_loss`
def f_regr_huber_loss(predictions, targets, delta=1.0):
t_predictions = torch.tensor(predictions, requires_grad=True)
t_targets = torch.tensor(targets, requires_grad=True)
# Compute squared error
return F.smooth_l1_loss(t_predictions, t_targets)
def f_regr_huber_loss_grad(predictions, targets, delta=1.0):
t_predictions = torch.tensor(predictions, requires_grad=True)
t_targets = torch.tensor(targets, requires_grad=True)
# Compute squared error
loss = F.smooth_l1_loss(t_predictions, t_targets)
# Sum and compute gradient
loss.sum().backward()
return t_predictions.grad.detach().numpy()
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.0, 0.01).astype(np.float32)
# Use Huber loss:
regr_sqr_loss = [f_regr_huber_loss(x, 0.0) for x in xs]
# Compute gradient of Huber loss with respect to x-hat
regr_sqr_loss_grad = [f_regr_huber_loss_grad(x, 0.0) for x in xs]
plt.figure(figsize=(5,5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = huber(x, \hat{x})$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = huber(x, \hat{x})$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.savefig('regr_huber_loss_2.png', dpi=600)
plt.close()
data=np.array(logits)
pd.DataFrame(columns=['$X_0$', '$X_1$'], data=data[::128])
data=np.append(np.array(logits), np.array(clf_q), axis=1)
pd.DataFrame(columns=['$X_0$', '$X_1$', '$q_0$', '$q_1$'], data=data[::128])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: noteStore
Step2: my .__MASTER note__ is actually pretty complex....so parsing it and adding to it will take some effort. But let's give it a try.
Step3: Getting tags by name
Step4: things to do with tags
Step5: synchronization state
Step6: list notebooks and note counts
Step8: compute distribution of note sizes
Step10: creating a new note with content and tag
Step11: Move Evernote tags to have a different parent
| <ASSISTANT_TASK:>
Python Code:
import settings
from evernote.api.client import EvernoteClient
dev_token = settings.authToken
client = EvernoteClient(token=dev_token, sandbox=False)
userStore = client.get_user_store()
user = userStore.getUser()
print user.username
import EvernoteWebUtil as ewu
ewu.init(settings.authToken)
ewu.user.username
# getting notes for a given notebook
import datetime
from itertools import islice
notes = islice(ewu.notes_metadata(includeTitle=True,
includeUpdated=True,
includeUpdateSequenceNum=True,
notebookGuid=ewu.notebook(name=':CORE').guid), None)
for note in notes:
print note.title, note.updateSequenceNum, datetime.datetime.fromtimestamp(note.updated/1000.)
# let's read my __MASTER note__
# is it possible to search notes by title?
[(n.guid, n.title) for n in ewu.notes(title=".__MASTER note__")]
import settings
from evernote.api.client import EvernoteClient
dev_token = settings.authToken
client = EvernoteClient(token=dev_token, sandbox=False)
userStore = client.get_user_store()
user = userStore.getUser()
noteStore = client.get_note_store()
print user.username
userStore.getUser()
noteStore.getNoteContent('ecc59d05-c010-4b3b-a04b-7d4eeb7e8505')
import lxml
ewu.tag('#1-Now')
sorted(ewu.tag_counts_by_name().items(), key=lambda x: -x[1])[:10]
tags = ewu.noteStore.listTags()
tags_by_name = dict([(tag.name, tag) for tag in tags])
tag_counts_by_name = ewu.tag_counts_by_name()
tags_by_guid = ewu.tags_by_guid()
# figure out which tags have no notes attached and possibly delete them -- say if they don't have children tags
# oh -- don't delete them willy nilly -- some have organizational purposes
set(tags_by_name) - set(tag_counts_by_name)
# calculated tag_children -- tags that have children
from collections import defaultdict
tag_children = defaultdict(list)
for tag in tags:
if tag.parentGuid is not None:
tag_children[tag.parentGuid].append(tag)
[tags_by_guid[guid].name for guid in tag_children.keys()]
for (guid, children) in tag_children.items():
print tags_by_guid[guid].name
for child in children:
print "\t", child.name
# find all notes for a given tag
[n.title for n in ewu.notes_metadata(includeTitle=True, tagGuids=[tags_by_name['#1-Now'].guid])]
ewu.notebook(name='Action Pending').guid
[n.title for n in ewu.notes_metadata(includeTitle=True,
notebookGuid=ewu.notebook(name='Action Pending').guid,
tagGuids=[tags_by_name['#1-Now'].guid])]
# with a GUID, you can get the current state of a tag
# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_getTag
# not super useful for me since I'm already pulling a list of all tags in order to map names to guids
ewu.noteStore.getTag(ewu.tag(name='#1-Now').guid)
# create a tag
# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_createTag
# must pass name; optional to pass
from evernote.edam.type.ttypes import Tag
ewu.noteStore.createTag(Tag(name="happy happy2!", parentGuid=None))
ewu.tag(name="happy happy2!", refresh=True)
# expunge tag
# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_expungeTag
ewu.noteStore.expungeTag(ewu.tag("happy happy2!").guid)
# find all notes for a given tag and notebook
action_now_notes = list(ewu.notes_metadata(includeTitle=True,
notebookGuid=ewu.notebook(name='Action Pending').guid,
tagGuids=[tags_by_name['#1-Now'].guid]))
[(n.guid, n.title) for n in action_now_notes ]
# get all tags for a given note
import datetime
from itertools import islice
notes = list(islice(ewu.notes_metadata(includeTitle=True,
includeUpdated=True,
includeUpdateSequenceNum=True,
notebookGuid=ewu.notebook(name=':PROJECTS').guid), None))
plus_tags_set = set()
for note in notes:
tags = ewu.noteStore.getNoteTagNames(note.guid)
plus_tags = [tag for tag in tags if tag.startswith("+")]
plus_tags_set.update(plus_tags)
print note.title, note.updateSequenceNum, datetime.datetime.fromtimestamp(note.updated/1000.), \
len(plus_tags) == 1
syncstate = ewu.noteStore.getSyncState()
syncstate
syncstate.fullSyncBefore, syncstate.updateCount
import datetime
datetime.datetime.fromtimestamp(syncstate.fullSyncBefore/1000.)
ewu.notebookcounts()
k = list(ewu.sizes_of_notes())
print len(k)
plt.plot(k)
sort(k)
plt.plot(sort(k))
plt.plot([log(i) for i in sort(k)])
"""
Make a histogram of normally distributed random numbers and plot the
analytic PDF over it
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
fig = plt.figure()
ax = fig.add_subplot(111)
# the histogram of the data
n, bins, patches = ax.hist(x, 50, normed=1, facecolor='green', alpha=0.75)
# hist uses np.histogram under the hood to create 'n' and 'bins'.
# np.histogram returns the bin edges, so there will be 50 probability
# density values in n, 51 bin edges in bins and 50 patches. To get
# everything lined up, we'll compute the bin centers
bincenters = 0.5*(bins[1:]+bins[:-1])
# add a 'best fit' line for the normal PDF
y = mlab.normpdf( bincenters, mu, sigma)
l = ax.plot(bincenters, y, 'r--', linewidth=1)
ax.set_xlabel('Smarts')
ax.set_ylabel('Probability')
#ax.set_title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=100,\ \sigma=15$')
ax.set_xlim(40, 160)
ax.set_ylim(0, 0.03)
ax.grid(True)
plt.show()
plt.hist(k)
plt.hist([log10(i) for i in k], 50)
# calculate Notebook name -> note count
nb_guid_dict = dict([(nb.guid, nb) for nb in ewu.all_notebooks()])
nb_name_dict = dict([(nb.name, nb) for nb in ewu.all_notebooks()])
ewu.notes_metadata(includeTitle=True)
import itertools
g = itertools.islice(ewu.notes_metadata(includeTitle=True, includeUpdateSequenceNum=True, notebookGuid=nb_name_dict["Action Pending"].guid), 10)
list(g)
len(_)
# grab content of a specific note
# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_getNote
# params: guid, withContent, withResourcesData, withResourcesRecognition, withResourcesAlternateData
note = ewu.noteStore.getNote('a49d531e-f3f8-4e72-9523-e5a558f11d87', True, False, False, False)
note_content = ewu.noteStore.getNoteContent('a49d531e-f3f8-4e72-9523-e5a558f11d87')
note_content
import EvernoteWebUtil as ewu
reload(ewu)
from evernote.edam.type.ttypes import Note
note_template = """<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">
<en-note style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">
{0}
</en-note>"""
note = Note()
note.title = "hello from ipython"
note.content = note_template.format("hello from Canada 2")
note.tagNames = ["hello world"]
note = ewu.noteStore.createNote(note)
note.guid
assert False
from evernote.edam.type.ttypes import Tag
import EvernoteWebUtil as ewu
tags = ewu.noteStore.listTags()
tags_by_name = dict([(tag.name, tag) for tag in tags])
print tags_by_name['+JoinTheAction'], tags_by_name['.Active Projects']
# update +JoinTheAction tag to put it underneath .Active Projects
jta_tag = tags_by_name['+JoinTheAction']
jta_tag.parentGuid = tags_by_name['.Active Projects'].guid
result = ewu.noteStore.updateTag(Tag(name=jta_tag.name, guid=jta_tag.guid, parentGuid=tags_by_name['.Active Projects'].guid))
print result
# mark certain project as inactive
result = ewu.noteStore.updateTag(Tag(name="+Relaunch unglue.it",
guid=tags_by_name["+Relaunch unglue.it"].guid,
parentGuid=tags_by_name['.Inactive Projects'].guid))
# getTag?
ewu.noteStore.getTag(tags_by_name['+JoinTheAction'].guid)
tags_by_name["+Relaunch unglue.it"]
result = ewu.noteStore.updateTag(ewu.authToken, Tag(name="+Relaunch unglue.it",
guid=tags_by_name["+Relaunch unglue.it"].guid,
parentGuid=tags_by_name['.Inactive Projects'].guid))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import TensorFlow and enable Eager execution
Step2: Load the MNIST dataset
Step3: Use tf.data to create batches and shuffle the dataset
Step4: Wire up the generative and inference network with tf.keras.Sequential
Step5: Define the loss function and the optimizer
Step6: Training
Step7: Display an image using the epoch number
Step8: Generate a GIF of all the saved images.
  Step9: To download the animation from Colab, uncomment the code below
| <ASSISTANT_TASK:>
Python Code:
# to generate gifs
!pip install imageio
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
# Normalizing the images to the range of [0., 1.]
train_images /= 255.
test_images /= 255.
# Binarization
train_images[train_images >= .5] = 1.
train_images[train_images < .5] = 0.
test_images[test_images >= .5] = 1.
test_images[test_images < .5] = 0.
TRAIN_BUF = 60000
BATCH_SIZE = 100
TEST_BUF = 10000
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(TRAIN_BUF).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices(test_images).shuffle(TEST_BUF).batch(BATCH_SIZE)
class CVAE(tf.keras.Model):
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.inference_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation=tf.nn.relu),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation=tf.nn.relu),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.generative_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation=tf.nn.relu),
tf.keras.layers.Conv2DTranspose(
filters=32,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation=tf.nn.relu),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME"),
]
)
def sample(self, eps=None):
if eps is None:
eps = tf.random_normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.inference_net(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
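    # Reparameterization trick: draw eps ~ N(0, I) and scale/shift it, so the
    # sampling step stays differentiable with respect to mean and logvar.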
eps = tf.random_normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.generative_net(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
def compute_loss(model, x):
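  # Single-sample Monte Carlo estimate of the negative ELBO:
  # -E[log p(x|z) + log p(z) - log q(z|x)], averaged over the batch.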
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
def compute_gradients(model, x):
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
return tape.gradient(loss, model.trainable_variables), loss
optimizer = tf.train.AdamOptimizer(1e-4)
def apply_gradients(optimizer, gradients, variables, global_step=None):
optimizer.apply_gradients(zip(gradients, variables), global_step=global_step)
epochs = 100
latent_dim = 50
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random_normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_input):
predictions = model.sample(test_input)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
generate_and_save_images(model, 0, random_vector_for_generation)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
gradients, loss = compute_gradients(model, train_x)
apply_gradients(optimizer, gradients, model.trainable_variables)
end_time = time.time()
if epoch % 1 == 0:
loss = tfe.metrics.Mean()
for test_x in test_dataset:
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, '
'time elapse for current epoch {}'.format(epoch,
elbo,
end_time - start_time))
generate_and_save_images(
model, epoch, random_vector_for_generation)
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(epochs) # Display images
with imageio.get_writer('cvae.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('cp cvae.gif cvae.gif.png')
display.Image(filename="cvae.gif.png")
#from google.colab import files
#files.download('cvae.gif')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'datetime': ['2021-04-10 01:00:00', '2021-04-10 02:00:00', '2021-04-10 03:00:00', '2021-04-10 04:00:00', '2021-04-10 05:00:00'],
'col1': [25, 25, 25, 50, 100],
'col2': [50, 50, 100, 50, 100],
'col3': [50, 50, 50, 100, 100]})
df['datetime'] = pd.to_datetime(df['datetime'])
import numpy as np
def g(df):
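    # state = col1 when both col2 and col3 exceed 50; otherwise the row-wise
    # sum of col1, col2 and col3.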
df['state'] = np.where((df['col2'] > 50) & (df['col3'] > 50), df['col1'], df[['col1', 'col2', 'col3']].sum(axis=1))
return df
df = g(df.copy())
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: Define our function
    Step2: Bisection method
    Step3: False position (regula falsi) method
    Step4: Newton-Raphson method
    Step5: Secant method
    Step6: Exercise 2
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
f = lambda x: np.sin(x)
df = lambda x: np.cos(x)
my_stop = 1.e-4
my_nitmax = 100000
my_cdif = 1.e-6
def bi(a, b, fun, eps, nitmax):
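    # Bisection: repeatedly halve the bracketing interval [a, b], keeping the half
    # where fun changes sign, until |fun(c)| < eps or nitmax iterations are reached.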
c = (a + b) / 2
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
if fun(a)*fun(c) < 0: b = c
else: a = c
c = (a + b) / 2
it += 1
return it, c, fun(c)
bi(2, 4, f, my_stop, my_nitmax)
def regfalsi(a, b, fun, eps, nitmax):
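    # False position: c is where the chord through (a, fun(a)) and (b, fun(b))
    # crosses the x-axis; keep the sub-interval over which fun changes sign.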
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
if fun(a) * fun(c) < 0: b = c
else: a = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c)
regfalsi(2, 4, f, my_stop, my_nitmax)
def newtraph(c0, fun, dfun, eps, nitmax):
c = c0
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
c = c - fun(c) / dfun(c)
it += 1
return it, c, fun(c)
newtraph(2, f, df, my_stop, my_nitmax)
def secant(a, b, fun, eps, nitmax):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
a = b
b = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c)
secant(2, 4, f, my_stop, my_nitmax)
def bi2(a, b, fun, eps, nitmax, cdif):
c = (a + b) / 2
c_prev = a
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
if fun(a)*fun(c) < 0: b = c
else: a = c
c_prev = c
c = (a + b) / 2
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
bi2(2, 4, f, my_stop, my_nitmax, my_cdif)
def regfalsi2(a, b, fun, eps, nitmax, cdif):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
c_prev = c + cdif/2
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
if fun(a) * fun(c) < 0: b = c
else: a = c
c_prev = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
regfalsi2(2, 4, f, my_stop, my_nitmax, my_cdif)
def newtraph2(c0, fun, dfun, eps, nitmax, cdif):
c = c0
c_prev = c + cdif/2
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
c_prev = c
c = c - fun(c) / dfun(c)
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
newtraph2(2, f, df, my_stop, my_nitmax, my_cdif)
def secant2(a, b, fun, eps, nitmax, cdif):
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
c_prev = c + cdif/2
it = 1
while not((np.abs(fun(c)) < eps and np.abs(c - c_prev) < cdif) or it > nitmax):
a = b
b = c
c_prev = c
c = (a * fun(b) - b * fun(a)) / (fun(b) - fun(a))
it += 1
return it, c, fun(c), c_prev, fun(c_prev), (c - c_prev)
secant2(2, 4, f, my_stop, my_nitmax, my_cdif)
from scipy.misc import derivative
def newtraphd(c0, fun, eps, nitmax):
c = c0
dfun = lambda x: derivative(fun, x, 0.0001)
it = 1
while np.abs(fun(c)) > eps and it < nitmax:
c = c - (fun(c) / dfun(c))
it += 1
return it, c, fun(c), dfun(c)
f2 = lambda x, k: x + np.e ** (-k * x**2) * np.cos(x)
f2_k1 = lambda x: f2(x, 1)
# analytic derivative of f2 for k = 1: d/dx [x + e^(-x^2) cos(x)] = 1 - e^(-x^2) (2x cos(x) + sin(x))
df2_k1 = lambda x: 1 - np.e ** (-x**2) * (2 * x * np.cos(x) + np.sin(x))
newtraph(0, f2_k1, df2_k1, 1e-4, my_nitmax)
f2_k50 = lambda x: f2(x, 50)
for i in range(1, 10+1): print(newtraphd(0, f2_k50, 1e-4, i))
for i in range(1, 10+1): print(newtraphd(-0.1, f2_k50, 1e-4, i))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example shows how scan is used
    Step3: Generating inputs and targets
    Step13: Defining the RNN model from scratch
    Step16: Defining an optimizer
    Step18: Training
Step19: Now we can train our model
Step21: After running tensorboard --logdir ./logdir and navigating to http
| <ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
import tensorflow as tf
def fn(previous_output, current_input):
return previous_output + current_input
elems = tf.Variable([1.0, 2.0, 2.0, 2.0])
elems = tf.identity(elems)
initializer = tf.constant(0.0)
out = tf.scan(fn, elems, initializer=initializer)
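# tf.scan feeds each previous output back into fn, so `out` is the running
# cumulative sum of `elems`: [1., 3., 5., 7.]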
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
print(sess.run(out))
%reset -f
from __future__ import division, print_function
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
import matplotlib.pyplot as plt
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import functional_ops
def input_target_generator(min_duration=5, max_duration=50):
    """Generate toy input, target sequences.
Each input sequence has values that are drawn from the standard normal
distribution, and each target sequence is the corresponding cumulative sum.
Sequence durations are chosen at random using a discrete uniform
distribution over `[min_duration, max_duration]`.
Args:
min_duration: A positive integer. The minimum sequence duration.
max_duration: A positive integer. The maximum sequence duration.
Yields:
A tuple,
inputs: A 2-D float32 NumPy array with shape `[duration, 1]`.
      targets: A 2-D float32 NumPy array with shape `[duration, 1]`.
    """
while True:
duration = np.random.randint(min_duration, max_duration)
inputs = np.random.randn(duration).astype(np.float32)
targets = np.cumsum(inputs).astype(np.float32)
yield inputs.reshape(-1, 1), targets.reshape(-1, 1)
class Model(object):
def __init__(self, hidden_layer_size, input_size, target_size, init_scale=0.1):
        """Create a vanilla RNN.
Args:
hidden_layer_size: An integer. The number of hidden units.
input_size: An integer. The number of inputs per time step.
target_size: An integer. The number of targets per time step.
init_scale: A float. All weight matrices will be initialized using
            a uniform distribution over [-init_scale, init_scale].
        """
self.hidden_layer_size = hidden_layer_size
self.input_size = input_size
self.target_size = target_size
self.init_scale = init_scale
self._inputs = tf.placeholder(tf.float32, shape=[None, input_size],
name='inputs')
self._targets = tf.placeholder(tf.float32, shape=[None, target_size],
name='targets')
initializer = tf.random_uniform_initializer(-init_scale, init_scale)
with tf.variable_scope('model', initializer=initializer):
self._states, self._predictions = self._compute_predictions()
self._loss = self._compute_loss()
def _vanilla_rnn_step(self, h_prev, x):
        """Vanilla RNN step.
Args:
h_prev: A 1-D float32 Tensor with shape `[hidden_layer_size]`.
x: A 1-D float32 Tensor with shape `[input_size]`.
Returns:
            The updated state `h`, with the same shape as `h_prev`.
        """
h_prev = tf.reshape(h_prev, [1, self.hidden_layer_size])
x = tf.reshape(x, [1, self.input_size])
with tf.variable_scope('rnn_block'):
W_h = tf.get_variable(
'W_h', shape=[self.hidden_layer_size, self.hidden_layer_size])
W_x = tf.get_variable(
'W_x', shape=[self.input_size, self.hidden_layer_size])
b = tf.get_variable('b', shape=[self.hidden_layer_size],
initializer=tf.constant_initializer(0.0))
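            # Vanilla RNN recurrence: h = tanh(h_prev @ W_h + x @ W_x + b)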
h = tf.tanh( tf.matmul(h_prev, W_h) + tf.matmul(x, W_x) + b )
h = tf.reshape(h, [self.hidden_layer_size], name='h')
return h
def _compute_predictions(self):
        """Compute vanilla-RNN states and predictions."""
with tf.variable_scope('states'):
initial_state = tf.zeros([self.hidden_layer_size],
name='initial_state')
states = tf.scan(self._vanilla_rnn_step, self.inputs,
initializer=initial_state, name='states')
with tf.variable_scope('predictions'):
W_pred = tf.get_variable(
'W_pred', shape=[self.hidden_layer_size, self.target_size])
b_pred = tf.get_variable('b_pred', shape=[self.target_size],
initializer=tf.constant_initializer(0.0))
predictions = tf.add(tf.matmul(states, W_pred), b_pred, name='predictions')
return states, predictions
def _compute_loss(self):
        """Compute l2 loss between targets and predictions."""
with tf.variable_scope('loss'):
loss = tf.reduce_mean((self.targets - self.predictions)**2, name='loss')
return loss
@property
def inputs(self):
        """A 2-D float32 placeholder with shape `[dynamic_duration, input_size]`."""
return self._inputs
@property
def targets(self):
        """A 2-D float32 placeholder with shape `[dynamic_duration, target_size]`."""
return self._targets
@property
def states(self):
        """A 2-D float32 Tensor with shape `[dynamic_duration, hidden_layer_size]`."""
return self._states
@property
def predictions(self):
        """A 2-D float32 Tensor with shape `[dynamic_duration, target_size]`."""
return self._predictions
@property
def loss(self):
        """A 0-D float32 Tensor."""
return self._loss
class Optimizer(object):
def __init__(self, loss, initial_learning_rate, num_steps_per_decay,
decay_rate, max_global_norm=1.0):
        """Create a simple optimizer.
This optimizer clips gradients and uses vanilla stochastic gradient
descent with a learning rate that decays exponentially.
Args:
loss: A 0-D float32 Tensor.
initial_learning_rate: A float.
num_steps_per_decay: An integer.
decay_rate: A float. The factor applied to the learning rate
every `num_steps_per_decay` steps.
max_global_norm: A float. If the global gradient norm is less than
this, do nothing. Otherwise, rescale all gradients so that
            the global norm becomes `max_global_norm`.
        """
trainables = tf.trainable_variables()
grads = tf.gradients(loss, trainables)
grads, _ = tf.clip_by_global_norm(grads, clip_norm=max_global_norm)
grad_var_pairs = zip(grads, trainables)
global_step = tf.Variable(0, trainable=False, dtype=tf.int32)
learning_rate = tf.train.exponential_decay(
initial_learning_rate, global_step, num_steps_per_decay,
decay_rate, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
self._optimize_op = optimizer.apply_gradients(grad_var_pairs,
global_step=global_step)
@property
def optimize_op(self):
        """An Operation that takes one optimization step."""
return self._optimize_op
def train(sess, model, optimizer, generator, num_optimization_steps,
logdir='./logdir'):
    """Train.
Args:
sess: A Session.
model: A Model.
optimizer: An Optimizer.
generator: A generator that yields `(inputs, targets)` tuples, with
`inputs` and `targets` both having shape `[dynamic_duration, 1]`.
num_optimization_steps: An integer.
      logdir: A string. The log directory.
    """
if os.path.exists(logdir):
shutil.rmtree(logdir)
tf.scalar_summary('loss', model.loss)
ema = tf.train.ExponentialMovingAverage(decay=0.99)
update_loss_ema = ema.apply([model.loss])
loss_ema = ema.average(model.loss)
tf.scalar_summary('loss_ema', loss_ema)
summary_op = tf.merge_all_summaries()
summary_writer = tf.train.SummaryWriter(logdir=logdir, graph=sess.graph)
sess.run(tf.initialize_all_variables())
for step in xrange(num_optimization_steps):
inputs, targets = generator.next()
loss_ema_, summary, _, _ = sess.run(
[loss_ema, summary_op, optimizer.optimize_op, update_loss_ema],
{model.inputs: inputs, model.targets: targets})
summary_writer.add_summary(summary, global_step=step)
print('\rStep %d. Loss EMA: %.6f.' % (step+1, loss_ema_), end='')
generator = input_target_generator()
model = Model(hidden_layer_size=256, input_size=1, target_size=1, init_scale=0.1)
optimizer = Optimizer(model.loss, initial_learning_rate=1e-2, num_steps_per_decay=15000,
decay_rate=0.1, max_global_norm=1.0)
sess = tf.Session()
train(sess, model, optimizer, generator, num_optimization_steps=45000)
def test_qualitatively(sess, model, generator, num_examples=5, figsize=(10, 3)):
    """Test qualitatively.
Args:
sess: A Session.
model: A Model.
generator: A generator that yields `(inputs, targets)` tuples, with
`inputs` and `targets` both having shape `[dynamic_duration, 1]`.
num_examples: An integer. The number of examples to plot.
      figsize: A tuple `(width, height)`, the size of each example's figure.
    """
for i in xrange(num_examples):
inputs, targets = generator.next()
predictions = sess.run(model.predictions, {model.inputs: inputs})
fig, ax = plt.subplots(nrows=2, sharex=True, figsize=figsize)
ax[0].plot(inputs.flatten(), label='inputs')
ax[0].legend()
ax[1].plot(targets.flatten(), label='targets')
ax[1].plot(predictions.flatten(), 'o', label='predictions')
ax[1].legend()
test_qualitatively(sess, model, generator, figsize=(8, 2))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running Example
    Step2: When using data sets it's good practice to cite the originators of the data; you can get information about the source of the data from data['citation']
Step3: The data consists of all the male and female sprinting data for 100m, 200m and 400m since 1896 (six outputs in total). The ouput information can be found from
Step4: In GPy we deal with multiple output data in a particular way. We specify the output we are interested in for modelling as an additional input. So whilst for this data, normally, the only input would be the year of the event. We additionally have an input giving the index of the output we are modelling. This can be seen from examining data['X'].
Step5: Now let's plot the data
Step6: In the plot above red is women's events, blue is men's. Squares are 400 m, crosses 200m and circles 100m. Not all events were run in all years, for example the women's 400 m only started in 1964.
Step7: The coregionalize matrix, $\mathbf{B}$, is itself is constructed from two other matrices, $\mathbf{B} = \mathbf{W}\mathbf{W}^\top + \text{diag}(\boldsymbol{\kappa})$. This allows us to specify a low rank form for the coregionalization matrix. However, for our first example we want to specify that the matrix $\mathbf{B}$ is not constrained to have a low rank form.
Step8: Note here that the rank we specify is that of the $\mathbf{W}\mathbf{W}^\top$ part. When this part is combined with the diagonal matrix from $\mathbf{\kappa}$ the matrix $\mathbf{B}$ is totally general. This covariance function can now be used in a standard Gaussian process regression model. Let's build the model and optimize it.
Step9: We can plot the results using the ability to 'fix inputs' in the model.plot() function. We can specify that column 1 should be fixed to event number 2 by passing fixed_inputs = [(1, 2)] to the model. To plot the results for all events on the same figure we also specify fignum=1 in the loop as below.
Step10: There is a lot we can do with this model. First of all, each of the races is a different length, so the series have a different mean. We can include another coregionalization term to deal with the mean. Below we do this and reduce the rank of the coregionalization matrix to 1.
Step11: This is a simple form of the linear model of coregionalization. Note how confident the model is about what the women's 400 m performance would have been. You might feel that the model is being over confident in this region. Perhaps we are forcing too much sharing of information between the sprints. We could return to the intrinsic coregionalization model and force the two base covariance functions to share the same coregionalization matrix.
Step12: Exercise 1
    Step13: Predictions in the multioutput case can be strongly affected by our covariance function design. This reflects the themes we saw on the first day, where the importance of covariance function choice was emphasized at design time.
Step14: Low Rank Approximations
Step15: Build a straightforward GP model of our simulation. We’ll also plot the posterior of $f$.
Step16: Exercise 2
Step17: Building the Low Rank Approximation
    Step18: In GPy, the sparse inputs $\mathbf{Z}$ are abbreviated 'iip', for inducing input. Plot the posterior.
Step19: Exercise 3
Step20: Exercise 4
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import GPy
import pods
from IPython.display import display
data = pods.datasets.olympic_sprints()
X = data['X']
y = data['Y']
print data['info'], data['details']
print data['citation']
print data['output_info']
#print 'First column of X contains the olympic years.'
#print data['X'][:, 0]
#print 'Second column of X contains the event index.'
#print data['X'][:, 1]
markers = ['bo', 'ro', 'bx', 'rx', 'bs', 'rs']
for i in range(6):
# extract the event
x_event = X[np.nonzero(X[:, 1]==i), 0]
y_event = y[np.nonzero(X[:, 1]==i), 0]
plt.plot(x_event, y_event, markers[i])
plt.title('Olympic Sprint Times')
plt.xlabel('year')
plt.ylabel('time/s')
GPy.kern.Coregionalize?
kern = GPy.kern.RBF(1, lengthscale=80)**GPy.kern.Coregionalize(1,output_dim=6, rank=5)
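# The product kernel is the intrinsic coregionalization model:
# k((x, i), (x', j)) = k_RBF(x, x') * B[i, j], where B = W W^T + diag(kappa).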
display(kern)
X
model = GPy.models.GPRegression(X, y, kern)
model.optimize()
fig, ax = plt.subplots()
for i in range(6):
model.plot(fignum=1,fixed_inputs=[(1, i)],ax=ax,legend=i==0)
plt.xlabel('years')
plt.ylabel('time/s')
kern1 = GPy.kern.RBF(1, lengthscale=80)**GPy.kern.Coregionalize(1,output_dim=6, rank=1)
kern2 = GPy.kern.Bias(1)**GPy.kern.Coregionalize(1,output_dim=6, rank=1)
kern = kern1 + kern2
model = GPy.models.GPRegression(X, y, kern)
model.optimize()
fig, ax = plt.subplots()
for i in range(6):
model.plot(fignum=1,fixed_inputs=[(1, i)],ax=ax,legend=i==0)
plt.xlabel('years')
plt.ylabel('time/s')
kern1 = GPy.kern.RBF(1, lengthscale=80) + GPy.kern.Bias(1)
kern1.name = 'rbf_plus_bias'
kern2 = GPy.kern.Coregionalize(1,output_dim=6, rank=5)
kern = kern1**kern2
kern.name = 'product'
display(kern)
model = GPy.models.GPRegression(X, y, kern)
model.optimize()
fig, ax = plt.subplots()
for i in range(6):
model.plot(fignum=1,fixed_inputs=[(1, i)],ax=ax,legend=(i==0))
plt.xlabel('years (after first ')
plt.ylabel('time/s')
# Question 1 answer here
# set up the inputs to index sex and event separately.
X2 = np.zeros((X.shape[0], 3))
X2[:, 0] = X[:, 0]
X2[:, 1] = np.remainder(X[:, 1],2) == 1 # Gender
X2[:, 2] = np.floor(X[:, 1]/2) # Event
kern1 = ((GPy.kern.RBF(1, variance=40, lengthscale=20)+GPy.kern.White(1))
*GPy.kern.Coregionalize(1, output_dim=2, rank=1, active_dims=1, name='gender')
*GPy.kern.Coregionalize(1, output_dim=3, rank=1, active_dims=2, name='event'))
kern2 = ((GPy.kern.Linear(1, 30, active_dims=0)+GPy.kern.Bias(1, 20))
*GPy.kern.Coregionalize(1, output_dim=2, rank=1, active_dims=1, name='gender')
*GPy.kern.Coregionalize(1, output_dim=3, rank=1, active_dims=2, name='event')
)
kern = kern1 + kern2
#display(kern)
#print "First coregion:", [(k.name, k.active_dims) for k in kern.parts[0].parts]
#print "Second ", [(k.name, k.active_dims) for k in kern.parts[0].parts[1].parts]
X2mean = X2[:,0].mean()
# Zero mean X for the linear trend to be feasible
X2m = X2.copy()
X2m[:,0] -= X2mean
model = GPy.models.GPRegression(X2m, y, kern)
model.optimize(messages=1,max_iters=5e5)
fig, ax = plt.subplots()
for i in range(2):
for j in range(3):
model.plot(fignum=1,fixed_inputs=[(1, i), (2, j)],
ax=ax, legend=(i==0)*(j==0)*2,
plot_data=(i==0)*(j==0))
plt.xlabel('years (zero meaned for linear trend)')
plt.ylabel('time/s')
fig, ax = plt.subplots()
for sex in range(2):
c = 'magenta' if sex == 1 else 'blue'
for event in range(3):
model.plot_mean(fixed_inputs=[(1, sex), (2, event)],
color=c, ax=ax)
model.plot_density(fixed_inputs=[(1, sex), (2, event)],
color=c,
ax=ax)
model.plot_data(visible_dims=[0], ax=ax)
plt.xlabel('years')
plt.ylabel('time/s')
display(model)
X = np.sort(np.random.rand(50,1)*12,0)
k = GPy.kern.RBF(1)
K = k.K(X)
K+= np.eye(50)*0.01 # add some independence (noise) to K
y = np.random.multivariate_normal(np.zeros(50), K).reshape(50,1)
model = GPy.models.GPRegression(X,y)
model.optimize()
fig = plt.figure()
ax = fig.add_subplot(111)
model.plot_f(ax=ax)
model._raw_predict?
mu, var = model._raw_predict(X) # this fetches the posterior of f
plt.vlines(X[:,0], mu[:,0]-2.*np.sqrt(var[:,0]), mu[:,0]+2.*np.sqrt(var[:,0]),color='r',lw=2)
# Exercise 2 answer here
Z = np.random.rand(3,1)*12
model = GPy.models.SparseGPRegression(X,y,Z=Z)
display(model)
mu, var = model._raw_predict(Z)
plt.vlines(Z[:,0], mu[:,0]-2.*np.sqrt(var[:,0]), mu[:,0]+2.*np.sqrt(var[:,0]),color='r')
# Exercise 3 answer
# Exercise 5 answer
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introducing Principal Component Analysis
Step2: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution
Step3: To see what these numbers mean, let's view them as vectors plotted on top of the data
Step4: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
Step5: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression
Step6: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
Step7: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
Step8: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
Step9: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Step10: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
Step11: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
print(X[0][:8])
print(X[0][8:16])
print(X[0][16:24])
print(X[0][24:32])
print(X[0][32:40])
print(X[0][40:48])
pca = PCA(2) # project from 64 to 2 dimensions
Xproj = pca.fit_transform(X)
print(X.shape)
print(Xproj.shape)
(1797*2)/(1797*64)
plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
def plot_image_components(x, coefficients=None, mean=0, components=None,
imshape=(8, 8), n_components=6, fontsize=12):
if coefficients is None:
coefficients = x
if components is None:
components = np.eye(len(coefficients), len(x))
mean = np.zeros_like(x) + mean
fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2))
g = plt.GridSpec(2, 5 + n_components, hspace=0.3)
def show(i, j, x, title=None):
ax = fig.add_subplot(g[i, j], xticks=[], yticks=[])
ax.imshow(x.reshape(imshape), interpolation='nearest')
if title:
ax.set_title(title, fontsize=fontsize)
show(slice(2), slice(2), x, "True")
approx = mean.copy()
show(0, 2, np.zeros_like(x) + mean, r'$\mu$')
show(1, 2, approx, r'$1 \cdot \mu$')
for i in range(0, n_components):
approx = approx + coefficients[i] * components[i]
show(0, i + 3, components[i], r'$c_{0}$'.format(i + 1))
show(1, i + 3, approx,
r"${0:.2f} \cdot c_{1}$".format(coefficients[i], i + 1))
plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom',
transform=plt.gca().transAxes, fontsize=fontsize)
show(slice(2), slice(-2, None), approx, "Approx")
with plt.style.context('seaborn-white'):
plot_image_components(digits.data[0])
def plot_pca_interactive(data, n_components=6):
from sklearn.decomposition import PCA
from ipywidgets import interact
pca = PCA(n_components=n_components)
Xproj = pca.fit_transform(data)
def show_decomp(i=0):
plot_image_components(data[i], Xproj[i],
pca.mean_, pca.components_)
interact(show_decomp, i=(0, data.shape[0] - 1));
plot_pca_interactive(digits.data)
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
pca = PCA(i + 1).fit(X)
im = pca.inverse_transform(pca.transform(X[25:26]))
ax.imshow(im.reshape((8, 8)), cmap='binary')
ax.text(0.95, 0.05, 'n = {0}'.format(i + 1), ha='right',
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
from ipywidgets import interact
def plot_digits(n_components):
fig = plt.figure(figsize=(8, 8))
plt.subplot(1, 1, 1, frameon=False, xticks=[], yticks=[])
nside = 10
pca = PCA(n_components).fit(X)
Xproj = pca.inverse_transform(pca.transform(X[:nside ** 2]))
Xproj = np.reshape(Xproj, (nside, nside, 8, 8))
total_var = pca.explained_variance_ratio_.sum()
im = np.vstack([np.hstack([Xproj[i, j] for j in range(nside)])
for i in range(nside)])
plt.imshow(im)
plt.grid(False)
plt.title("n = {0}, variance = {1:.2f}".format(n_components, total_var),
size=18)
plt.clim(0, 16)
interact(plot_digits, n_components=[1, 15, 20, 25, 32, 40, 64]);
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple producer test
Step2: Simple producer test
Step3: Simple consumer test
| <ASSISTANT_TASK:>
Python Code:
# Run this cell only if you want to add python module to spark context and have run through steps of option b)
sc.addPyFile("/home/ubuntu/kafka-python-1.3.3/dist/kafka_python-1.3.3-py2.7.egg")
kafka_broker='10.0.1.160:9092' # replace argument with your kafka broker ip (if you have multiple brokers, pick one)
topics = ('test')
from kafka import SimpleProducer
from kafka import KafkaClient
kafka=KafkaClient(kafka_broker)
producer = SimpleProducer(kafka)
producer.send_messages(topics, b'Hello From Kafka-Python Producer')
from kafka import KafkaConsumer
consumer=KafkaConsumer(topics, bootstrap_servers=[kafka_broker], auto_offset_reset='earliest')
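# auto_offset_reset='earliest' makes a consumer with no committed offset start
# from the beginning of the topic, so the message produced above is picked up.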
for msg in consumer:
print(msg.value)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now our data is in a nice numpy ndarray. We can access it using the numpy methods. For example
Step2: We can also print specific rows of data...
| <ASSISTANT_TASK:>
Python Code:
import numpy as np # get numpy package
data = np.genfromtxt(fname='33182_Breakout_Modeling_Data_5mindata.csv', # data filename
dtype=None, # figure out the data type by column
delimiter=',', # delimit on commas
                     names=True, # first line contains column names
)
column_headers = data.dtype.names
print(column_headers) # print the column headers
print('Number of columns: {}'.format(len(column_headers)))
print('The first row of data is: \n{}'.format(data[0])) # print the first row
print('\n') # print a blank line
print('and the last row of data is: \n{}'.format(data[len(data)-1])) # print the last row
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Array of desired pressure levels
Step3: Interpolate The Data
Step4: Plotting the Data for 700 hPa.
| <ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from netCDF4 import Dataset, num2date
from metpy.cbook import get_test_data
from metpy.interpolate import log_interpolate_1d
from metpy.plots import add_metpy_logo, add_timestamp
from metpy.units import units
data = Dataset(get_test_data('wrf_example.nc', False))
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time']
vtimes = num2date(time[:], time.units)
temperature = units.Quantity(data.variables['temperature'][:], 'degC')
pres = units.Quantity(data.variables['pressure'][:], 'Pa')
hgt = units.Quantity(data.variables['height'][:], 'meter')
plevs = [700.] * units.hPa
height, temp = log_interpolate_1d(plevs, pres, hgt, temperature, axis=1)
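# log_interpolate_1d interpolates along the vertical axis in log-pressure space,
# returning height and temperature on the requested 700-hPa surface.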
# Set up our projection
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
# Set the forecast hour
FH = 1
# Create the figure and grid for subplots
fig = plt.figure(figsize=(17, 12))
add_metpy_logo(fig, 470, 320, size='large')
# Plot 700 hPa
ax = plt.subplot(111, projection=crs)
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Plot the heights
cs = ax.contour(lon, lat, height[FH, 0, :, :], transform=ccrs.PlateCarree(),
colors='k', linewidths=1.0, linestyles='solid')
cs.clabel(fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True,
use_clabeltext=True)
# Contour the temperature
cf = ax.contourf(lon, lat, temp[FH, 0, :, :], range(-20, 20, 1), cmap=plt.cm.RdBu_r,
transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', aspect=65, shrink=0.5, pad=0.05,
                  extendrect=True)
cb.set_label('Celsius', size='x-large')
ax.set_extent([-106.5, -90.4, 34.5, 46.75], crs=ccrs.PlateCarree())
# Make the axis title
ax.set_title(f'{plevs[0]:~.0f} Heights (m) and Temperature (C)', loc='center', fontsize=10)
# Set the figure title
fig.suptitle(f'WRF-ARW Forecast VALID: {vtimes[FH]} UTC', fontsize=14)
add_timestamp(ax, vtimes[FH], y=0.02, high_contrast=True)
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code::
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
def lemmatize_words(text):
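    # pos='v' lemmatizes each token as a verb (e.g. "running" -> "run")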
words = text.split()
words = [lemmatizer.lemmatize(word,pos='v') for word in words]
return ' '.join(words)
df['text'] = df['text'].apply(lemmatize_words)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The variable x is a string in Python
Step2: Its translation into ASCII is unusable by parsers
Step3: Encoding as UTF-8 doesn't help much - use it as an encoder
Step4: Ignoring difficult characters is perhaps an even worse strategy - ignore everything that is not understood
Step5: However, one can normalize then encode…
Step6: Even before this normalization and encoding, you may need to convert some characters explicitly before parsing. For example
Step7: Then consider the results here
Step8: One of the ways to handle punctuations...which drops characters that may be important for parsing a sentence, so instead
| <ASSISTANT_TASK:>
Python Code:
x = "Rinôçérôse screams flow not unlike an encyclopædia, \
'TECHNICIÄNS ÖF SPÅCE SHIP EÅRTH THIS IS YÖÜR CÄPTÅIN SPEÄKING YÖÜR ØÅPTÅIN IS DEA̋D' to Spın̈al Tap."
type(x)
repr(x)
ascii(x)
x.encode('utf8')
x.encode('ascii','ignore')
import unicodedata
# NFKD a robust way to handle normalizers - convert special characters into something
# that can be read and convert into ascii
unicodedata.normalize('NFKD', x).encode('ascii','ignore')
x = "The sky “above” the port … was the color of ‘cable television’ – tuned to the Weather Channel®"
ascii(x)
unicodedata.normalize('NFKD', x).encode('ascii','ignore')
x = x.replace('“', '"').replace('”', '"')
x = x.replace("‘", "'").replace("’", "'")
x = x.replace('…', '...').replace('–', '-')
print(x)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
    Step1: Ragged tensors
    Step2: Overview
    Step3: There are also methods and operations that are specific to ragged tensors, including factory methods, conversion methods, and value-mapping operations. For a list of supported operations, see the tf.ragged package documentation.
    Step4: As with ordinary tensors, you can use Python arithmetic and comparison operators to perform elementwise operations. For more information, see the section on overloaded operators below.
    Step5: If you need to perform an elementwise transformation on the values of a RaggedTensor, you can use tf.ragged.map_flat_values (which takes a function plus one or more arguments) and apply it to transform the RaggedTensor's values.
    Step6: Ragged tensors can be converted to nested Python lists and numpy arrays:
    Step7: Constructing ragged tensors
    Step8: Ragged tensors can also be constructed by pairing a flat values tensor with row-partitioning tensors that indicate how those values should be divided into rows, using factory classmethods such as tf.RaggedTensor.from_value_rowids, tf.RaggedTensor.from_row_lengths, and tf.RaggedTensor.from_row_splits.
    Step9: tf.RaggedTensor.from_row_lengths
    Step10: tf.RaggedTensor.from_row_splits
    Step11: For a full list of factory methods, see the tf.RaggedTensor class documentation.
    Step12: Example use case
    Step13: Ragged and uniform dimensions
    Step14: The tf.RaggedTensor.bounding_shape method can be used to find the tight bounding shape of a given RaggedTensor:
    Step15: Ragged tensors vs. sparse tensors
    Step16: But concatenating sparse tensors is equivalent to concatenating the corresponding dense tensors, as illustrated by the following example (where Ø indicates missing values):
    Step17: Another example of why this distinction is important: consider the definition of "the mean of each row" for an op such as tf.reduce_mean. For a ragged tensor, the mean of a row is the sum of the row's values divided by the row's width. For a sparse tensor, however, the mean of a row is the sum of the row's values divided by the sparse tensor's overall width (which is greater than or equal to the width of the longest row).
    Step18: tf.Example
    Step19: We can parse this encoded data using tf.io.parse_example, which takes a tensor of serialized strings and a feature specification dictionary, and returns a dictionary mapping feature names to tensors. To read variable-length features into ragged tensors, simply use tf.io.RaggedFeature in the feature specification dictionary:
    Step20: tf.io.RaggedFeature can also be used to read features with multiple ragged dimensions. For details, see the API documentation.
    Step21: Building datasets with ragged tensors
    Step22: Note: Dataset.from_generator does not yet support ragged tensors, but support will be added soon.
    Step23: Conversely, a batched dataset can be transformed into a flat dataset using Dataset.unbatch.
    Step24: Batching datasets with variable-length non-ragged tensors
    Step25: Transforming datasets with ragged tensors
    Step26: tf.function
    Step27: If you want to explicitly specify the input_signature for a tf.function, you can do so using tf.RaggedTensorSpec.
    Step28: Concrete functions
    Step29: SavedModel
    Step30: Example: saving a custom model
    Step31: Note: SavedModel signatures are concrete functions. As described in the "Concrete functions" section above, starting with TensorFlow 2.3 only concrete functions handle ragged tensors correctly. If you need to use SavedModel signatures with an earlier version of TensorFlow, it is recommended that you decompose the ragged tensor into its component tensors.
    Step32: Because overloaded operators perform elementwise computations, the inputs to all binary operations must have the same shape, or be broadcastable to the same shape. In the simplest broadcasting case, a single scalar is combined elementwise with each value in a ragged tensor:
    Step33: For more advanced use cases, see the section on broadcasting.
    Step34: Indexing example: 3-D ragged tensor
    Step35: RaggedTensor supports multidimensional indexing and slicing, with one restriction: indexing into a ragged dimension is not allowed. This case is problematic because the indicated value may exist in some rows but not in others, and it is not obvious whether we should (1) raise an IndexError; (2) use a default value; or (3) skip that value and return a tensor with fewer rows than we started with. Following Python's guiding principle ("In the face of ambiguity, refuse the temptation to guess"), this operation is currently disallowed.
    Step36: Evaluating ragged tensors
    Step37: Broadcasting
    Step38: Here are some examples of shapes that do not broadcast:
    Step39: RaggedTensor encoding
    Step40: The choice of which encoding to use for row partitions is managed internally by ragged tensors to improve efficiency in certain contexts. In particular, some of the advantages and disadvantages of the different row-partitioning schemes are:
    Step41: The factory function tf.RaggedTensor.from_nested_row_splits can be used to construct a RaggedTensor with multiple ragged dimensions directly, by supplying a list of row_splits tensors:
    Step42: Ragged rank and flat values
    Step43: Uniform inner dimensions
    Step44: Uniform non-inner dimensions
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q tf_nightly
import math
import tensorflow as tf
digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
words = tf.ragged.constant([["So", "long"], ["thanks", "for", "all", "the", "fish"]])
print(tf.add(digits, 3))
print(tf.reduce_mean(digits, axis=1))
print(tf.concat([digits, [[5, 3]]], axis=0))
print(tf.tile(digits, [1, 2]))
print(tf.strings.substr(words, 0, 2))
print(tf.map_fn(tf.math.square, digits))
print(digits[0]) # First row
print(digits[:, :2]) # First two values in each row.
print(digits[:, -2:]) # Last two values in each row.
print(digits + 3)
print(digits + tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []]))
times_two_plus_one = lambda x: x * 2 + 1
print(tf.ragged.map_flat_values(times_two_plus_one, digits))
digits.to_list()
digits.numpy()
sentences = tf.ragged.constant([
["Let's", "build", "some", "ragged", "tensors", "!"],
["We", "can", "use", "tf.ragged.constant", "."]])
print(sentences)
paragraphs = tf.ragged.constant([
[['I', 'have', 'a', 'cat'], ['His', 'name', 'is', 'Mat']],
[['Do', 'you', 'want', 'to', 'come', 'visit'], ["I'm", 'free', 'tomorrow']],
])
print(paragraphs)
print(tf.RaggedTensor.from_value_rowids(
values=[3, 1, 4, 1, 5, 9, 2],
value_rowids=[0, 0, 0, 0, 2, 2, 3]))
print(tf.RaggedTensor.from_row_lengths(
values=[3, 1, 4, 1, 5, 9, 2],
row_lengths=[4, 0, 2, 1]))
print(tf.RaggedTensor.from_row_splits(
values=[3, 1, 4, 1, 5, 9, 2],
row_splits=[0, 4, 4, 6, 7]))
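# With row_splits, row i spans values[row_splits[i]:row_splits[i+1]], so the
# splits above yield the rows [3, 1, 4, 1], [], [5, 9], [2].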
print(tf.ragged.constant([["Hi"], ["How", "are", "you"]])) # ok: type=string, rank=2
print(tf.ragged.constant([[[1, 2], [3]], [[4, 5]]])) # ok: type=int32, rank=3
try:
tf.ragged.constant([["one", "two"], [3, 4]]) # bad: multiple types
except ValueError as exception:
print(exception)
try:
tf.ragged.constant(["A", ["B", "C"]]) # bad: multiple nesting depths
except ValueError as exception:
print(exception)
queries = tf.ragged.constant([['Who', 'is', 'Dan', 'Smith'],
['Pause'],
['Will', 'it', 'rain', 'later', 'today']])
# Create an embedding table.
num_buckets = 1024
embedding_size = 4
embedding_table = tf.Variable(
tf.random.truncated_normal([num_buckets, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
# Look up the embedding for each word.
word_buckets = tf.strings.to_hash_bucket_fast(queries, num_buckets)
word_embeddings = tf.nn.embedding_lookup(embedding_table, word_buckets) # ①
# Add markers to the beginning and end of each sentence.
marker = tf.fill([queries.nrows(), 1], '#')
padded = tf.concat([marker, queries, marker], axis=1) # ②
# Build word bigrams & look up embeddings.
bigrams = tf.strings.join([padded[:, :-1], padded[:, 1:]], separator='+') # ③
bigram_buckets = tf.strings.to_hash_bucket_fast(bigrams, num_buckets)
bigram_embeddings = tf.nn.embedding_lookup(embedding_table, bigram_buckets) # ④
# Find the average embedding for each sentence
all_embeddings = tf.concat([word_embeddings, bigram_embeddings], axis=1) # ⑤
avg_embedding = tf.reduce_mean(all_embeddings, axis=1) # ⑥
print(avg_embedding)
tf.ragged.constant([["Hi"], ["How", "are", "you"]]).shape
print(tf.ragged.constant([["Hi"], ["How", "are", "you"]]).bounding_shape())
ragged_x = tf.ragged.constant([["John"], ["a", "big", "dog"], ["my", "cat"]])
ragged_y = tf.ragged.constant([["fell", "asleep"], ["barked"], ["is", "fuzzy"]])
print(tf.concat([ragged_x, ragged_y], axis=1))
sparse_x = ragged_x.to_sparse()
sparse_y = ragged_y.to_sparse()
sparse_result = tf.sparse.concat(sp_inputs=[sparse_x, sparse_y], axis=1)
print(tf.sparse.to_dense(sparse_result, ''))
# Task: predict whether each sentence is a question or not.
sentences = tf.constant(
['What makes you think she is a witch?',
'She turned me into a newt.',
'A newt?',
'Well, I got better.'])
is_question = tf.constant([True, False, True, False])
# Preprocess the input strings.
hash_buckets = 1000
words = tf.strings.split(sentences, ' ')
hashed_words = tf.strings.to_hash_bucket_fast(words, hash_buckets)
# Build the Keras model.
keras_model = tf.keras.Sequential([
tf.keras.layers.Input(shape=[None], dtype=tf.int64, ragged=True),
tf.keras.layers.Embedding(hash_buckets, 16),
tf.keras.layers.LSTM(32, use_bias=False),
tf.keras.layers.Dense(32),
tf.keras.layers.Activation(tf.nn.relu),
tf.keras.layers.Dense(1)
])
keras_model.compile(loss='binary_crossentropy', optimizer='rmsprop')
keras_model.fit(hashed_words, is_question, epochs=5)
print(keras_model.predict(hashed_words))
import google.protobuf.text_format as pbtext
def build_tf_example(s):
return pbtext.Merge(s, tf.train.Example()).SerializeToString()
example_batch = [
build_tf_example(r'''
features {
feature {key: "colors" value {bytes_list {value: ["red", "blue"]} } }
feature {key: "lengths" value {int64_list {value: [7]} } } }'''),
build_tf_example(r'''
features {
feature {key: "colors" value {bytes_list {value: ["orange"]} } }
feature {key: "lengths" value {int64_list {value: []} } } }'''),
build_tf_example(r'''
features {
feature {key: "colors" value {bytes_list {value: ["black", "yellow"]} } }
feature {key: "lengths" value {int64_list {value: [1, 3]} } } }'''),
build_tf_example(r'''
features {
feature {key: "colors" value {bytes_list {value: ["green"]} } }
feature {key: "lengths" value {int64_list {value: [3, 5, 2]} } } }''')]
feature_specification = {
'colors': tf.io.RaggedFeature(tf.string),
'lengths': tf.io.RaggedFeature(tf.int64),
}
feature_tensors = tf.io.parse_example(example_batch, feature_specification)
for name, value in feature_tensors.items():
print("{}={}".format(name, value))
# Helper function used to print datasets in the examples below.
def print_dictionary_dataset(dataset):
for i, element in enumerate(dataset):
print("Element {}:".format(i))
for (feature_name, feature_value) in element.items():
print('{:>14} = {}'.format(feature_name, feature_value))
dataset = tf.data.Dataset.from_tensor_slices(feature_tensors)
print_dictionary_dataset(dataset)
batched_dataset = dataset.batch(2)
print_dictionary_dataset(batched_dataset)
unbatched_dataset = batched_dataset.unbatch()
print_dictionary_dataset(unbatched_dataset)
non_ragged_dataset = tf.data.Dataset.from_tensor_slices([1, 5, 3, 2, 8])
non_ragged_dataset = non_ragged_dataset.map(tf.range)
batched_non_ragged_dataset = non_ragged_dataset.apply(
tf.data.experimental.dense_to_ragged_batch(2))
for element in batched_non_ragged_dataset:
print(element)
def transform_lengths(features):
return {
'mean_length': tf.math.reduce_mean(features['lengths']),
'length_ranges': tf.ragged.range(features['lengths'])}
transformed_dataset = dataset.map(transform_lengths)
print_dictionary_dataset(transformed_dataset)
@tf.function
def make_palindrome(x, axis):
return tf.concat([x, tf.reverse(x, [axis])], axis)
make_palindrome(tf.constant([[1, 2], [3, 4], [5, 6]]), axis=1)
make_palindrome(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]), axis=1)
@tf.function(
input_signature=[tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)])
def max_and_min(rt):
return (tf.math.reduce_max(rt, axis=-1), tf.math.reduce_min(rt, axis=-1))
max_and_min(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]))
# Preferred way to use ragged tensors with concrete functions (TF 2.3+):
try:
@tf.function
def increment(x):
return x + 1
rt = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
cf = increment.get_concrete_function(rt)
print(cf(rt))
except Exception as e:
print(f"Not supported before TF 2.3: {type(e)}: {e}")
import tempfile
keras_module_path = tempfile.mkdtemp()
tf.saved_model.save(keras_model, keras_module_path)
imported_model = tf.saved_model.load(keras_module_path)
imported_model(hashed_words)
class CustomModule(tf.Module):
def __init__(self, variable_value):
super(CustomModule, self).__init__()
self.v = tf.Variable(variable_value)
@tf.function
def grow(self, x):
return x * self.v
module = CustomModule(100.0)
# Before saving a custom model, we must ensure that concrete functions are
# built for each input signature that we will need.
module.grow.get_concrete_function(tf.RaggedTensorSpec(shape=[None, None],
dtype=tf.float32))
custom_module_path = tempfile.mkdtemp()
tf.saved_model.save(module, custom_module_path)
imported_model = tf.saved_model.load(custom_module_path)
imported_model.grow(tf.ragged.constant([[1.0, 4.0, 3.0], [2.0]]))
x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
y = tf.ragged.constant([[1, 1], [2], [3, 3, 3]])
print(x + y)
x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
print(x + 3)
queries = tf.ragged.constant(
[['Who', 'is', 'George', 'Washington'],
['What', 'is', 'the', 'weather', 'tomorrow'],
['Goodnight']])
print(queries[1]) # A single query
print(queries[1, 2]) # A single word
print(queries[1:]) # Everything but the first row
print(queries[:, :3]) # The first 3 words of each query
print(queries[:, -2:]) # The last 2 words of each query
rt = tf.ragged.constant([[[1, 2, 3], [4]],
[[5], [], [6]],
[[7]],
[[8, 9], [10]]])
print(rt[1]) # Second row (2-D RaggedTensor)
print(rt[3, 0]) # First element of fourth row (1-D Tensor)
print(rt[:, 1:3]) # Items 1-3 of each row (3-D RaggedTensor)
print(rt[:, -1:]) # Last item of each row (3-D RaggedTensor)
ragged_sentences = tf.ragged.constant([
['Hi'], ['Welcome', 'to', 'the', 'fair'], ['Have', 'fun']])
# RaggedTensor -> Tensor
print(ragged_sentences.to_tensor(default_value='', shape=[None, 10]))
# Tensor -> RaggedTensor
x = [[1, 3, -1, -1], [2, -1, -1, -1], [4, 5, 8, 9]]
print(tf.RaggedTensor.from_tensor(x, padding=-1))
#RaggedTensor -> SparseTensor
print(ragged_sentences.to_sparse())
# SparseTensor -> RaggedTensor
st = tf.SparseTensor(indices=[[0, 0], [2, 0], [2, 1]],
values=['a', 'b', 'c'],
dense_shape=[3, 3])
print(tf.RaggedTensor.from_sparse(st))
rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]])
print("python list:", rt.to_list())
print("numpy array:", rt.numpy())
print("values:", rt.values.numpy())
print("splits:", rt.row_splits.numpy())
print("indexed value:", rt[1].numpy())
# x (2D ragged): 2 x (num_rows)
# y (scalar)
# result (2D ragged): 2 x (num_rows)
x = tf.ragged.constant([[1, 2], [3]])
y = 3
print(x + y)
# x (2d ragged): 3 x (num_rows)
# y (2d tensor): 3 x 1
# Result (2d ragged): 3 x (num_rows)
x = tf.ragged.constant(
[[10, 87, 12],
[19, 53],
[12, 32]])
y = [[1000], [2000], [3000]]
print(x + y)
# x (3d ragged): 2 x (r1) x 2
# y (2d ragged): 1 x 1
# Result (3d ragged): 2 x (r1) x 2
x = tf.ragged.constant(
[[[1, 2], [3, 4], [5, 6]],
[[7, 8]]],
ragged_rank=1)
y = tf.constant([[10]])
print(x + y)
# x (3d ragged): 2 x (r1) x (r2) x 1
# y (1d tensor): 3
# Result (3d ragged): 2 x (r1) x (r2) x 3
x = tf.ragged.constant(
[
[
[[1], [2]],
[],
[[3]],
[[4]],
],
[
[[5], [6]],
[[7]]
]
],
ragged_rank=2)
y = tf.constant([10, 20, 30])
print(x + y)
# x (2d ragged): 3 x (r1)
# y (2d tensor): 3 x 4 # trailing dimensions do not match
x = tf.ragged.constant([[1, 2], [3, 4, 5, 6], [7]])
y = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
try:
x + y
except tf.errors.InvalidArgumentError as exception:
print(exception)
# x (2d ragged): 3 x (r1)
# y (2d ragged): 3 x (r2) # ragged dimensions do not match.
x = tf.ragged.constant([[1, 2, 3], [4], [5, 6]])
y = tf.ragged.constant([[10, 20], [30, 40], [50]])
try:
x + y
except tf.errors.InvalidArgumentError as exception:
print(exception)
# x (3d ragged): 3 x (r1) x 2
# y (3d ragged): 3 x (r1) x 3 # trailing dimensions do not match
x = tf.ragged.constant([[[1, 2], [3, 4], [5, 6]],
[[7, 8], [9, 10]]])
y = tf.ragged.constant([[[1, 2, 0], [3, 4, 0], [5, 6, 0]],
[[7, 8, 0], [9, 10, 0]]])
try:
x + y
except tf.errors.InvalidArgumentError as exception:
print(exception)
rt = tf.RaggedTensor.from_row_splits(
values=[3, 1, 4, 1, 5, 9, 2],
row_splits=[0, 4, 4, 6, 7])
print(rt)
rt = tf.RaggedTensor.from_row_splits(
values=tf.RaggedTensor.from_row_splits(
values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
row_splits=[0, 3, 3, 5, 9, 10]),
row_splits=[0, 1, 1, 5])
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of partitioned dimensions: {}".format(rt.ragged_rank))
rt = tf.RaggedTensor.from_nested_row_splits(
flat_values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
nested_row_splits=([0, 1, 1, 5], [0, 3, 3, 5, 9, 10]))
print(rt)
# shape = [batch, (paragraph), (sentence), (word)]
conversations = tf.ragged.constant(
[[[["I", "like", "ragged", "tensors."]],
[["Oh", "yeah?"], ["What", "can", "you", "use", "them", "for?"]],
[["Processing", "variable", "length", "data!"]]],
[[["I", "like", "cheese."], ["Do", "you?"]],
[["Yes."], ["I", "do."]]]])
conversations.shape
assert conversations.ragged_rank == len(conversations.nested_row_splits)
conversations.ragged_rank # Number of partitioned dimensions.
conversations.flat_values.numpy()
rt = tf.RaggedTensor.from_row_splits(
values=[[1, 3], [0, 0], [1, 3], [5, 3], [3, 3], [1, 2]],
row_splits=[0, 3, 4, 6])
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of partitioned dimensions: {}".format(rt.ragged_rank))
print("Flat values shape: {}".format(rt.flat_values.shape))
print("Flat values:\n{}".format(rt.flat_values))
rt = tf.RaggedTensor.from_uniform_row_length(
values=tf.RaggedTensor.from_row_splits(
values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
row_splits=[0, 3, 5, 9, 10]),
uniform_row_length=2)
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of partitioned dimensions: {}".format(rt.ragged_rank))
<END_TASK> |