repo_name: string (lengths 6-77)
path: string (lengths 8-215)
license: string (15 classes)
cells: sequence
types: sequence
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-hh/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: NORESM2-HH\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:24\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'noresm2-hh', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. 
Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. 
Scheme\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. 
CFC12 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. 
Upper Trophic Levels Treatment\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Types If Prognostic\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wbinventor/openmc
examples/jupyter/pincell.ipynb
mit
[ "This notebook is intended to demonstrate the basic features of the Python API for constructing input files and running OpenMC. In it, we will show how to create a basic reflective pin-cell model that is equivalent to modeling an infinite array of fuel pins. If you have never used OpenMC, this can serve as a good starting point to learn the Python API. We highly recommend having a copy of the Python API reference documentation open in another browser tab that you can refer to.", "%matplotlib inline\nimport openmc", "Defining Materials\nMaterials in OpenMC are defined as a set of nuclides with specified atom/weight fractions. To begin, we will create a material by making an instance of the Material class. In OpenMC, many objects, including materials, are identified by a \"unique ID\" that is simply just a positive integer. These IDs are used when exporting XML files that the solver reads in. They also appear in the output and can be used for identification. Since an integer ID is not very useful by itself, you can also give a material a name as well.", "uo2 = openmc.Material(1, \"uo2\")\nprint(uo2)", "On the XML side, you have no choice but to supply an ID. However, in the Python API, if you don't give an ID, one will be automatically generated for you:", "mat = openmc.Material()\nprint(mat)", "We see that an ID of 2 was automatically assigned. Let's now move on to adding nuclides to our uo2 material. The Material object has a method add_nuclide() whose first argument is the name of the nuclide and second argument is the atom or weight fraction.", "help(uo2.add_nuclide)", "We see that by default it assumes we want an atom fraction.", "# Add nuclides to uo2\nuo2.add_nuclide('U235', 0.03)\nuo2.add_nuclide('U238', 0.97)\nuo2.add_nuclide('O16', 2.0)", "Now we need to assign a total density to the material. We'll use the set_density for this.", "uo2.set_density('g/cm3', 10.0)", "You may sometimes be given a material specification where all the nuclide densities are in units of atom/b-cm. In this case, you just want the density to be the sum of the constituents. In that case, you can simply run mat.set_density('sum').\nWith UO2 finished, let's now create materials for the clad and coolant. Note the use of add_element() for zirconium.", "zirconium = openmc.Material(2, \"zirconium\")\nzirconium.add_element('Zr', 1.0)\nzirconium.set_density('g/cm3', 6.6)\n\nwater = openmc.Material(3, \"h2o\")\nwater.add_nuclide('H1', 2.0)\nwater.add_nuclide('O16', 1.0)\nwater.set_density('g/cm3', 1.0)", "An astute observer might now point out that this water material we just created will only use free-atom cross sections. We need to tell it to use an $S(\\alpha,\\beta)$ table so that the bound atom cross section is used at thermal energies. To do this, there's an add_s_alpha_beta() method. Note the use of the GND-style name \"c_H_in_H2O\".", "water.add_s_alpha_beta('c_H_in_H2O')", "When we go to run the transport solver in OpenMC, it is going to look for a materials.xml file. Thus far, we have only created objects in memory. To actually create a materials.xml file, we need to instantiate a Materials collection and export it to XML.", "mats = openmc.Materials([uo2, zirconium, water])", "Note that Materials is actually a subclass of Python's built-in list, so we can use methods like append(), insert(), pop(), etc.", "mats = openmc.Materials()\nmats.append(uo2)\nmats += [zirconium, water]\nisinstance(mats, list)", "Finally, we can create the XML file with the export_to_xml() method. 
In a Jupyter notebook, we can run a shell command by putting ! before it, so in this case we are going to display the materials.xml file that we created.", "mats.export_to_xml()\n!cat materials.xml", "Element Expansion\nDid you notice something really cool that happened to our Zr element? OpenMC automatically turned it into a list of nuclides when it exported it! The way this feature works is as follows:\n\nFirst, it checks whether Materials.cross_sections has been set, indicating the path to a cross_sections.xml file.\nIf Materials.cross_sections isn't set, it looks for the OPENMC_CROSS_SECTIONS environment variable.\nIf either of these are found, it scans the file to see what nuclides are actually available and will expand elements accordingly.\n\nLet's see what happens if we change O16 in water to elemental O.", "water.remove_nuclide('O16')\nwater.add_element('O', 1.0)\n\nmats.export_to_xml()\n!cat materials.xml", "We see that now O16 and O17 were automatically added. O18 is missing because our cross sections file (which is based on ENDF/B-VII.1) doesn't have O18. If OpenMC didn't know about the cross sections file, it would have assumed that all isotopes exist.\nThe cross_sections.xml file\nThe cross_sections.xml tells OpenMC where it can find nuclide cross sections and $S(\\alpha,\\beta)$ tables. It serves the same purpose as MCNP's xsdir file and Serpent's xsdata file. As we mentioned, this can be set either by the OPENMC_CROSS_SECTIONS environment variable or the Materials.cross_sections attribute.\nLet's have a look at what's inside this file:", "!cat $OPENMC_CROSS_SECTIONS | head -n 10\nprint(' ...')\n!cat $OPENMC_CROSS_SECTIONS | tail -n 10", "Enrichment\nNote that the add_element() method has a special argument enrichment that can be used for Uranium. For example, if we know that we want to create 3% enriched UO2, the following would work:", "uo2_three = openmc.Material()\nuo2_three.add_element('U', 1.0, enrichment=3.0)\nuo2_three.add_element('O', 2.0)\nuo2_three.set_density('g/cc', 10.0)", "Defining Geometry\nAt this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The object that allows us to assign a material to a region of space is called a Cell (same concept in MCNP, for those familiar). In order to define a region that we can assign to a cell, we must first define surfaces which bound the region. A surface is a locus of zeros of a function of Cartesian coordinates $x$, $y$, and $z$, e.g.\n\nA plane perpendicular to the x axis: $x - x_0 = 0$\nA cylinder parallel to the z axis: $(x - x_0)^2 + (y - y_0)^2 - R^2 = 0$\nA sphere: $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - R^2 = 0$\n\nBetween those three classes of surfaces (planes, cylinders, spheres), one can construct a wide variety of models. It is also possible to define cones and general second-order surfaces (tori are not currently supported).\nNote that defining a surface is not sufficient to specify a volume -- in order to define an actual volume, one must reference the half-space of a surface. A surface half-space is the region whose points satisfy a positive or negative inequality of the surface equation. For example, for a sphere of radius one centered at the origin, the surface equation is $f(x,y,z) = x^2 + y^2 + z^2 - 1 = 0$. 
Thus, we say that the negative half-space of the sphere, is defined as the collection of points satisfying $f(x,y,z) < 0$, which one can reason is the inside of the sphere. Conversely, the positive half-space of the sphere would correspond to all points outside of the sphere.\nLet's go ahead and create a sphere and confirm that what we've told you is true.", "sph = openmc.Sphere(R=1.0)", "Note that by default the sphere is centered at the origin so we didn't have to supply x0, y0, or z0 arguments. Strictly speaking, we could have omitted R as well since it defaults to one. To get the negative or positive half-space, we simply need to apply the - or + unary operators, respectively.\n(NOTE: Those unary operators are defined by special methods: __pos__ and __neg__ in this case).", "inside_sphere = -sph\noutside_sphere = +sph", "Now let's see if inside_sphere actually contains points inside the sphere:", "print((0,0,0) in inside_sphere, (0,0,2) in inside_sphere)\nprint((0,0,0) in outside_sphere, (0,0,2) in outside_sphere)", "Everything works as expected! Now that we understand how to create half-spaces, we can create more complex volumes by combining half-spaces using Boolean operators: &amp; (intersection), | (union), and ~ (complement). For example, let's say we want to define a region that is the top part of the sphere (all points inside the sphere that have $z > 0$.", "z_plane = openmc.ZPlane(z0=0)\nnorthern_hemisphere = -sph & +z_plane", "For many regions, OpenMC can automatically determine a bounding box. To get the bounding box, we use the bounding_box property of a region, which returns a tuple of the lower-left and upper-right Cartesian coordinates for the bounding box:", "northern_hemisphere.bounding_box", "Now that we see how to create volumes, we can use them to create a cell.", "cell = openmc.Cell()\ncell.region = northern_hemisphere\n\n# or...\ncell = openmc.Cell(region=northern_hemisphere)", "By default, the cell is not filled by any material (void). In order to assign a material, we set the fill property of a Cell.", "cell.fill = water", "Universes and in-line plotting\nA collection of cells is known as a universe (again, this will be familiar to MCNP/Serpent users) and can be used as a repeatable unit when creating a model. Although we don't need it yet, the benefit of creating a universe is that we can visualize our geometry while we're creating it.", "universe = openmc.Universe()\nuniverse.add_cell(cell)\n\n# this also works\nuniverse = openmc.Universe(cells=[cell])", "The Universe object has a plot method that will display our the universe as current constructed:", "universe.plot(width=(2.0, 2.0))", "By default, the plot will appear in the $x$-$y$ plane. We can change that with the basis argument.", "universe.plot(width=(2.0, 2.0), basis='xz')", "If we have particular fondness for, say, fuchsia, we can tell the plot() method to make our cell that color.", "universe.plot(width=(2.0, 2.0), basis='xz',\n colors={cell: 'fuchsia'})", "Pin cell geometry\nWe now have enough knowledge to create our pin-cell. 
We need three surfaces to define the fuel and clad:\n\nThe outer surface of the fuel -- a cylinder parallel to the z axis\nThe inner surface of the clad -- same as above\nThe outer surface of the clad -- same as above\n\nThese three surfaces will all be instances of openmc.ZCylinder, each with a different radius according to the specification.", "fuel_or = openmc.ZCylinder(R=0.39)\nclad_ir = openmc.ZCylinder(R=0.40)\nclad_or = openmc.ZCylinder(R=0.46)", "With the surfaces created, we can now take advantage of the built-in operators on surfaces to create regions for the fuel, the gap, and the clad:", "fuel_region = -fuel_or\ngap_region = +fuel_or & -clad_ir\nclad_region = +clad_ir & -clad_or", "Now we can create corresponding cells that assign materials to these regions. As with materials, cells have unique IDs that are assigned either manually or automatically. Note that the gap cell doesn't have any material assigned (it is void by default).", "fuel = openmc.Cell(1, 'fuel')\nfuel.fill = uo2\nfuel.region = fuel_region\n\ngap = openmc.Cell(2, 'air gap')\ngap.region = gap_region\n\nclad = openmc.Cell(3, 'clad')\nclad.fill = zirconium\nclad.region = clad_region", "Finally, we need to handle the coolant outside of our fuel pin. To do this, we create x- and y-planes that bound the geometry.", "pitch = 1.26\nleft = openmc.XPlane(x0=-pitch/2, boundary_type='reflective')\nright = openmc.XPlane(x0=pitch/2, boundary_type='reflective')\nbottom = openmc.YPlane(y0=-pitch/2, boundary_type='reflective')\ntop = openmc.YPlane(y0=pitch/2, boundary_type='reflective')", "The water region is going to be everything outside of the clad outer radius and within the box formed as the intersection of four half-spaces.", "water_region = +left & -right & +bottom & -top & +clad_or\n\nmoderator = openmc.Cell(4, 'moderator')\nmoderator.fill = water\nmoderator.region = water_region", "OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.", "box = openmc.get_rectangular_prism(width=pitch, height=pitch,\n boundary_type='reflective')\ntype(box)", "Pay attention here -- the object that was returned is NOT a surface. It is actually the intersection of four surface half-spaces, just like we created manually before. Thus, we don't need to apply the unary operator (-box). Instead, we can directly combine it with +clad_or.", "water_region = box & +clad_or", "The final step is to assign the cells we created to a universe and tell OpenMC that this universe is the \"root\" universe in our geometry. The Geometry is the final object that is actually exported to XML.", "root = openmc.Universe(cells=(fuel, gap, clad, moderator))\n\ngeom = openmc.Geometry()\ngeom.root_universe = root\n\n# or...\ngeom = openmc.Geometry(root)\ngeom.export_to_xml()\n!cat geometry.xml", "Starting source and settings\nThe Python API has a module openmc.stats with various univariate and multivariate probability distributions. We can use these distributions to create a starting source using the openmc.Source object.", "point = openmc.stats.Point((0, 0, 0))\nsrc = openmc.Source(space=point)", "Now let's create a Settings object and give it the source we created along with specifying how many batches and particles we want to run.", "settings = openmc.Settings()\nsettings.source = src\nsettings.batches = 100\nsettings.inactive = 10\nsettings.particles = 1000\n\nsettings.export_to_xml()\n!cat settings.xml", "User-defined tallies\nWe actually have all the required files needed to run a simulation. 
Before we do that though, let's give a quick example of how to create tallies. We will show how one would tally the total, fission, absorption, and (n,$\\gamma$) reaction rates for $^{235}$U in the cell containing fuel. Recall that filters allow us to specify where in phase-space we want events to be tallied and scores tell us what we want to tally:\n$$X = \\underbrace{\\int d\\mathbf{r} \\int d\\mathbf{\\Omega} \\int dE}{\\text{filters}} \\; \\underbrace{f(\\mathbf{r},\\mathbf{\\Omega},E)}{\\text{scores}} \\psi (\\mathbf{r},\\mathbf{\\Omega},E)$$\nIn this case, the where is \"the fuel cell\". So, we will create a cell filter specifying the fuel cell.", "cell_filter = openmc.CellFilter(fuel)\n\nt = openmc.Tally(1)\nt.filters = [cell_filter]", "The what is the total, fission, absorption, and (n,$\\gamma$) reaction rates in $^{235}$U. By default, if we only specify what reactions, it will gives us tallies over all nuclides. We can use the nuclides attribute to name specific nuclides we're interested in.", "t.nuclides = ['U235']\nt.scores = ['total', 'fission', 'absorption', '(n,gamma)']", "Similar to the other files, we need to create a Tallies collection and export it to XML.", "tallies = openmc.Tallies([t])\ntallies.export_to_xml()\n!cat tallies.xml", "Running OpenMC\nRunning OpenMC from Python can be done using the openmc.run() function. This function allows you to set the number of MPI processes and OpenMP threads, if need be.", "openmc.run()", "Great! OpenMC already told us our k-effective. It also spit out a file called tallies.out that shows our tallies. This is a very basic method to look at tally data; for more sophisticated methods, see other example notebooks.", "!cat tallies.out", "Geometry plotting\nWe saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the Fortran codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() method. Instead though, we create Plot instances, assign them to a Plots collection, export it to XML, and then run OpenMC in geometry plotting mode. As an example, let's specify that we want the plot to be colored by material (rather than by cell) and we assign yellow to fuel and blue to water.", "p = openmc.Plot()\np.filename = 'pinplot'\np.width = (pitch, pitch)\np.pixels = (200, 200)\np.color_by = 'material'\np.colors = {uo2: 'yellow', water: 'blue'}", "With our plot created, we need to add it to a Plots collection which can be exported to XML.", "plots = openmc.Plots([p])\nplots.export_to_xml()\n!cat plots.xml", "Now we can run OpenMC in plotting mode by calling the plot_geometry() function. Under the hood this is calling openmc --plot.", "openmc.plot_geometry()", "OpenMC writes out a peculiar image with a .ppm extension. If you have ImageMagick installed, this can be converted into a more normal .png file.", "!convert pinplot.ppm pinplot.png", "We can use functionality from IPython to display the image inline in our notebook:", "from IPython.display import Image\nImage(\"pinplot.png\")", "That was a little bit cumbersome. Thankfully, OpenMC provides us with a function that does all that \"boilerplate\" work.", "openmc.plot_inline(p)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vzg100/Post-Translational-Modification-Prediction
.ipynb_checkpoints/Phosphorylation Sequence Tests -MLP -dbptm+ELM-VectorAvr.-checkpoint.ipynb
mit
[ "Template for test", "from pred import Predictor\nfrom pred import sequence_vector\nfrom pred import chemical_vector", "Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.\nIncluded is N Phosphorylation however no benchmarks are available, yet. \nTraining data is from phospho.elm and benchmarks are from dbptm.", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"S\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"S\")\n del x\n", "Y Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_Y_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_Y_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")\n del x\n", "T Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_t_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"T\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_t_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"T\")\n del x\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
albahnsen/PracticalMachineLearningClass
notebooks/07-regularization.ipynb
mit
[ "07 - Regularization\nby Alejandro Correa Bahnsen and Jesus Solano\nversion 1.5, January 2019\nPart of the class Practical Machine Learning\nThis notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Rick Muller, Sandia National Laboratories(https://github.com/justmarkham)\nAgenda:\n\nOverfitting (review)\nOverfitting with linear models\nRegularization of linear models\nRegularized regression in scikit-learn\nRegularized classification in scikit-learn\nComparing regularized linear models with unregularized linear models\n\nPart 1: Overfitting (review)\nWhat is overfitting?\n\nBuilding a model that matches the training data \"too closely\"\nLearning from the noise in the data, rather than just the signal\n\nHow does overfitting occur?\n\nEvaluating a model by testing it on the same data that was used to train it\nCreating a model that is \"too complex\"\n\nWhat is the impact of overfitting?\n\nModel will do well on the training data, but won't generalize to out-of-sample data\nModel will have low bias, but high variance\n\nOverfitting with KNN\n\nOverfitting with polynomial regression\n\nOverfitting with decision trees\n\nPart 2: Overfitting with linear models\nWhat are the general characteristics of linear models?\n\nLow model complexity\nHigh bias, low variance\nDoes not tend to overfit\n\nNevertheless, overfitting can still occur with linear models if you allow them to have high variance. Here are some common causes:\nCause 1: Irrelevant features\nLinear models can overfit if you include \"irrelevant features\", meaning features that are unrelated to the response. Why?\nBecause it will learn a coefficient for every feature you include in the model, regardless of whether that feature has the signal or the noise.\nThis is especially a problem when p (number of features) is close to n (number of observations), because that model will naturally have high variance.\nCause 2: Correlated features\nLinear models can overfit if the included features are highly correlated with one another. Why?\nFrom the scikit-learn documentation:\n\n\"...coefficient estimates for Ordinary Least Squares rely on the independence of the model terms. When terms are correlated and the columns of the design matrix X have an approximate linear dependence, the design matrix becomes close to singular and as a result, the least-squares estimate becomes highly sensitive to random errors in the observed response, producing a large variance.\"\n\nCause 3: Large coefficients\nLinear models can overfit if the coefficients (after feature standardization) are too large. 
Why?\nBecause the larger the absolute value of the coefficient, the more power it has to change the predicted response, resulting in a higher variance.\nPart 3: Regularization of linear models\n\nRegularization is a method for \"constraining\" or \"regularizing\" the size of the coefficients, thus \"shrinking\" them towards zero.\nIt reduces model variance and thus minimizes overfitting.\nIf the model is too complex, it tends to reduce variance more than it increases bias, resulting in a model that is more likely to generalize.\n\nOur goal is to locate the optimum model complexity, and thus regularization is useful when we believe our model is too complex.\n\nHow does regularization work?\nFor a normal linear regression model, we estimate the coefficients using the least squares criterion, which minimizes the residual sum of squares (RSS):\n\nFor a regularized linear regression model, we minimize the sum of RSS and a \"penalty term\" that penalizes coefficient size.\nRidge regression (or \"L2 regularization\") minimizes: $$\\text{RSS} + \\alpha \\sum_{j=1}^p \\beta_j^2$$\nLasso regression (or \"L1 regularization\") minimizes: $$\\text{RSS} + \\alpha \\sum_{j=1}^p |\\beta_j|$$\n\n$p$ is the number of features\n$\\beta_j$ is a model coefficient\n$\\alpha$ is a tuning parameter:\nA tiny $\\alpha$ imposes no penalty on the coefficient size, and is equivalent to a normal linear regression model.\nIncreasing the $\\alpha$ penalizes the coefficients and thus shrinks them.\n\n\n\nLasso and ridge path diagrams\nA larger alpha (towards the left of each diagram) results in more regularization:\n\nLasso regression shrinks coefficients all the way to zero, thus removing them from the model\nRidge regression shrinks coefficients toward zero, but they rarely reach zero\n\nSource code for the diagrams: Lasso regression and Ridge regression\n\nAdvice for applying regularization\nShould features be standardized?\n\nYes, because otherwise, features would be penalized simply because of their scale.\nAlso, standardizing avoids penalizing the intercept, which wouldn't make intuitive sense.\n\nHow should you choose between Lasso regression and Ridge regression?\n\nLasso regression is preferred if we believe many features are irrelevant or if we prefer a sparse model.\nIf model performance is your primary concern, it is best to try both.\nElasticNet regression is a combination of lasso regression and ridge Regression.\n\nVisualizing regularization\nBelow is a visualization of what happens when you apply regularization. The general idea is that you are restricting the allowed values of your coefficients to a certain \"region\". 
Within that region, you want to find the coefficients that result in the best model.\n\nIn this diagram:\n\nWe are fitting a linear regression model with two features, $x_1$ and $x_2$.\n$\\hat\\beta$ represents the set of two coefficients, $\\beta_1$ and $\\beta_2$, which minimize the RSS for the unregularized model.\nRegularization restricts the allowed positions of $\\hat\\beta$ to the blue constraint region:\nFor lasso, this region is a diamond because it constrains the absolute value of the coefficients.\nFor ridge, this region is a circle because it constrains the square of the coefficients.\n\n\nThe size of the blue region is determined by $\\alpha$, with a smaller $\\alpha$ resulting in a larger region:\nWhen $\\alpha$ is zero, the blue region is infinitely large, and thus the coefficient sizes are not constrained.\nWhen $\\alpha$ increases, the blue region gets smaller and smaller.\n\n\n\nIn this case, $\\hat\\beta$ is not within the blue constraint region. Thus, we need to move $\\hat\\beta$ until it intersects the blue region, while increasing the RSS as little as possible.\nFrom page 222 of An Introduction to Statistical Learning:\n\nThe ellipses that are centered around $\\hat\\beta$ represent regions of constant RSS. In other words, all of the points on a given ellipse share a common value of the RSS. As the ellipses expand away from the least squares coefficient estimates, the RSS increases. Equations (6.8) and (6.9) indicate that the lasso and ridge regression coefficient estimates are given by the first point at which an ellipse contacts the constraint region.\nSince ridge regression has a circular constraint with no sharp points, this intersection will not generally occur on an axis, and so the ridge regression coefficient estimates will be exclusively non-zero. However, the lasso constraint has corners at each of the axes, and so the ellipse will often intersect the constraint region at an axis. When this occurs, one of the coefficients will equal zero. In higher dimensions, many of the coefficient estimates may equal zero simultaneously. 
In Figure 6.7, the intersection occurs at $\\beta_1 = 0$, and so the resulting model will only include $\\beta_2$.\n\nPart 4: Regularized regression in scikit-learn\n\nCommunities and Crime dataset from the UCI Machine Learning Repository: data, data dictionary\nGoal: Predict the violent crime rate for a community given socioeconomic and law enforcement data\n\nLoad and prepare the crime dataset", "# read in the dataset\nimport pandas as pd\nurl = 'https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/communities.data'\ncrime = pd.read_csv(url, header=None, na_values=['?'])\ncrime.head()\n\n# examine the response variable\ncrime[127].describe()\n\n# remove categorical features\ncrime.drop([0, 1, 2, 3, 4], axis=1, inplace=True)\n\n# remove rows with any missing values\ncrime.dropna(inplace=True)\n\n# check the shape\ncrime.shape\n\n# define X and y\nX = crime.drop(127, axis=1)\ny = crime[127]\n\n# split into training and testing sets\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)", "Linear regression", "# build a linear regression model\nfrom sklearn.linear_model import LinearRegression\nlinreg = LinearRegression()\nlinreg.fit(X_train, y_train)\n\n# examine the coefficients\nprint(linreg.coef_)\n\n# make predictions\ny_pred = linreg.predict(X_test)\n\n# calculate RMSE\nfrom sklearn import metrics\nimport numpy as np\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))", "Ridge regression\n\nRidge documentation\nalpha: must be positive, increase for more regularization\nnormalize: scales the features (without using StandardScaler)", "# alpha=0 is equivalent to linear regression\nfrom sklearn.linear_model import Ridge\nridgereg = Ridge(alpha=0, normalize=True)\nridgereg.fit(X_train, y_train)\ny_pred = ridgereg.predict(X_test)\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))\n\n# try alpha=0.1\nridgereg = Ridge(alpha=0.1, normalize=True)\nridgereg.fit(X_train, y_train)\ny_pred = ridgereg.predict(X_test)\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))\n\n# examine the coefficients\nprint(ridgereg.coef_)", "RidgeCV: ridge regression with built-in cross-validation of the alpha parameter\nalphas: array of alpha values to try", "# create an array of alpha values\nalpha_range = 10.**np.arange(-2, 3)\nalpha_range\n\n# select the best alpha with RidgeCV\nfrom sklearn.linear_model import RidgeCV\nridgeregcv = RidgeCV(alphas=alpha_range, normalize=True, scoring='neg_mean_squared_error')\nridgeregcv.fit(X_train, y_train)\nridgeregcv.alpha_\n\n# predict method uses the best alpha value\ny_pred = ridgeregcv.predict(X_test)\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))", "Lasso regression\n\nLasso documentation\nalpha: must be positive, increase for more regularization\nnormalize: scales the features (without using StandardScaler)", "# try alpha=0.001 and examine coefficients\nfrom sklearn.linear_model import Lasso\nlassoreg = Lasso(alpha=0.001, normalize=True)\nlassoreg.fit(X_train, y_train)\nprint(lassoreg.coef_)\n\n# try alpha=0.01 and examine coefficients\nlassoreg = Lasso(alpha=0.01, normalize=True)\nlassoreg.fit(X_train, y_train)\nprint(lassoreg.coef_)\n\n# calculate RMSE (for alpha=0.01)\ny_pred = lassoreg.predict(X_test)\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))", "LassoCV: lasso regression with built-in cross-validation of the alpha parameter\nn_alphas: number of alpha values (automatically chosen) to try", "# select the best alpha 
with LassoCV\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom sklearn.linear_model import LassoCV\nlassoregcv = LassoCV(n_alphas=100, normalize=True, random_state=1,cv=5)\nlassoregcv.fit(X_train, y_train)\nlassoregcv.alpha_\n\n# examine the coefficients\nprint(lassoregcv.coef_)\n\n# predict method uses the best alpha value\ny_pred = lassoregcv.predict(X_test)\nprint(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))", "Part 5: Regularized classification in scikit-learn\n\nWine dataset from the UCI Machine Learning Repository: data, data dictionary\nGoal: Predict the origin of wine using chemical analysis\n\nLoad and prepare the wine dataset", "# read in the dataset\nurl = 'https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/wine.data'\nwine = pd.read_csv(url, header=None)\nwine.head()\n\n# examine the response variable\nwine[0].value_counts()\n\n# define X and y\nX = wine.drop(0, axis=1)\ny = wine[0]\n\n# split into training and testing sets\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)", "Logistic regression (unregularized)", "# build a logistic regression model\nfrom sklearn.linear_model import LogisticRegression\nlogreg = LogisticRegression(C=1e9,solver='liblinear',multi_class='auto')\nlogreg.fit(X_train, y_train)\n\n# examine the coefficients\nprint(logreg.coef_)\n\n# generate predicted probabilities\ny_pred_prob = logreg.predict_proba(X_test)\nprint(y_pred_prob)\n\n# calculate log loss\nprint(metrics.log_loss(y_test, y_pred_prob))", "Logistic regression (regularized)\n\nLogisticRegression documentation\nC: must be positive, decrease for more regularization\npenalty: l1 (lasso) or l2 (ridge)", "# standardize X_train and X_test\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX_train = X_train.astype(float)\nX_test = X_test.astype(float)\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\n# try C=0.1 with L1 penalty\nlogreg = LogisticRegression(C=0.1, penalty='l1',solver='liblinear',multi_class='auto')\nlogreg.fit(X_train_scaled, y_train)\nprint(logreg.coef_)\n\n# generate predicted probabilities and calculate log loss\ny_pred_prob = logreg.predict_proba(X_test_scaled)\nprint(metrics.log_loss(y_test, y_pred_prob))\n\n# try C=0.1 with L2 penalty\nlogreg = LogisticRegression(C=0.1, penalty='l2',multi_class='auto',solver='liblinear')\nlogreg.fit(X_train_scaled, y_train)\nprint(logreg.coef_)\n\n# generate predicted probabilities and calculate log loss\ny_pred_prob = logreg.predict_proba(X_test_scaled)\nprint(metrics.log_loss(y_test, y_pred_prob))", "Part 6: Comparing regularized linear models with unregularized linear models\nAdvantages of regularized linear models:\n\nBetter performance\nL1 regularization performs automatic feature selection\nUseful for high-dimensional problems (p > n)\n\nDisadvantages of regularized linear models:\n\nTuning is required\nFeature scaling is recommended\nLess interpretable (due to feature scaling)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mplaine/www.laatukiikut.fi
2018/data_wrangling/Create Boulders Final.ipynb
mit
[ "Suomen Parhaat Boulderit 2018: Create Boulders Final\nMarch 17, 2018\n<br>\nGoogle Maps JavaScript API key. See https://developers.google.com/maps/documentation/javascript/get-api-key", "GOOGLE_MAPS_JAVASCRIPT_API_KEY = \"YOUR_API_KEY\"", "<br>\nImport required modules", "import json\nimport time\nimport numpy as np\nimport pandas as pd\nfrom geopy.geocoders import GoogleV3\nfrom geopy.exc import GeocoderQueryError, GeocoderQuotaExceeded", "<br>\nLoad the datafile spb2018_-_cleaned.csv, which contains the form responses to the Suomen Parhaat Boulderit 2018 survey.", "# Load cleaned dataset\nspb2018_df = pd.read_csv(\"data/survey_-_cleaned.csv\")\n\n# Drop duplicates (exclude the Timestamp column from comparisons)\nspb2018_df = spb2018_df.drop_duplicates(subset=spb2018_df.columns.values.tolist()[1:])\nspb2018_df.head()", "<br>\nLoad the datafile boulders_-_prefilled.csv, which contains manually added details of each voted boulder.", "boulder_details_df = pd.read_csv(\"data/boulders_-_prefilled.csv\", index_col=\"Name\")\nboulder_details_df.head()", "<br>\nAdd column VotedBy", "\"\"\"\n# Simpler but slower (appr. four times) implementation\n# 533 ms ± 95.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\ndef add_column_votedby(column_name=\"VotedBy\"):\n # Gender mappings from Finnish to English\n gender_dict = {\n \"Mies\": \"Male\",\n \"Nainen\": \"Female\"\n }\n\n # Iterate over boulders\n for index, row in boulder_details_df.iterrows():\n boulder_name = index\n gender_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name) | (spb2018_df[\"Boulderin nimi.1\"] == boulder_name) | (spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Sukupuoli\"]\n boulder_details_df.loc[boulder_name, column_name] = gender_dict[gender_s.iloc[0]] if gender_s.nunique() == 1 else \"Both\"\n\"\"\"\n\"\"\"\n# More complex but faster (appr. four times) implementation\n# 136 ms ± 5.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\ndef add_column_votedby(column_name=\"VotedBy\"):\n # Initialize the new column\n boulder_details_df[column_name] = \"\"\n \n # Gender mappings from Finnish to English\n gender_dict = {\n \"Mies\": \"Male\",\n \"Nainen\": \"Female\"\n }\n\n def update_genders(gender, boulder_names):\n for boulder_name in boulder_names:\n previous_gender = boulder_details_df.loc[boulder_name, column_name]\n if previous_gender == \"\" or previous_gender == gender:\n boulder_details_df.loc[boulder_name, column_name] = gender\n else:\n boulder_details_df.loc[boulder_name, column_name] = \"Both\"\n\n # Iterate over form responses\n for index, row in spb2018_df.iterrows():\n gender = gender_dict[row[\"Sukupuoli\"]]\n boulder_names = [row[\"Boulderin nimi\"], row[\"Boulderin nimi.1\"], row[\"Boulderin nimi.2\"]]\n boulder_names = [boulder_name for boulder_name in boulder_names if pd.notnull(boulder_name)]\n update_genders(gender, boulder_names)\n\"\"\"\n# Typical implementation\n# 430 ms ± 78.2 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each)\ndef add_column_votedby(column_name=\"VotedBy\"):\n # Gender mappings from Finnish to English\n gender_dict = {\n \"Mies\": \"Male\",\n \"Nainen\": \"Female\"\n }\n \n def set_voted_by(row):\n boulder_name = row.name\n gender_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name) | (spb2018_df[\"Boulderin nimi.1\"] == boulder_name) | (spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Sukupuoli\"]\n return gender_dict[gender_s.iloc[0]] if gender_s.nunique() == 1 else \"Both\"\n \n boulder_details_df[column_name] = boulder_details_df.apply(set_voted_by, axis=1)\n \nadd_column_votedby()\nboulder_details_df.head()", "<br>\nAdd column Votes.", "def add_column_votes(column_name=\"Votes\"):\n boulder_name_columns = [spb2018_df[\"Boulderin nimi\"], spb2018_df[\"Boulderin nimi.1\"], spb2018_df[\"Boulderin nimi.2\"]]\n all_voted_boulders_s = pd.concat(boulder_name_columns, ignore_index=True).dropna()\n boulder_votes_s = all_voted_boulders_s.value_counts()\n boulder_details_df[column_name] = boulder_votes_s\n \nadd_column_votes()\nboulder_details_df.sort_values(by=[\"Votes\"], ascending=[False]).loc[boulder_details_df[\"Votes\"] >= 3]", "<br>\nAdd columns Latitude and Longitude.", "def add_columns_latitude_and_longitude(column_names=[\"Latitude\", \"Longitude\"]):\n boulder_details_df[[column_names[0], column_names[1]]] = boulder_details_df[\"Coordinates\"].str.split(\",\", expand=True).astype(float)\n \nadd_columns_latitude_and_longitude()\nboulder_details_df.head()", "<br>\nAdd column GradeNumeric.", "def add_column_gradenumeric(column_name=\"GradeNumeric\"):\n # Grade mappings from Font to numeric\n grade_dict = {\n \"?\": 0,\n \"1\": 1,\n \"2\": 2,\n \"3\": 3,\n \"4\": 4,\n \"4+\": 5,\n \"5\": 6,\n \"5+\": 7,\n \"6A\": 8,\n \"6A+\": 9,\n \"6B\": 10,\n \"6B+\": 11,\n \"6C\": 12,\n \"6C+\": 13,\n \"7A\": 14,\n \"7A+\": 15,\n \"7B\": 16,\n \"7B+\": 17,\n \"7C\": 18,\n \"7C+\": 19,\n \"8A\": 20,\n \"8A+\": 21,\n \"8B\": 22,\n \"8B+\": 23,\n \"8C\": 24,\n \"8C+\": 25,\n \"9A\": 26\n }\n \n boulder_details_df[column_name] = boulder_details_df.apply(lambda row: str(grade_dict[row[\"Grade\"]]) if pd.notnull(row[\"Grade\"]) else np.nan, axis=1)\n boulder_details_df[column_name] = boulder_details_df[column_name].astype(int)\n \nadd_column_gradenumeric()\nboulder_details_df.head()", "<br>\nAdd column Adjectives", "def add_column_adjectives(column_name=\"Adjectives\"):\n def set_adjectives(row):\n boulder_name = row.name\n adjectives1_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name), \"Kuvaile boulderia kolmella (3) adjektiivilla\"]\n adjectives2_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.1\"] == boulder_name), \"Kuvaile boulderia kolmella (3) adjektiivilla.1\"]\n adjectives3_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Kuvaile boulderia kolmella (3) adjektiivilla.2\"]\n adjectives_s = adjectives1_s.append(adjectives2_s).append(adjectives3_s)\n adjectives = \",\".join(adjectives_s)\n # Clean adjectives\n adjectives = \",\".join(sorted(list(set([adjective.strip().lower() for adjective in adjectives.split(\",\")]))))\n return adjectives\n \n boulder_details_df[column_name] = boulder_details_df.apply(set_adjectives, axis=1)\n \nadd_column_adjectives()\nboulder_details_df.head()", "<br>\nAdd column MainHoldTypes", "def add_column_main_hold_types(column_name=\"MainHoldTypes\"):\n def set_main_hold_types(row):\n boulder_name = row.name\n main_hold_types1_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name), 
\"Boulderin pääotetyypit\"]\n main_hold_types2_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.1\"] == boulder_name), \"Boulderin pääotetyypit.1\"]\n main_hold_types3_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Boulderin pääotetyypit.2\"]\n main_hold_types_s = main_hold_types1_s.append(main_hold_types2_s).append(main_hold_types3_s)\n main_hold_types = \",\".join(main_hold_types_s)\n # Clean main_hold_types\n main_hold_types = \",\".join(sorted(list(set([main_hold_type.strip().lower() for main_hold_type in main_hold_types.split(\",\")]))))\n return main_hold_types\n \n boulder_details_df[column_name] = boulder_details_df.apply(set_main_hold_types, axis=1)\n \nadd_column_main_hold_types()\nboulder_details_df.head()", "<br>\nAdd column MainProfiles", "def add_column_main_profiles(column_name=\"MainProfiles\"):\n def set_main_profiles(row):\n boulder_name = row.name\n main_profiles1_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name), \"Boulderin pääprofiilit\"]\n main_profiles2_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.1\"] == boulder_name), \"Boulderin pääprofiilit.1\"]\n main_profiles3_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Boulderin pääprofiilit.2\"]\n main_profiles_s = main_profiles1_s.append(main_profiles2_s).append(main_profiles3_s)\n main_profiles = \",\".join(main_profiles_s)\n # Clean main_profiles\n main_profiles = \",\".join(sorted(list(set([main_profile.strip().lower() for main_profile in main_profiles.split(\",\")]))))\n return main_profiles\n \n boulder_details_df[column_name] = boulder_details_df.apply(set_main_profiles, axis=1)\n \nadd_column_main_profiles()\nboulder_details_df.head()", "<br>\nAdd column MainSkillsNeeded", "def add_column_main_skills_needed(column_name=\"MainSkillsNeeded\"):\n def set_main_skills_needed(row):\n boulder_name = row.name\n main_skills_needed1_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name), \"Boulderin kiipeämiseen vaadittavat pääkyvyt\"]\n main_skills_needed2_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.1\"] == boulder_name), \"Boulderin kiipeämiseen vaadittavat pääkyvyt.1\"]\n main_skills_needed3_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Boulderin kiipeämiseen vaadittavat pääkyvyt.2\"]\n main_skills_needed_s = main_skills_needed1_s.append(main_skills_needed2_s).append(main_skills_needed3_s)\n main_skills_needed = \",\".join(main_skills_needed_s)\n # Clean main_skills_needed\n main_skills_needed = \",\".join(sorted(list(set([main_skill_needed.strip().lower() for main_skill_needed in main_skills_needed.split(\",\")]))))\n return main_skills_needed\n \n boulder_details_df[column_name] = boulder_details_df.apply(set_main_skills_needed, axis=1)\n \nadd_column_main_skills_needed()\nboulder_details_df.head()", "<br>\nAdd column Comments", "def add_column_comments(column_name=\"Comments\"):\n def set_comments(row):\n boulder_name = row.name\n comments1_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi\"] == boulder_name), \"Kuvaile boulderia omin sanoin (vapaaehtoinen)\"]\n comments2_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.1\"] == boulder_name), \"Kuvaile boulderia omin sanoin (vapaaehtoinen).1\"]\n comments3_s = spb2018_df.loc[(spb2018_df[\"Boulderin nimi.2\"] == boulder_name), \"Kuvaile boulderia omin sanoin (vapaaehtoinen).2\"]\n comments_s = comments1_s.append(comments2_s).append(comments3_s)\n comments = []\n for index, value in comments_s.iteritems():\n if pd.notnull(value):\n 
comments.append(value.strip())\n return \",\".join(\"\\\"{}\\\"\".format(comment) for comment in comments)\n \n boulder_details_df[column_name] = boulder_details_df.apply(set_comments, axis=1)\n \nadd_column_comments()\nboulder_details_df.head()", "<br>\nAdd columns AreaLevel1, AreaLevel2, and AreaLevel3", "def add_columns_arealevel1_arealevel2_and_arealevel3(column_names=[\"AreaLevel1\", \"AreaLevel2\", \"AreaLevel3\"]):\n boulder_details_df.drop(columns=[column_names[0], column_names[1], column_names[2]], inplace=True, errors=\"ignore\")\n geolocator = GoogleV3(api_key=GOOGLE_MAPS_JAVASCRIPT_API_KEY)\n\n def extract_administrative_area_levels(location_results, approximateLocation, area_levels_dict):\n # List of location result types that we are interested in\n location_result_types = [\"administrative_area_level_1\", \"administrative_area_level_2\", \"administrative_area_level_3\"]\n\n # Iterate over location results\n for location_result in location_results:\n location_result_json = location_result.raw\n # Extract data only from those location results that we are interested in\n if any(location_result_type in location_result_json[\"types\"] for location_result_type in location_result_types):\n # Extract location result type\n location_result_type = location_result_json[\"types\"][0]\n # Iterate over address components\n for address_component in location_result_json[\"address_components\"]:\n # Extract data only from the matched location result type\n if location_result_type in address_component[\"types\"]:\n # Extract the name of the administrative area level 1\n if location_result_type == location_result_types[0]:\n area_levels_dict[\"AreaLevel1\"] = address_component[\"long_name\"]\n # Extract the name of the administrative area level 2\n if location_result_type == location_result_types[1] and approximateLocation == \"No\":\n area_levels_dict[\"AreaLevel2\"] = address_component[\"long_name\"]\n # Extract the name of the administrative area level 3\n if location_result_type == location_result_types[2] and approximateLocation == \"No\":\n area_levels_dict[\"AreaLevel3\"] = address_component[\"long_name\"]\n return area_levels_dict\n\n def get_area_levels(row):\n # Area levels template\n area_levels_dict = {\n column_names[0]: \"\",\n column_names[1]: \"\",\n column_names[2]: \"\"\n }\n\n geocoded = False\n while geocoded is not True:\n # Reverse geocode coordinates\n try:\n location_results = geolocator.reverse(row[\"Coordinates\"], language=\"fi\")\n area_levels_dict = extract_administrative_area_levels(location_results, row[\"ApproximateCoordinates\"], area_levels_dict)\n geocoded = True\n except GeocoderQueryError as gqe:\n print(\"Geocoding error with {}: {}\".format(row.name, str(gqe)))\n print(\"Skipping {}\".format(row.name))\n geocoded = True\n except GeocoderQuotaExceeded as gqe:\n print(\"Geocoding quota exceeded: {}\".format(str(gqe)))\n print(\"Backing off for a bit\")\n time.sleep(30 * 60) # sleep for 30 minutes\n print(\"Back in action\")\n\n return pd.Series(area_levels_dict)\n\n boulder_area_levels_df = boulder_details_df[[\"Coordinates\", \"ApproximateCoordinates\"]].apply(get_area_levels, axis=1)\n return pd.merge(boulder_details_df, boulder_area_levels_df, how=\"outer\", left_index=True, right_index=True)\n\nboulder_details_df = add_columns_arealevel1_arealevel2_and_arealevel3()\nboulder_details_df.head()", "<br>\nCreate boulders final file boulders_-_final.csv.", "def create_boulders_final():\n boulder_details_reset_df = boulder_details_df.reset_index()\n 
boulder_details_reset_df = boulder_details_reset_df[[\"Votes\", \"VotedBy\", \"Name\", \"Grade\", \"GradeNumeric\", \"InFinland\", \"AreaLevel1\", \"AreaLevel2\", \"AreaLevel3\", \"Crag\", \"ApproximateCoordinates\", \"Coordinates\", \"Latitude\", \"Longitude\", \"Url27crags\", \"UrlVideo\", \"UrlStory\", \"MainProfiles\", \"MainHoldTypes\", \"MainSkillsNeeded\", \"Adjectives\", \"Comments\"]]\n boulder_details_reset_df = boulder_details_reset_df.sort_values(by=[\"Votes\", \"GradeNumeric\", \"Name\"], ascending=[False, False, True])\n boulder_details_reset_df.to_csv(\"data/boulders_-_final.csv\", index=False)\n\ncreate_boulders_final()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
apache-2.0
[ "Gradient Boosting From Scratch\nLet's implement gradient boosting from scratch.", "from __future__ import print_function\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nfrom matplotlib import pyplot as plt\nfrom sklearn.tree import DecisionTreeRegressor\nfrom tensorflow.keras.datasets import boston_housing\n\nnp.random.seed(0)\n\nplt.rcParams['figure.figsize'] = (8.0, 5.0)\nplt.rcParams['axes.labelsize'] = 20\nplt.rcParams['ytick.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 14\n\n(x_train, y_train), (x_test, y_test) = boston_housing.load_data()\n\nx_train.shape", "Exploration\nLet explore the data before building a model. The goal is to predict the median value of owner-occupied homes in $1000s.", "# Create training/test dataframes for visualization/data exploration.\n# Description of features: https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html\nfeature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD','TAX', 'PTRATIO', 'B', 'LSTAT']\ndf_train = pd.DataFrame(x_train, columns=feature_names)\ndf_test = pd.DataFrame(x_test, columns=feature_names)", "Exercise #1: What are the most predictive features? Determine correlation for each feature with the label. You may find the corr function useful.\nTrain Gradient Boosting model\nTraining Steps to build model an ensemble of $K$ estimators.\n1. At $k=0$ build base model , $\\hat{y}{0}$: $\\hat{y}{0}=base_predicted$\n3. Compute residuals $r = \\sum_{i=0}^n (y_{k,i} - \\hat{y}{k,i})$; $n: number\\ train\\ examples$\n4. Train new model, fitting on residuals, $r$. We will call the predictions from this model $e{k}_predicted$\n5. Update model predictions at step $k$ by adding residual to current predictions: $\\hat{y}{k} = \\hat{y}{k-1} + e_{k}_predicted$\n6. Repeat steps 2 - 5 K times.\nIn summary, the goal is to build K estimators that learn to predict the residuals from the prior model; thus we are learning to \"correct\" the\npredictions up until this point.\n<br>\n$\\hat{y}{K} = base_predicted\\ +\\ \\sum{j=1}^Ke_{j}_predicted$\nBuild base model\nExercise #2: Make an initial prediction using the BaseModel class -- configure the predict method to predict the training mean.", "class BaseModel(object):\n \"\"\"Initial model that predicts mean of train set.\"\"\"\n\n def __init__(self, y_train):\n self.train_mean = # TODO\n\n def predict(self, x):\n \"\"\"Return train mean for every prediction.\"\"\"\n return # TODO\n\ndef compute_residuals(label, pred):\n \"\"\"Compute difference of labels and predictions.\n\n When using mean squared error loss function, the residual indicates the \n negative gradient of the loss function in prediction space. Thus by fitting\n the residuals, we performing gradient descent in prediction space. See for\n more detail:\n\n https://explained.ai/gradient-boosting/L2-loss.html\n \"\"\"\n return label - pred\n\ndef compute_rmse(x):\n return np.sqrt(np.mean(np.square(x)))\n\n# Build a base model that predicts the mean of the training set.\nbase_model = BaseModel(y_train)\ntest_pred = base_model.predict(x_test)\ntest_residuals = compute_residuals(y_test, test_pred)\ncompute_rmse(test_residuals)", "Let's see how the base model performs on out test data. 
Let's visualize performance compared to the LSTAT feature.", "feature = df_test.LSTAT\n\n# Pick a predictive feature for plotting.\nplt.plot(feature, y_test, 'go', alpha=0.7, markersize=10)\nplt.plot(feature, test_pred, label='initial prediction')\n\nplt.xlabel('LSTAT', size=20)\nplt.legend(prop={'size': 20});", "There is definitely room for improvement. We can also look at the residuals:", "plt.plot(feature, test_residuals, 'bo', alpha=0.7, markersize=10)\nplt.ylabel('residuals', size=20)\nplt.xlabel('LSTAT', size=20)\nplt.plot([feature.min(), feature.max()], [0, 0], 'b--', label='0 error');\nplt.legend(prop={'size': 20});", "Train Boosting model\nReturning to boosting, let's use our very first base model as our initial prediction. We'll then perform subsequent boosting iterations to improve upon this model.\ncreate_weak_learner", "def create_weak_learner(**tree_params):\n    \"\"\"Initialize a Decision Tree model.\"\"\"\n    model = DecisionTreeRegressor(**tree_params)\n    return model", "Make initial prediction.\nExercise #3: Update the prediction on the training set (train_pred) and on the testing set (test_pred) using the weak learner that predicts the residuals.", "base_model = BaseModel(y_train)\n\n# Training parameters.\ntree_params = {\n    'max_depth': 1,\n    'criterion': 'mse',\n    'random_state': 123\n    }\nN_ESTIMATORS = 50\nBOOSTING_LR = 0.1\n\n# Initial prediction, residuals.\ntrain_pred = base_model.predict(x_train)\ntest_pred = base_model.predict(x_test)\ntrain_residuals = compute_residuals(y_train, train_pred)\ntest_residuals = compute_residuals(y_test, test_pred)\n\n# Boosting.\ntrain_rmse, test_rmse = [], []\nfor _ in range(0, N_ESTIMATORS):\n    train_rmse.append(compute_rmse(train_residuals))\n    test_rmse.append(compute_rmse(test_residuals))\n    # Train weak learner.\n    model = create_weak_learner(**tree_params)\n    model.fit(x_train, train_residuals)\n    # Boosting magic happens here: add the residual prediction to correct\n    # the prior model.\n    grad_approx = # TODO\n    train_pred += # TODO\n    train_residuals = compute_residuals(y_train, train_pred)\n\n    # Keep track of residuals on validation set.\n    grad_approx = # TODO\n    test_pred += # TODO\n    test_residuals = compute_residuals(y_test, test_pred)", "Interpret results\nCan you improve the model results?", "plt.figure()\nplt.plot(train_rmse, label='train error')\nplt.plot(test_rmse, label='test error')\nplt.ylabel('rmse', size=20)\nplt.xlabel('Boosting Iterations', size=20);\nplt.legend()", "Let's visualize how the performance changes across iterations", "feature = df_test.LSTAT\nix = np.argsort(feature)\n\n# Pick a predictive feature for plotting.\nplt.plot(feature, y_test, 'go', alpha=0.7, markersize=10)\nplt.plot(feature[ix], test_pred[ix], label='boosted prediction', linewidth=2)\n\nplt.xlabel('feature', size=20)\nplt.legend(prop={'size': 20});", "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
as595/AllOfYourBases
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
gpl-3.0
[ "KeplerLightCurveCelerite.ipynb\n\n‹ KeplerLightCurve.ipynb › Copyright (C) ‹ 2017 › ‹ Anna Scaife - anna.scaife@manchester.ac.uk ›\nThis program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\nThis program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.\n\n[AMS - 170829] Notebook created for TIARA Astrostatistics Summer School, Taipei, September 2017.\nThis notebook runs through the Gaussian Process Modelling described in Example 3 of https://arxiv.org/pdf/1703.09710.pdf and builds on the methodology presented in the accompanying lecture: \"Can You Predict the Future..?\"\nIt uses a number of Python libraries, which are all installable using pip.\nThis example uses the celerite GPM library (http://celerite.readthedocs.io) and the emcee package (http://dan.iel.fm/emcee/).", "%matplotlib inline", "Import some libraries:", "import numpy as np\nimport pylab as pl", "Import the celerite Gaussian Process Modelling library and the george covariance kernels:", "import celerite\nfrom celerite import terms", "Specify the datafile containing Kepler data for the object KIC 1430163:", "filename=\"KIC1430163.tbl\"\ndatafile = open(filename,'r')", "Read the Kepler data from the file:", "time=[];value=[]\nwhile True:\n line = datafile.readline()\n if not line: break\n \n items=line.split()\n if (items[0][0]!='|'):\n time.append(float(items[1]))\n value.append(float(items[2]))\n \ntime=np.array(time)\nvalue=np.array(value)\n\nprint \"There are \",len(time),\" data points\"", "The paper says:\nWe set the mean function to zero\nand we can see from Fig 7 that the data have also been normalised to have a maximum value of one.\nSo, let's also do that:", "mean = np.mean(value)\nvalue-=mean\n\nnorm = np.max(value)\nvalue/=norm", "And the time has been made relative to the first measurement:", "day1 = time[0]\ntime-=day1", "Make a plot like the one in Figure 7:", "pl.subplot(111)\npl.scatter(time,value,s=0.2)\npl.axis([0.,60.,-1.,1.])\npl.ylabel(\"Relative flux [ppt]\")\npl.xlabel(\"Time [days]\")\npl.show()", "In the paper there are two suggested kernels for modelling the covariance of the Kepler data (Eqs. 55 & 56). In the paper the authors fit Eq 56 - here we are going to fit Eq. 56.\n$$\nk(\\tau) = \\frac{B}{1+C}\\exp^{-\\tau/L} \\left[ \\cos{\\left( \\frac{2\\pi\\tau}{P} \\right)} + (1+C) \\right]\n$$\nThis is the same as the CustomTerm described in the celerite documentation here: http://celerite.readthedocs.io/en/stable/python/kernel/ \nThere is one small difference though - the exponent is expressed differently. This doesn't mean we need to change anything... 
except for our prior bounds because we're going to apply those as logarithmic bounds so we will need to put a minus sign in front of them since $\\log(1/x) = -\\log(x)$.", "import autograd.numpy as np\n\nclass CustomTerm(terms.Term):\n parameter_names = (\"log_a\", \"log_b\", \"log_c\", \"log_P\")\n\n def get_real_coefficients(self, params):\n log_a, log_b, log_c, log_P = params\n b = np.exp(log_b)\n return (\n np.exp(log_a) * (1.0 + b) / (2.0 + b), np.exp(log_c),\n )\n\n def get_complex_coefficients(self, params):\n log_a, log_b, log_c, log_P = params\n b = np.exp(log_b)\n return (\n np.exp(log_a) / (2.0 + b), 0.0,\n np.exp(log_c), 2*np.pi*np.exp(-log_P),\n )", "We need to pick some first guess parameters. Because we're lazy we'll just start by setting them all to 1:", "log_a = 0.0;log_b = 0.0; log_c = 0.0; log_P = 0.0\nkernel = CustomTerm(log_a, log_b, log_c, log_P)\n\ngp = celerite.GP(kernel, mean=0.0)\n\nyerr = 0.000001*np.ones(time.shape)\ngp.compute(time,yerr)\n\nprint(\"Initial log-likelihood: {0}\".format(gp.log_likelihood(value)))\n\nt = np.arange(np.min(time),np.max(time),0.1)\n\n# calculate expectation and variance at each point:\nmu, cov = gp.predict(value, t)\nstd = np.sqrt(np.diag(cov))\n\nax = pl.subplot(111)\npl.plot(t,mu)\nax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True)\npl.scatter(time,value,s=2)\npl.axis([0.,60.,-1.,1.])\npl.ylabel(\"Relative flux [ppt]\")\npl.xlabel(\"Time [days]\")\npl.show()", "The paper says:\nAs with the earlier examples, we start by estimating the MAP parameters using L-BFGS-B\nSo let's do that. We'll use the scipy optimiser, which requires us to define a log(likelihood) function and a function for the gradient of the log(likelihood):", "def nll(p, y, gp):\n \n # Update the kernel parameters:\n gp.set_parameter_vector(p)\n \n # Compute the loglikelihood:\n ll = gp.log_likelihood(y)\n \n # The scipy optimizer doesn’t play well with infinities:\n return -ll if np.isfinite(ll) else 1e25\n\ndef grad_nll(p, y, gp):\n \n # Update the kernel parameters:\n gp.set_parameter_vector(p)\n \n # Compute the gradient of the loglikelihood:\n gll = gp.grad_log_likelihood(y)[1]\n \n return -gll", "I'm going to set bounds on the available parameters space, i.e. our prior volume, using the ranges taken from Table 4 of https://arxiv.org/pdf/1706.05459.pdf", "import scipy.optimize as op\n\n# extract our initial guess at parameters\n# from the celerite kernel and put it in a \n# vector:\np0 = gp.get_parameter_vector()\n\n# set prior ranges\n# Note that these are in *logarithmic* space\nbnds = ((-10.,0.),(-5.,5.),(-5.,-1.5),(-3.,5.))\n\n# run optimization:\nresults = op.minimize(nll, p0, method='L-BFGS-B', jac=grad_nll, bounds=bnds, args=(value, gp))\n\n# print the value of the optimised parameters:\nprint np.exp(results.x)\nprint(\"Final log-likelihood: {0}\".format(-results.fun))", "The key parameter here is the period, which is the fourth number along. We expect this to be about 3.9 and... we're getting 4.24, so not a million miles off.\nFrom the paper:\nThis star has a published rotation period of 3.88 ± 0.58 days, measured using traditional periodogram and autocorrelation function approaches applied to Kepler data from Quarters 0–16 (Mathur et al. 
2014), covering about four years.\nLet's now pass these optimised parameters to george and recompute our prediction:", "# pass the parameters to the george kernel:\ngp.set_parameter_vector(results.x)\n\nt = np.arange(np.min(time),np.max(time),0.1)\n\n# calculate expectation and variance at each point:\nmu, cov = gp.predict(value, t)\nstd = np.sqrt(np.diag(cov))\n\nax = pl.subplot(111)\npl.plot(t,mu)\nax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True)\npl.scatter(time,value,s=2)\npl.axis([0.,60.,-1.,1.])\npl.ylabel(\"Relative flux [ppt]\")\npl.xlabel(\"Time [days]\")\npl.show()", "", "import emcee\n\n# we need to define three functions: \n# a log likelihood, a log prior & a log posterior.", "First we need to define a log(likelihood). We'll use the log(likelihood) implemented in the george library, which implements:\n$$\n\\ln L = -\\frac{1}{2}(y - \\mu)^{\\rm T} C^{-1}(y - \\mu) - \\frac{1}{2}\\ln |C\\,| + \\frac{N}{2}\\ln 2\\pi\n$$\n(see Eq. 5 in https://arxiv.org/pdf/1706.05459.pdf).", "# set the loglikelihood:\ndef lnlike(p, x, y):\n \n lnB = np.log(p[0])\n lnC = p[1]\n lnL = np.log(p[2])\n lnP = np.log(p[3])\n \n p0 = np.array([lnB,lnC,lnL,lnP])\n \n # update kernel parameters:\n gp.set_parameter_vector(p0)\n \n # calculate the likelihood:\n ll = gp.log_likelihood(y)\n \n # return \n return ll if np.isfinite(ll) else 1e25", "We also need to specify our parameter priors. Here we'll just use uniform logarithmic priors. The ranges are the same as specified in Table 3 of https://arxiv.org/pdf/1703.09710.pdf.\n<img src=\"table3.png\">", "# set the logprior\ndef lnprior(p):\n \n # These ranges are taken from Table 4 \n # of https://arxiv.org/pdf/1703.09710.pdf\n \n lnB = np.log(p[0])\n lnC = p[1]\n lnL = np.log(p[2])\n lnP = np.log(p[3])\n \n # really crappy prior:\n if (-10<lnB<0.) and (-5.<lnC<5.) and (-5.<lnL<1.5) and (-3.<lnP<5.):\n return 0.0\n \n return -np.inf\n #return gp.log_prior()", "We then need to combine our log likelihood and our log prior into an (unnormalised) log posterior:", "# set the logposterior:\ndef lnprob(p, x, y):\n \n lp = lnprior(p)\n \n return lp + lnlike(p, x, y) if np.isfinite(lp) else -np.inf", "ok, now we have our probability stuff set up we can run the MCMC. We'll start by explicitly specifying our Kepler data as our training data:", "x_train = time\ny_train = value", "The paper then says:\ninitialize 32 walkers by sampling from an isotropic Gaussian with a standard deviation of $10^{−5}$ centered on the MAP parameters.\nSo, let's do that:", "# put all the data into a single array:\ndata = (x_train,y_train)\n\n# set your initial guess parameters\n# as the output from the scipy optimiser\n# remember celerite keeps these in ln() form!\n\n# C looks like it's going to be a very small\n# value - so we will sample from ln(C):\n# A, lnC, L, P\np = gp.get_parameter_vector()\ninitial = np.array([np.exp(p[0]),p[1],np.exp(p[2]),np.exp(p[3])])\nprint \"Initial guesses: \",initial\n\n# set the dimension of the prior volume \n# (i.e. how many parameters do you have?)\nndim = len(initial)\nprint \"Number of parameters: \",ndim\n\n# The number of walkers needs to be more than twice \n# the dimension of your parameter space. 
\nnwalkers = 32\n\n# perturb your inital guess parameters very slightly (10^-5)\n# to get your starting values:\np0 = [np.array(initial) + 1e-5 * np.random.randn(ndim)\n for i in xrange(nwalkers)]\n", "We can then use these inputs to initiate our sampler:", "# initalise the sampler:\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=data)", "The paper says:\nWe run 500 steps of burn-in, followed by 5000 steps of MCMC using emcee.\nFirst let's run the burn-in:", "# run a few samples as a burn-in:\nprint(\"Running burn-in\")\np0, lnp, _ = sampler.run_mcmc(p0, 500)\nsampler.reset()", "Now let's run the production MCMC:", "# take the highest likelihood point from the burn-in as a\n# starting point and now begin your production run:\nprint(\"Running production\")\np = p0[np.argmax(lnp)]\np0 = [p + 1e-5 * np.random.randn(ndim) for i in xrange(nwalkers)]\np0, _, _ = sampler.run_mcmc(p0, 5000)\n\nprint \"Finished\"\n\nimport acor\n\n# calculate the convergence time of our\n# MCMC chains:\nsamples = sampler.flatchain\ns2 = np.ndarray.transpose(samples)\ntau, mean, sigma = acor.acor(s2)\nprint \"Convergence time from acor: \", tau\nprint \"Number of independent samples:\", 5000.-(20.*tau)\n\n# get rid of the samples that were taken\n# before convergence:\ndelta = int(20*tau)\nsamples = sampler.flatchain[delta:,:]\n\nsamples[:, 2] = np.exp(samples[:, 2])\nb_mcmc, c_mcmc, l_mcmc, p_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]),\n zip(*np.percentile(samples, [16, 50, 84],\n axis=0)))\n\n# specify prediction points:\nt = np.arange(np.min(time),np.max(time),0.1)\n\n# update the kernel hyper-parameters:\nhp = np.array([b_mcmc[0], c_mcmc[0], l_mcmc[0], p_mcmc[0]])\n\nlnB = np.log(p[0])\nlnC = p[1]\nlnL = np.log(p[2])\nlnP = np.log(p[3])\n \np0 = np.array([lnB,lnC,lnL,lnP]) \ngp.set_parameter_vector(p0)\n\n \nprint hp\n# calculate expectation and variance at each point:\nmu, cov = gp.predict(value, t)\n \nax = pl.subplot(111)\npl.plot(t,mu)\nax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True)\npl.scatter(time,value,s=2)\npl.axis([0.,60.,-1.,1.])\npl.ylabel(\"Relative flux [ppt]\")\npl.xlabel(\"Time [days]\")\npl.show()\n\nimport corner\n\n# Plot it.\nfigure = corner.corner(samples, labels=[r\"$B$\", r\"$lnC$\", r\"$L$\", r\"$P$\"],\n quantiles=[0.16,0.5,0.84],\n #levels=[0.39,0.86,0.99],\n levels=[0.68,0.95,0.99],\n title=\"KIC 1430163\",\n show_titles=True, title_args={\"fontsize\": 12})", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DTOcean/dtocean-core
notebooks/DTOcean Floating Wave Scenario Analysis.ipynb
gpl-3.0
[ "Floating Wave Scenario Analysis", "%matplotlib inline\n\nfrom IPython.display import display, HTML\n\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (14.0, 8.0)\n\nimport numpy as np\nfrom datetime import datetime\n\nfrom dtocean_core import start_logging\nfrom dtocean_core.core import Core\nfrom dtocean_core.menu import DataMenu, ModuleMenu, ProjectMenu, ThemeMenu\nfrom dtocean_core.pipeline import Tree, _get_connector\nfrom dtocean_core.extensions import StrategyManager\n\n# Bring up the logger\nstart_logging()\n\ndef html_list(x):\n message = \"<ul>\"\n for name in x:\n message += \"<li>{}</li>\".format(name)\n message += \"</ul>\"\n return message\ndef html_dict(x):\n message = \"<ul>\"\n for name, status in x.iteritems():\n message += \"<li>{}: <b>{}</b></li>\".format(name, status)\n message += \"</ul>\"\n return message\ndef html_variable(core, project, variable):\n value = variable.get_value(core, project)\n metadata = variable.get_metadata(core)\n name = metadata.title\n units = metadata.units\n message = \"<b>{}:</b> {}\".format(name, value)\n if units:\n message += \" ({})\".format(units[0])\n return message", "Create the core, menus and pipeline tree\nThe core object carrys all the system information and is operated on by the other classes", "new_core = Core()\nproject_menu = ProjectMenu()\nmodule_menu = ModuleMenu()\ntheme_menu = ThemeMenu()\ndata_menu = DataMenu()\npipe_tree = Tree()", "Create a new project and tree", "project_title = \"DTOcean\" \nnew_project = project_menu.new_project(new_core, project_title)", "Set the device type", "options_branch = pipe_tree.get_branch(new_core, new_project, \"System Type Selection\")\nvariable_id = \"device.system_type\"\nmy_var = options_branch.get_input_variable(new_core, new_project, variable_id)\nmy_var.set_raw_interface(new_core, \"Wave Floating\")\nmy_var.read(new_core, new_project)", "Initiate the pipeline\nThis step will be important when the database is incorporated into the system as it will effect the operation of the pipeline.", "project_menu.initiate_pipeline(new_core, new_project)", "Discover available modules", "names = module_menu.get_available(new_core, new_project)\nmessage = html_list(names)\nHTML(message)", "Activate some modules\nNote that the order of activation is important and that we can't deactivate yet!", "module_menu.activate(new_core, new_project, 'Hydrodynamics')\nmodule_menu.activate(new_core, new_project, 'Electrical Sub-Systems')\nmodule_menu.activate(new_core, new_project, 'Mooring and Foundations')", "Activate the Economics and Reliability themes", "names = theme_menu.get_available(new_core, new_project)\nmessage = html_list(names)\nHTML(message)\n\ntheme_menu.activate(new_core, new_project, \"Economics\")\n\n# Here we are expecting Hydrodynamics\nassert _get_connector(new_project, \"modules\").get_current_interface_name(new_core, new_project) == \"Hydrodynamics\"\n\nfrom aneris.utilities.analysis import get_variable_network, count_atomic_variables\n\nreq_inputs, opt_inputs, outputs, req_inter, opt_inter = get_variable_network(new_core.control,\n new_project.get_pool(),\n new_project.get_simulation(),\n \"modules\")\n\nreq_inputs[req_inputs.Type==\"Shared\"].reset_index()\n\nshared_req_inputs = req_inputs[req_inputs.Type==\"Shared\"]\nlen(shared_req_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(shared_req_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n 
\"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nopt_inputs[opt_inputs.Type==\"Shared\"].reset_index()\n\nshared_opt_inputs = opt_inputs[opt_inputs.Type==\"Shared\"]\nlen(shared_opt_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(shared_opt_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nreq_inter\n\nlen(req_inter[\"Identifier\"].unique())\n\ncount_atomic_variables(req_inter[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nopt_inter\n\nlen(opt_inter[\"Identifier\"].unique())\n\ncount_atomic_variables(opt_inter[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nhyrdo_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Hydrodynamics']\nlen(hyrdo_req_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(hyrdo_req_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nhyrdo_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Hydrodynamics']\nlen(hyrdo_opt_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(hyrdo_opt_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nelectro_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Electrical Sub-Systems']\nlen(electro_req_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(electro_req_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nelectro_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Electrical Sub-Systems']\nlen(electro_opt_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(electro_opt_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nmoorings_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Mooring and Foundations']\nlen(moorings_req_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(moorings_req_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\nmoorings_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Mooring and Foundations']\nlen(moorings_opt_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(moorings_opt_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])\n\ntotal_req_inputs = req_inputs.loc[req_inputs['Interface'] != 
'Shared']\nlen(total_req_inputs[\"Identifier\"].unique())\n\ncount_atomic_variables(total_req_inputs[\"Identifier\"].unique(),\n new_core.data_catalog,\n \"labels\",\n [\"TableData\",\n \"TableDataColumn\",\n \"IndexTable\",\n \"LineTable\",\n \"LineTableColumn\",\n \"TimeTable\",\n \"TimeTableColumn\"])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cyang019/blight_fight
Final_Report.ipynb
mit
[ "Study of Correlation Between Building Demolition and Associated Features\n\nCapstone Project for Data Science at Scale on Coursera\nRepo is located here\n\nChen Yang yangcnju@gmail.com", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\n%matplotlib inline", "Objective\nBuild a model to make predictions on blighted buildings based on real data from data.detroitmi.gov as given by coursera. \nBuilding demolition is very important for the city to turn around and revive its economy. However, it's no easy task. Accurate predictions can provide guidance on potential blighted buildings and help avoid complications at early stages.\nBuilding List\nThe buildings were defined as described below:\n\nBuilding sizes were estimated using parcel info downloaded here at data.detroitmi.gov. Details can be found in this notebook.\nA event table was constructed from the 4 files (detroit-311.csv, detroit-blight-violations.csv, detroit-crime.csv, and detroit-demolition-permits.tsv) using their coordinates, as shown here.\nBuildings were defined using these coordinates with an estimated building size (median of all parcels). Each building was represented as a same sized rectangle.", "# The resulted buildings:\nImage(\"./data/buildings_distribution.png\")", "Features\nThree kinds (311-calls, blight-violations, and crimes) of incident counts and coordinates (normalized) was used in the end. I also tried to generate more features by differentiating each kind of crimes or each kind of violations in this notebook. However, these differentiated features lead to smaller AUC scores.\nData\n\nThe buildings were down-sampled to contain same number of blighted buildings and non-blighted ones. \nThe ratio between train and test was set at a ratio of 80:20. \nDuring training using xgboost, the train data was further separated into train and evaluation with a ratio of 80:20 for monitoring.\n\nModel\n\nA Gradient Boosted Tree model using Xgboost achieved AUC score of 0.85 on evaluation data set:", "Image('./data/train_process.png')", "This model resulted in an AUC score of 0.858 on test data. Feature importances are shown below:", "Image('./data/feature_f_scores.png')", "Locations were most important features in this model. Although I tried using more features generated by differentiating different kind of crimes or violations, the AUC scores did not improve.\n\nFeature importance can also be viewed using tree representation:", "Image('./data/bst_tree.png')", "To reduce variance of the model, since overfitting was observed during training. I also tried to reduce variance by including in more nonblighted buildings by sampling again multiple times with replacement (bagging).\nA final AUC score of 0.8625 was achieved. The resulted ROC Curve on test data is shown below:", "Image('./data/ROC_Curve_combined.png')", "Discussion\nSeveral things worth trying:\n\nUsing neural net to study more features generated from differentiated crimes or violations if given more time.\nTaken into account possibilities that a building might blight in the future.\n\nThanks for your time reading the report!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DJCordhose/ai
notebooks/2019_tf/embeddings-viz.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/2019_tf/embeddings-viz.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nUnderstanding Latent Neural Spaces\nhttps://www.sfdatainstitute.org/\nExperiments\n\nAdd one airport into manual embedding\nAdd one airport description here and let it train into embedding\n\nMore notebooks\n\nTODO: AE notebooks incl advanced", "import tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\nprint(tf.__version__)", "Challenge: You have a couple of airports and want to bring them into a numerical representation to enable processing with neural networks. How do you do that?", "# https://en.wikipedia.org/wiki/List_of_busiest_airports_by_passenger_traffic\n\nairports = {\n 'HAM': [\"germany europe regional\", 18],\n 'TXL': [\"germany europe regional\", 21],\n 'FRA': [\"germany europe hub\", 70],\n 'MUC': [\"germany europe hub\", 46],\n 'CPH': [\"denmark capital scandinavia europe hub\", 29],\n 'ARN': [\"sweden capital scandinavia europe regional\", 27],\n 'BGO': [\"norway scandinavia europe regional\", 6],\n 'OSL': [\"norway capital scandinavia europe regional\", 29],\n 'LHR': [\"gb capital europe hub\", 80],\n 'CDG': [\"france capital europe hub\", 72],\n 'SFO': [\"usa california regional\", 58],\n 'IAD': [\"usa capital regional\", 21],\n 'AUS': [\"usa texas regional\", 16],\n 'EWR': [\"usa new_jersey hub\", 46],\n 'JFK': [\"usa new_york hub\", 62],\n 'ATL': [\"usa georgia hub\", 110],\n 'STL': [\"usa missouri regional\", 16],\n 'LAX': [\"usa california hub\", 88]\n}\n\nairport_names = list(airports.keys())\nairport_numbers = list(range(0, len(airports)))\nairport_to_number = dict(zip(airport_names, airport_numbers))\nnumber_to_airport = dict(zip(airport_numbers, airport_names))\nairport_descriptions = [value[0] for value in list(airports.values())]\nairport_passengers = [value[1] for value in list(airports.values())]", "Encode Texts in multi-hot frequency", "tokenizer = tf.keras.preprocessing.text.Tokenizer()\ntokenizer.fit_on_texts(airport_descriptions)\ndescription_matrix = tokenizer.texts_to_matrix(airport_descriptions, mode='freq')\n\naiport_count, word_count = description_matrix.shape\ndictionary_size = word_count\naiport_count, word_count\n\nx = airport_numbers\nY = description_matrix", "2d embeddings", "%%time\n\nimport matplotlib.pyplot as plt\n\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.layers import Flatten, GlobalAveragePooling1D, Dense, LSTM, GRU, SimpleRNN, Bidirectional, Embedding\nfrom tensorflow.keras.models import Sequential, Model\n\nfrom tensorflow.keras.initializers import glorot_normal\nseed = 3\n\ninput_dim = len(airports)\nembedding_dim = 2\n\nmodel = Sequential()\n\nmodel.add(Embedding(name='embedding',\n input_dim=input_dim, \n output_dim=embedding_dim, \n input_length=1,\n embeddings_initializer=glorot_normal(seed=seed)))\n\nmodel.add(GlobalAveragePooling1D())\n\nmodel.add(Dense(units=50, activation='relu', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))\n\nmodel.add(Dense(units=dictionary_size, name='output', activation='softmax', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])\n\nEPOCHS=1000\nBATCH_SIZE=2\n\n%time history = model.fit(x, Y, epochs=EPOCHS, batch_size=BATCH_SIZE, 
verbose=0)\n\n\nplt.yscale('log')\nplt.plot(history.history['loss'])\n\nloss, accuracy = model.evaluate(x, Y)\nloss, accuracy\n\nembedding_layer = model.get_layer('embedding')\nembedding_model = Model(inputs=model.input, outputs=embedding_layer.output)\nembeddings_2d = embedding_model.predict(airport_numbers).reshape(-1, 2)\n\n# for printing only\n# plt.figure(dpi=600)\nplt.axis('off')\nplt.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1])\nfor name, x_pos, y_pos in zip(airport_names, embeddings_2d[:, 0], embeddings_2d[:, 1]):\n print(name, (x_pos, y_pos))\n plt.annotate(name, (x_pos, y_pos))", "1d embeddings", "seed = 3\n\ninput_dim = len(airports)\nembedding_dim = 1\n\nmodel = Sequential()\n\nmodel.add(Embedding(name='embedding',\n input_dim=input_dim, \n output_dim=embedding_dim, \n input_length=1,\n embeddings_initializer=glorot_normal(seed=seed)))\n\nmodel.add(GlobalAveragePooling1D())\n\nmodel.add(Dense(units=50, activation='relu', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))\n\nmodel.add(Dense(units=dictionary_size, name='output', activation='softmax', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed)))\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])\n\nEPOCHS=1500\nBATCH_SIZE=2\n\n%time history = model.fit(x, Y, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=0)\n\n\nplt.yscale('log')\nplt.plot(history.history['loss'])\n\nimport numpy as np\n\nembedding_layer = model.get_layer('embedding')\nembedding_model = Model(inputs=model.input, outputs=embedding_layer.output)\nembeddings_1d = embedding_model.predict(airport_numbers).reshape(-1)\n\n# for printing only\n# plt.figure(figsize=(20,5))\n# plt.figure(dpi=600)\nplt.axis('off')\nplt.scatter(embeddings_1d, np.zeros(len(embeddings_1d)))\nfor name, x_pos in zip(airport_names, embeddings_1d):\n print(name, (x_pos, y_pos))\n plt.annotate(name, (x_pos, 0), rotation=80)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.15/_downloads/plot_cwt_sensor_connectivity.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute seed-based time-frequency connectivity in sensor space\nComputes the connectivity between a seed-gradiometer close to the visual cortex\nand all other gradiometers. The connectivity is computed in the time-frequency\ndomain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index\n[1]_ is used as connectivity metric.\n.. [1] Vinck et al. \"An improved index of phase-synchronization for electro-\n physiological data in the presence of volume-conduction, noise and\n sample-size bias\" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011.", "# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne import io\nfrom mne.connectivity import spectral_connectivity, seed_target_indices\nfrom mne.datasets import sample\nfrom mne.time_frequency import AverageTFR\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Add a bad channel\nraw.info['bads'] += ['MEG 2443']\n\n# Pick MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Create epochs for left-visual condition\nevent_id, tmin, tmax = 3, -0.2, 0.5\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),\n preload=True)\n\n# Use 'MEG 2343' as seed\nseed_ch = 'MEG 2343'\npicks_ch_names = [raw.ch_names[i] for i in picks]\n\n# Create seed-target indices for connectivity computation\nseed = picks_ch_names.index(seed_ch)\ntargets = np.arange(len(picks))\nindices = seed_target_indices(seed, targets)\n\n# Define wavelet frequencies and number of cycles\ncwt_freqs = np.arange(7, 30, 2)\ncwt_n_cycles = cwt_freqs / 7.\n\n# Run the connectivity analysis using 2 parallel jobs\nsfreq = raw.info['sfreq'] # the sampling frequency\ncon, freqs, times, _, _ = spectral_connectivity(\n epochs, indices=indices,\n method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,\n cwt_freqs=cwt_freqs, cwt_n_cycles=cwt_n_cycles, n_jobs=1)\n\n# Mark the seed channel with a value of 1.0, so we can see it in the plot\ncon[np.where(indices[1] == seed)] = 1.0\n\n# Show topography of connectivity from seed\ntitle = 'WPLI2 - Visual - Seed %s' % seed_ch\n\nlayout = mne.find_layout(epochs.info, 'meg') # use full layout\n\ntfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))\ntfr.plot_topo(fig_facecolor='w', font_color='k', border='k')" ]
[ "code", "markdown", "code", "markdown", "code" ]
sz2472/foundations-homework
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
mit
[ "import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n!pip3 install xlrd\n\ndf = pd.read_excel(\"richpeople.xlsx\")", "What country are most billionaires from? For the top ones, how many billionaires per billion people?", "recent = df[df['year'] == 2014] #recent is a variable, a variable can be assigned to different things, here it was assigned to a data frame\nrecent.head()\n\nrecent.columns.values", "where are all the billionaires from?", "recent['countrycode'].value_counts() #value_counts counts每个country出现的次数\n\nrecent.sort_values(by='networthusbillion', ascending=False).head(10) #sort_values reorganizes the data basde on the by column", "What's the average wealth of a billionaire? Male? Female?", "recent['networthusbillion'].describe()\n# the average wealth of a billionaire is $3.9 billion\n\nrecent.groupby('gender')['networthusbillion'].describe()#group by is a function, group everything by gender, and show the billionnetworth\n# female mean is 3.920556 billion\n# male mean is 3.902716 billion", "Who is the poorest billionaire? Who are the top 10 poorest billionaires?", "recent.sort_values(by='rank',ascending=False).head(10)", "'What is relationship to company'? And what are the most common relationships?", "recent['relationshiptocompany']\n\nrecent['relationshiptocompany'].describe()\n# the most common relationship to company is founder", "Most common source of wealth? Male vs. female?", "recent['sourceofwealth'].describe()\n# the most common source of wealth is real estate\n\nrecent.groupby('gender')['sourceofwealth'].describe() #describe the content of a given column\n# the most common source of wealth for male is real estate, while for female is diversified", "Given the richest person in a country, what % of the GDP is their wealth?", "recent.sort_values(by='networthusbillion', ascending=False).head(10)['gdpcurrentus']\n\n#From the website, I learned that the GDP for USA in 2014 is $17348 billion \n#from the previous dataframe, I learned that the richest USA billionaire made $76 billion networth\nrichest = 76\nusa_gdp = 17348\npercent = round(richest / usa_gdp * 100,2)\nprint(percent, \"% of the US GDP is his wealth.\")", "Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India", "recent.groupby('countrycode')['networthusbillion'].sum().sort_values(ascending=False)\n# USA is $2322 billion, compared to Russian is $422 billion", "What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?", "recent['sourceofwealth'].describe()\n\nrecent.groupby('sourceofwealth')['networthusbillion'].sum().sort_values(ascending=False)\n\nHow old are billionaires? How old are billionaires self made vs. non self made? or different industries?\nWho are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it?\nMaybe just made a graph about how wealthy they are in general?\nMaybe plot their net worth vs age (scatterplot)\nMake a bar graph of the top 10 or 20 richest", "How many self made billionaires vs. others?", "recent['selfmade'].value_counts()", "How old are billionaires? How old are billionaires self made vs. non self made? or different industries?", "recent.sort_values(by='age',ascending=False).head()\n\ncolumns_want = recent[['name', 'age', 'selfmade','industry']] #[[]]:dataframe\ncolumns_want.head()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mohanprasath/Course-Work
coursera/python_for_data_science/2.1_Tuples.ipynb
gpl-3.0
[ "<a href=\"http://cocl.us/topNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png\" width = 750, align = \"center\"></a>\n<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 300, align = \"center\"></a>\n<h1 align=center><font size = 5>TUPLES IN PYTHON</font></h1>\n\n<a id=\"ref0\"></a>\n<center><h2>About the Dataset</h2></center>\nTable of Contents\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<li><a href=\"#ref0\">About the Dataset</a></li>\n<li><a href=\"#ref1\">Tuples</a></li>\n<li><a href=\"#ref2\">Quiz on Tuples</a></li>\n\n<p></p>\nEstimated Time Needed: <strong>15 min</strong>\n</div>\n\n<hr>\n\nImagine you received album recommendations from your friends and compiled all of the recomendations into a table, with specific information about each album.\nThe table has one row for each movie and several columns:\n\nartist - Name of the artist\nalbum - Name of the album\nreleased_year - Year the album was released\nlength_min_sec - Length of the album (hours,minutes,seconds)\ngenre - Genre of the album\nmusic_recording_sales_millions - Music recording sales (millions in USD) on SONG://DATABASE\nclaimed_sales_millions - Album's claimed sales (millions in USD) on SONG://DATABASE\ndate_released - Date on which the album was released\nsoundtrack - Indicates if the album is the movie soundtrack (Y) or (N)\nrating_of_friends - Indicates the rating from your friends from 1 to 10\n<br>\n<br>\n\nThe dataset can be seen below:\n<font size=\"1\">\n<table font-size:xx-small style=\"width:25%\">\n <tr>\n <th>Artist</th>\n <th>Album</th> \n <th>Released</th>\n <th>Length</th>\n <th>Genre</th> \n <th>Music recording sales (millions)</th>\n <th>Claimed sales (millions)</th>\n <th>Released</th>\n <th>Soundtrack</th>\n <th>Rating (friends)</th>\n </tr>\n <tr>\n <td>Michael Jackson</td>\n <td>Thriller</td> \n <td>1982</td>\n <td>00:42:19</td>\n <td>Pop, rock, R&B</td>\n <td>46</td>\n <td>65</td>\n <td>30-Nov-82</td>\n <td></td>\n <td>10.0</td>\n </tr>\n <tr>\n <td>AC/DC</td>\n <td>Back in Black</td> \n <td>1980</td>\n <td>00:42:11</td>\n <td>Hard rock</td>\n <td>26.1</td>\n <td>50</td>\n <td>25-Jul-80</td>\n <td></td>\n <td>8.5</td>\n </tr>\n <tr>\n <td>Pink Floyd</td>\n <td>The Dark Side of the Moon</td> \n <td>1973</td>\n <td>00:42:49</td>\n <td>Progressive rock</td>\n <td>24.2</td>\n <td>45</td>\n <td>01-Mar-73</td>\n <td></td>\n <td>9.5</td>\n </tr>\n <tr>\n <td>Whitney Houston</td>\n <td>The Bodyguard</td> \n <td>1992</td>\n <td>00:57:44</td>\n <td>Soundtrack/R&B, soul, pop</td>\n <td>26.1</td>\n <td>50</td>\n <td>25-Jul-80</td>\n <td>Y</td>\n <td>7.0</td>\n </tr>\n <tr>\n <td>Meat Loaf</td>\n <td>Bat Out of Hell</td> \n <td>1977</td>\n <td>00:46:33</td>\n <td>Hard rock, progressive rock</td>\n <td>20.6</td>\n <td>43</td>\n <td>21-Oct-77</td>\n <td></td>\n <td>7.0</td>\n </tr>\n <tr>\n <td>Eagles</td>\n <td>Their Greatest Hits (1971-1975)</td> \n <td>1976</td>\n <td>00:43:08</td>\n <td>Rock, soft rock, folk rock</td>\n <td>32.2</td>\n <td>42</td>\n <td>17-Feb-76</td>\n <td></td>\n <td>9.5</td>\n </tr>\n <tr>\n <td>Bee Gees</td>\n <td>Saturday Night Fever</td> \n <td>1977</td>\n <td>1:15:54</td>\n <td>Disco</td>\n <td>20.6</td>\n <td>40</td>\n <td>15-Nov-77</td>\n <td>Y</td>\n <td>9.0</td>\n </tr>\n <tr>\n <td>Fleetwood Mac</td>\n <td>Rumours</td> \n <td>1977</td>\n <td>00:40:01</td>\n <td>Soft rock</td>\n <td>27.9</td>\n 
<td>40</td>\n <td>04-Feb-77</td>\n <td></td>\n <td>9.5</td>\n </tr>\n</table>\n</font>\n<hr>\n\n<a id=\"ref1\"></a>\n<center><h2>Tuples</h2></center>\nIn Python, there are different data types: string, integer and float. These data types can all be contained in a tuple as follows:\n<img src = \"https://ibm.box.com/shared/static/t2jw5ia78ulp8twr71j6q7055hykz10c.png\" width = 750, align = \"center\"></a>", "tuple1=(\"disco\",10,1.2 )\ntuple1", "The type of variable is a tuple.", " type(tuple1)", "Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number:\n<img src = \"https://ibm.box.com/shared/static/83kpang0opwen5e5gbwck6ktqw7btwoe.gif\" width = 750, align = \"center\"></a>\nWe can print out each value in the tuple:", "print( tuple1[0])\nprint( tuple1[1])\nprint( tuple1[2])", "We can print out the type of each value in the tuple:", "print( type(tuple1[0]))\nprint( type(tuple1[1]))\nprint( type(tuple1[2]))", "We can also use negative indexing. We use the same table above with corresponding negative values:\n<img src = \"https://ibm.box.com/shared/static/uwlfzo367bekwg0p5s5odxlz7vhpojyj.png\" width = 750, align = \"center\"></a>\nWe can obtain the last element as follows (this time we will not use the print statement to display the values):", "tuple1[-1]", "We can display the next two elements as follows:", "tuple1[-2]\n\ntuple1[-3]", "We can concatenate or combine tuples by using the + sign:", "tuple2=tuple1+(\"hard rock\", 10)\ntuple2", "We can slice tuples obtaining multiple values as demonstrated by the figure below:\n<img src = \"https://ibm.box.com/shared/static/s9nofy728bcnsgnx3vh159bu16w7frnc.gif\" width = 750, align = \"center\"></a>\nWe can slice tuples, obtaining new tuples with the corresponding elements:", "tuple2[0:3]", "We can obtain the last two elements of the tuple:", "tuple2[3:5]", "We can obtain the length of a tuple using the length command:", "len(tuple2)", "This figure shows the number of elements:\n<img src = \"https://ibm.box.com/shared/static/apxe8l3w42f597yjhizg305merlm4ijf.png\" width = 750, align = \"center\"></a>\nConsider the following tuple:", "Ratings =(0,9,6,5,10,8,9,6,2)", "We can assign the tuple to a 2nd variable:", "Ratings1=Ratings\nRatings", "We can sort the values in a tuple and save it to a new tuple:", "RatingsSorted=sorted(Ratings )\nRatingsSorted", "A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. 
Consider the following tuple with several elements:", "NestedT =(1, 2, (\"pop\", \"rock\") ,(3,4),(\"disco\",(1,2)))", "Each element in the tuple including other tuples can be obtained via an index as shown in the figure:\n<img src = \"https://ibm.box.com/shared/static/estqe2bczv5weocc4ag4mx9dtqy952fp.png\" width = 750, align = \"center\"></a>", "print(\"Element 0 of Tuple: \", NestedT[0])\nprint(\"Element 1 of Tuple: \", NestedT[1])\nprint(\"Element 2 of Tuple: \", NestedT[2])\nprint(\"Element 3 of Tuple: \", NestedT[3])\nprint(\"Element 4 of Tuple: \", NestedT[4])", "We can use the second index to access other tuples as demonstrated in the figure:\n<img src = \"https://ibm.box.com/shared/static/j1orgjuasaaj3d0feymedrnoqv8trqyo.png\" width = 750, align = \"center\"></a>\nWe can access the nested tuples :", "print(\"Element 2,0 of Tuple: \", NestedT[2][0])\nprint(\"Element 2,1 of Tuple: \", NestedT[2][1])\nprint(\"Element 3,0 of Tuple: \", NestedT[3][0])\nprint(\"Element 3,1 of Tuple: \", NestedT[3][1])\nprint(\"Element 4,0 of Tuple: \", NestedT[4][0])\nprint(\"Element 4,1 of Tuple: \", NestedT[4][1])", "We can access strings in the second nested tuples using a third index:", "NestedT[2][1][0]\n\n NestedT[2][1][1]", "We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree:\n<img src ='https://ibm.box.com/shared/static/vjvsygpzpwcr6czsucgno1wukyhk5vxq.gif' width = 750, align = \"center\"></a>\nSimilarly, we can access elements nested deeper in the tree with a fourth index:", "NestedT[4][1][0]\n\nNestedT[4][1][1]", "The following figure shows the relationship of the tree and the element NestedT[4][1][1]:\n<img src ='https://ibm.box.com/shared/static/9y5s7515zwzc9v6i4f67yj3np2fv9evs.gif'width = 750, align = \"center\"></a>\n<a id=\"ref2\"></a>\n<h2 align=center> Quiz on Tuples </h2>\n\nConsider the following tuple:", "genres_tuple = (\"pop\", \"rock\", \"soul\", \"hard rock\", \"soft rock\", \\\n \"R&B\", \"progressive rock\", \"disco\") \ngenres_tuple", "Find the length of the tuple, \"genres_tuple\":", "len(genres_tuple)", "<div align=\"right\">\n<a href=\"#String1\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n\n<div id=\"String1\" class=\"collapse\">\n\n\"len(genres_tuple)\"\n <a ><img src = \"https://ibm.box.com/shared/static/n4969qbta8hhsycs2dc4n8jqbf062wdw.png\" width = 1100, align = \"center\"></a>\n```\n\n\n```\n</div>\n\nAccess the element, with respect to index 3:", "genres_tuple[3]", "<div align=\"right\">\n<a href=\"#2\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n\n<div id=\"2\" class=\"collapse\">\n\n\n <a ><img src = \"https://ibm.box.com/shared/static/s6r8v2uy6wifmaqv53w6adabqci47zme.png\" width = 1100, align = \"center\"></a>\n\n</div>\n\nUse slicing to obtain indexes 3, 4 and 5:", "genres_tuple[3:6]", "<div align=\"right\">\n<a href=\"#3\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n\n<div id=\"3\" class=\"collapse\">\n\n\n <a ><img src = \"https://ibm.box.com/shared/static/nqo84vydw6eixdex0trybuvactcw7ffi.png\" width = 1100, align = \"center\"></a>\n\n</div>\n\nFind the first two elements of the tuple \"genres_tuple\":", "genres_tuple[:2]", "<div align=\"right\">\n<a href=\"#q5\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"q5\" class=\"collapse\">\n```\ngenres_tuple[0:2]\n\n```\n\n\n#### Find the first index of 'disco':", 
"genres_tuple.index(\"disco\")", "<div align=\"right\">\n<a href=\"#q6\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"q6\" class=\"collapse\">\n```\ngenres_tuple.index(\"disco\") \n\n```\n\n<hr>\n\n#### Generate a sorted List from the Tuple C_tuple=(-5,1,-3):", "C_tuple=sorted((-5, 1, -3))\nC_tuple", "<div align=\"right\">\n<a href=\"#q7\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n</div>\n<div id=\"q7\" class=\"collapse\">\n```\nC_tuple = (-5,1,-3)\nC_list = sorted(C_tuple)\nC_list\n\n```\n\n <hr></hr>\n<div class=\"alert alert-success alertsuccess\" style=\"margin-top: 20px\">\n<h4> [Tip] Saving the Notebook </h4> \n\nYour notebook saves automatically every two minutes. You can manually save by going to **File** > **Save and Checkpoint**. You can come back to this notebook anytime by clicking this notebook under the \"**Recent Notebooks**\" list on the right-hand side. \n\n\n</div>\n<hr></hr>\n<div class=\"alert alert-success alertsuccess\" style=\"margin-top: 20px\">\n<h4> [Tip] Notebook Features </h4> \n\nDid you know there are other **notebook options**? Click on the **>** symbol to the left of the notebook:\n\n<img src =https://ibm.box.com/shared/static/otu40m0kkzz5hropxah1nnzd2j01itom.png width = 35%>\n\n\n<p></p>\n\n</div>\n<hr></hr>\n\n <a href=\"http://cocl.us/bottemNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png\" width = 750, align = \"center\"></a>\n\n# About the Authors: \n\n [Joseph Santarcangelo]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.\n\n\n <hr>\nCopyright &copy; 2017 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).​" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/cccma/cmip6/models/canesm5/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: CANESM5\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. 
Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. 
Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. 
Advection --&gt; Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momemtum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? 
if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. 
Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. 
Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rainyear/pytips
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
mit
[ "函数调用的参数规则与解包\nPython 的函数在声明参数时大概有下面 4 种形式:\n\n不带默认值的:def func(a): pass\n带有默认值的:def func(a, b = 1): pass\n任意位置参数:def func(a, b = 1, *c): pass\n任意键值参数:def func(a, b = 1, *c, **d): pass\n\n在调用函数时,有两种情况:\n\n没有关键词的参数:func(\"G\", 20)\n带有关键词的参数:func(a = \"G\", b = 20)(其中带有关键词调用可以不考虑顺序:func(b = 20, a = \"G\")\n\n当然,这两种情况是可以混用的:func(\"G\", b = 20),但最重要的一条规则是位置参数不能在关键词参数之后出现:", "def func(a, b = 1):\n pass\nfunc(a = \"G\", 20) # SyntaxError 语法错误", "另外一条规则是:位置参数优先权:", "def func(a, b = 1):\n pass\nfunc(20, a = \"G\") # TypeError 对参数 a 重复赋值", "最保险的方法就是全部采用关键词参数。\n任意参数\n任意参数可以接受任意数量的参数,其中*a的形式代表任意数量的位置参数,**d代表任意数量的关键词参数:", "def concat(*lst, sep = \"/\"):\n return sep.join((str(i) for i in lst))\n\nprint(concat(\"G\", 20, \"@\", \"Hz\", sep = \"\"))", "上面的这个def concat(*lst, sep = \"/\")的语法是PEP 3102提出的,在 Python 3.0 之后实现。这里的关键词函数必须明确指明,不能通过位置推断:", "print(concat(\"G\", 20, \"-\")) # Not G-20", "**d则代表任意数量的关键词参数", "def dconcat(sep = \":\", **dic):\n for k in dic.keys():\n print(\"{}{}{}\".format(k, sep, dic[k]))\n\ndconcat(hello = \"world\", python = \"rocks\", sep = \"~\")", "Unpacking\nPython 3.5 添加的新特性(PEP 448),使得*a、**d可以在函数参数之外使用:", "print(*range(5))\nlst = [0, 1, 2, 3]\nprint(*lst)\n\na = *range(3), # 这里的逗号不能漏掉\nprint(a)\n\nd = {\"hello\": \"world\", \"python\": \"rocks\"}\nprint({**d}[\"python\"])", "所谓的解包(Unpacking)实际上可以看做是去掉()的元组或者是去掉{}的字典。这一语法也提供了一个更加 Pythonic 地合并字典的方法:", "user = {'name': \"Trey\", 'website': \"http://treyhunner.com\"}\ndefaults = {'name': \"Anonymous User\", 'page_name': \"Profile Page\"}\n\nprint({**defaults, **user})", "在函数调用的时候使用这种解包的方法则是 Python 2.7 也可以使用的:", "print(concat(*\"ILovePython\"))", "参考\n\nThe Idiomatic Way to Merge Dictionaries in Python" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
franchenstein/dcgram
dcgram.ipynb
gpl-3.0
[ "DCGraM Algorithm\nThis notebook implements the D-Markov with Clustering and Graph Minimization (DCGraM) Algorithm. Its objective is to model a discrete dynamical system using a Probabilistic Finite State Machine (PFSA). \nGiven a sequence X over the alphabet $\\Sigma$ of length N that is an output of the original dynamical system, DCGraM works by:\n\nCreating a D-Markov model for the original system for a given D;\nUsing a clustering algorithm on the D-Markov model states in order to create an initial partition;\nUsing a graph minimization algorithm to refine the initial partition until the final reduced PFSA is obtained.\n\nInitialization\nFirst, it is necessary to create the directories that store the working files for the current system. The first cell sets the system's name and the tag to be used in the current run. The following cell only has to be ran when creating modeling a new system. A directory is then created with this tag and inside it subdirectories that contain the sequence, PFSA and result files.", "import pandas as pd\nimport yaml\nimport sequenceanalyzer as sa\n#import dmarkov\n\nname = 'ternary_even_shift'\ntag = 'v1'\n\nimport os\nif not os.path.exists(name):\n os.makedirs(name)\n os.makedirs(name + '/sequences')\n os.makedirs(name + '/pfsa')\n os.makedirs(name + '/results')\n os.makedirs(name + '/results/probabilities')\n os.makedirs(name + '/results/probabilities/conditional')\n os.makedirs(name + '/results/cond_entropies')\n os.makedirs(name + '/results/kldivergences')\n os.makedirs(name + '/results/autocorrelations')\n os.makedirs(name + '/results/prob_distances')\n os.makedirs(name + '/results/plots')", "Parameters\nThe next cell initializes the parameters that are used throughout the code. They are listed as:\n\nN: The original sequence length N, which is also the length of the sequences that are going to be generated by the PFSA generated by DCGraM;\ndrange: range of values of D for which D-Markov and DCGraM machines that will be generated;\na: value up to which the autocorrelation is computed.", "N = 10000000\ndrange = range(4,11)\na = 20", "Original Sequence Analysis\nMake sure that the original sequence of length N is stored in the correct directory and run the cell to load it to X. After this, run the cells corresponding to the computation of the subsequence probabilities and the conditional probabilites for the value d_max, which is the last value in drange. 
Additional results can also be computed in the respective cells (autocorrelation and conditional entropy).", "#Open original sequence from yaml file\nwith open(name + '/sequences/original_len_' + str(N) + '_' + tag + '.yaml', 'r') as f:\n X = yaml.load(f)\n \n#Value up to which results are computed\nd_max = drange[-1]\n\n#Initialization of variables:\np = None\np_cond = None\n\n#Compute subsequence probabilities of occurrence up to length d_max\np, alphabet = sa.calc_probs(X, d_max)\nwith open(name + '/results/probabilities/original_' + tag + '.yaml', 'w') as f:\n yaml.dump(p, f)\nwith open(name + '/alphabet.yaml', 'w') as f:\n yaml.dump(alphabet, f)\n\n#If p has been previously computed, use this cell to load the values\nif not p:\n with open(name + '/results/probabilities/original_' + tag + '.yaml', 'r') as f:\n p = yaml.load(f)\n with open(name + '/alphabet.yaml', 'r') as f:\n alphabet = yaml.load(f)\n\n#Compute conditional probabilities of subsequences occurring after given each symbol of the alphabet\n#One of the two previous cells needs to be executed first.\nif p:\n p_cond = sa.calc_cond_probs(p, alphabet, d_max - 1) \n with open(name + '/results/probabilities/conditional/original_' + tag + '.yaml', 'w') as f:\n yaml.dump(p_cond, f)\nelse:\n print(\"Run a cell that either computes or opens the probabilities.\")\n\n#If p_cond has been previously computed, use this cell to load the values\nif not p_cond:\n with open(name + '/results/probabilities/conditional/original_' + tag + '.yaml', 'r') as f:\n p_cond = yaml.load(f)\n\n#Compute conditional entropy\nif p and p_cond:\n h = sa.calc_cond_entropy(p, p_cond, d_max)\n h.to_csv(name + '/results/cond_entropies/original_' + tag + '.csv')\nelse:\n print(\"Run the conditional probabilities cell first.\")\n\n#If p_cond has been previously computed, use this cell to load the values\nif not h:\n h = pd.read_csv(name + '/results/cond_entropies/original_' + tag + '.csv')\n\n#Compute autocorrelation\naut = sa.calc_autocorr(X, a)\naut.to_csv(name + '/results/autocorrelations/original_' + tag + '.csv')\n\n#If aut has been previously computed, use this cell to load the values\nif not aut:\n aut = pd.read_csv(name + '/results/autocorrelations/original_' + tag + '.csv')", "D-Markov Machines\nThe next step of DCGraM consists of generating D-Markov Machines for each value of D in drange defined above. The values of p_cond for each of these values is then needed, so it is necessary to compute it above. A D-Markov Machine is a PFSA with $|\\Sigma|^D$ states, each one labeled with one of the subsquences of length $D$. Given a state $\\omega = \\sigma_1\\sigma_2\\ldots\\sigma_D$, for each $\\sigma \\in \\Sigma$, it transitions to the state $\\sigma_2\\sigma_3\\ldots\\sigma_D\\sigma$ with probability $\\Pr(\\sigma|\\omega)$. This is done for all states in the D-Markov machine.", "dmark_machines = []\n\n#If the D-Markov machines have not been previously created, generate them with this cell\nfor D in list(map(str,drange)):\n dmark_machines.append(dmarkov.create(p_cond, D))\n dmark_machines[-1].to_csv(name + '/pfsa/dmarkov_D' + D + '_' + tag + '.csv')\n\n#On the other hand, if there already are D-Markov machines, load them with this cell\nif not dmark_machines:\n for D in drange:\n dmark_machines.append(pd.read_csv(name + '/pfsa/dmarkov_D' + D + '_' + tag + '.csv'))", "D-Markov Machine Analysis\nFirst of all, sequences should be generated from the D-Markov Machines. 
The same parameters computed in the analysis of the original sequence should be computed for the D-Markov Machines' sequences. Besides those parameters, the Kullback-Leibler Divergence and Distribution Distance between these sequences and the original sequence.", "dmark_seqs = []\n\n#Generate sequences:\ncount = 0\nfor machine in dmark_machines:\n seq = machine.generate_sequence(N)\n with open(name + '/sequences/dmarkov_D' + str(drange[count]) + '_' + tag + '.yaml', 'w') as f:\n yaml.dump(seq, f)\n dmark_seqs.append(seq)\n count += 1\n\n#If the sequences have been previously generated, load them here:\nif not dmark_seqs:\n for D in list(map(str,drange)):\n with open(name + '/sequences/dmarkov_D' + D + '_' + tag + '.yaml', 'w') as f:\n dmark_seqs.append(yaml.load(f))\n\n#Compute subsequence probabilities of occurrence of the D-Markov sequences\ncount = 0\np_dmark = []\nfor seq in dmark_seqs:\n p_dm, alphabet = sa.calc_probs(seq, d_max)\n p_dm.to_csv(name + '/results/probabilities/dmarkov_D'+ str(drange[count]) + '_' + tag + '.csv')\n p_dmark.append(p_dm)\n count += 1\n\n#If p_dmark has been previously computed, use this cell to load the values\nif not p_dmark:\n for D in list(map(str,drange)):\n p_dm = pd.read_csv(name + '/results/probabilities/dmarkov_D' + D + '_' + tag + '.csv')\n p_dmark.append(p_dm)\n with open(name + '/alphabet.yaml', 'r') as f:\n alphabet = yaml.load(f)\n\n#Compute conditional probabilities of subsequences occurring after given each symbol of the alphabet\n#One of the two previous cells needs to be executed first.\np_cond_dmark = []\ncount = 0\nif p_dmark:\n for p_dm in p_dmark:\n p_cond_dm = sa.calc_cond_probs(p_dm, alphabet, d_max) \n p_cond_dm.to_csv(name + '/results/probabilities/conditional/dmarkov_D' + str(drange[count]) + '_' + tag + '.csv')\n p_cond_dmark.append(p_cond_dm)\n count += 1\nelse:\n print(\"Run a cell that either computes or opens the probabilities.\")\n\n#If p_cond has been previously computed, use this cell to load the values\nif not p_cond_dmark:\n for D in list(map(str,drange)):\n p_cond_dmark.append(pd.read_csv(name + '/results/probabilities/conditional/dmarkov_D' + D + '_' + tag + '.csv'))\n\n#Compute conditional entropy\ncount = 0\nh_dmark = []\nif p_dmark and p_cond_dmark:\n for p_dm in p_dmark:\n h_dm = sa.calc_cond_entropy(p_dm, p_cond_dmark[count], d_max)\n h_dm.to_csv(name + '/results/cond_entropies/dmarkov_D' + str(drange[count]) + '_' + tag + '.csv')\n h_dmark.append(h_dm)\n count += 1\nelse:\n print(\"Run the conditional probabilities cell first.\")\n\n#If h_dmark has been previously computed, use this cell to load the values\nif not h_dmark:\n for D in list(map(str,drange)):\n h_dmark.append(pd.read_csv(name + '/results/cond_entropies/dmarkov_D' + D + '_' + tag + '.csv'))\n\n#Compute autocorrelation\naut_dmark = []\ncount = 0\nfor dseq in dmark_seqs:\n aut_dm = sa.calc_autocorr(dseq, a)\n aut_dm.to_csv(name + '/results/autocorrelations/dmarkov_D' + str(drange[count]) + '_' + tag + '.csv')\n aut_dmark.append(aut_dm)\n count += 1\n\n#If aut has been previously computed, use this cell to load the values\nif not aut_dmark:\n for D in list(map(str,drange)):\n aut_dmark.append(pd.read_csv(name + '/results/autocorrelations/dmarkov_D' + D + '_' + tag + '.csv'))\n\n#Compute the Kullback-Leibler Divergence between the sequences generated by the D-Markov Machines and the original\n#sequence.\nkld_dmark = []\nfor dseq in dmark_seqs:\n kld_dm = sa.calc_kld(dseq, X, d_max)\n kld_dmark.append(kld_dm)\n \nkld_dmark.to_csv(name + 
'/results/kldivergences/dmarkov_' + tag + '.csv')\n\n#If the D-Markov Kullback-Leibler divergence has been previously computed, use this cell to load the values\nif not kld_dmark:\n kld_dmark = pd.read_csv(name + '/results/kldivergences/dmarkov_' + tag + '.csv')\n\n#Compute the Probability Distances between the sequences generated by the D-Markov Machines and the original\n#sequence.\npdist_dmark = []\nfor p_dm in p_dmark:\n pdist_dm = sa.calc_pdist(p_dm, p, d_max)\n pdist_dmark.append(pdist_dm)\n \npd.Series(pdist_dmark).to_csv(name + '/results/prob_distances/dmarkov_' + tag + '.csv')\n\n#If the Probability Distances of the D-Markov Machines have been previously computed, load them with this cell.\nif not pdist_dmark:\n pdist_dmark = pd.read_csv(name + '/results/prob_distances/dmarkov_' + tag + '.csv')", "Clustering\nNow that we have obtained the D-Markov Machines, the next step of DCGraM is to cluster the states of these machines. For a given D-Markov Machine G$_D$, its states $q$ are considered points in a $\Sigma$-dimensional space, in which each dimension is labeled with a symbol $\sigma$ from the alphabet and the position of the state $q$ in this dimension is its probability of transitioning with this symbol. These point-states are then clustered into $K$ clusters using a variation of the K-Means clustering algorithm that, instead of using a Euclidean distance between points, uses the Kullback-Leibler Divergence between each point-state and the cluster centroids.", "clustered = []\nK = 4\nfor machine in dmark_machines:\n clustered.append(clustering.kmeans_kld(machine, K))", "Graph Minimization\nOnce the states of the D-Markov Machines are clustered, these clusterings are then used as initial partitions of the D-Markov Machines' states. To these machines and initial partitions, a graph minimization algorithm (in the current version, only Moore) is applied in order to obtain a final reduced PFSA, the DCGraM PFSA.", "dcgram_machines = []\nfor ini_part in clustered:\n #Minimize each initial partition (the loop variable), not the whole list\n dcgram_machines.append(graphmin.moore(ini_part))", "DCGraM Analysis\nNow that the DCGraM machines have been generated, the same analysis done for the D-Markov Machines is used for them. 
Sequences are generated for each of the DCGraM machines and afterwards all of the analysis is applied to them so the comparison can be made between regular D-Markov and DCGraM.", "dcgram_seqs = []\n\n#Generate sequences:\ncount = 0\nfor machine in dcgram_machines:\n seq = machine.generate_sequence(N)\n with open(name + '/sequences/dcgram_D' + str(drange[count]) + '_' + tag + '.yaml', 'w') as f:\n yaml.dump(seq, f)\n dcgram_seqs.append(seq)\n count += 1\n\n#If the sequences have been previously generated, load them here:\nif not dcgram_seqs:\n for D in list(map(str,drange)):\n with open(name + '/sequences/dcgram_D' + D + '_' + tag + '.yaml', 'w') as f:\n dcgram_seqs.append(yaml.load(f))\n\n#Compute subsequence probabilities of occurrence of the DCGraM sequences\ncount = 0\np_dcgram = []\nfor seq in dcgram_seqs:\n p_dc, alphabet = sa.calc_probs(seq, d_max)\n p_dc.to_csv(name + '/results/probabilities/dcgram_D'+ str(drange[count]) + '_' + tag + '.csv')\n p_dcgram.append(p_dc)\n count += 1\n\n#If p_dcgram has been previously computed, use this cell to load the values\nif not p_dcgram:\n for D in list(map(str,drange)):\n p_dc = pd.read_csv(name + '/results/probabilities/dcgram_D' + D + '_' + tag + '.csv')\n p_dcgram.append(p_dm)\n with open(name + '/alphabet.yaml', 'r') as f:\n alphabet = yaml.load(f)\n\n#Compute conditional probabilities of subsequences occurring after given each symbol of the alphabet\n#One of the two previous cells needs to be executed first.\np_cond_dcgram = []\ncount = 0\nif p_dcgram:\n for p_dc in p_dcgram:\n p_cond_dc = sa.calc_cond_probs(p_dc, alphabet, d_max) \n p_cond_dc.to_csv(name + '/results/probabilities/conditional/dcgram_D' + str(drange[count]) + '_' + tag + '.csv')\n p_cond_dcgram.append(p_cond_dc)\n count += 1\nelse:\n print(\"Run a cell that either computes or opens the probabilities.\")\n\n#If p_cond_dcgram has been previously computed, use this cell to load the values\nif not p_cond_dcgram:\n for D in list(map(str,drange)):\n p_cond_dcgram.append(pd.read_csv(name + '/results/probabilities/conditional/dcgram_D' + D + '_' + tag + '.csv'))\n\n#Compute conditional entropy\ncount = 0\nh_dcgram = []\nif p_dcgram and p_cond_dcgram:\n for p_dc in p_dcgram:\n h_dc = sa.calc_cond_entropy(p_dc, p_cond_dcgram[count], d_max)\n h_dc.to_csv(name + '/results/cond_entropies/dcgram_D' + str(drange[count]) + '_' + tag + '.csv')\n h_dcgram.append(h_dc)\n count += 1\nelse:\n print(\"Run the conditional probabilities cell first.\")\n\n#If h_dcgram has been previously computed, use this cell to load the values\nif not h_dcgram:\n for D in list(map(str,drange)):\n h_dcgram.append(pd.read_csv(name + '/results/cond_entropies/dcgram_D' + D + '_' + tag + '.csv'))\n\n#Compute autocorrelation\naut_dcgram = []\ncount = 0\nfor dcseq in dcgram_seqs:\n aut_dc = sa.calc_autocorr(dcseq, a)\n aut_dc.to_csv(name + '/results/autocorrelations/dcgram_D' + str(drange[count]) + '_' + tag + '.csv')\n aut_dcgram.append(aut_dc)\n count += 1\n\n#If aut has been previously computed, use this cell to load the values\nif not aut_dcgram:\n for D in list(map(str,drange)):\n aut_dmark.append(pd.read_csv(name + '/results/autocorrelations/dcgram_D' + D + '_' + tag + '.csv'))\n\n#Compute the Kullback-Leibler Divergence between the sequences generated by the DCGraM Machines and the original\n#sequence.\nkld_dcgram = []\nfor dcseq in dcgram_seqs:\n kld_dc = sa.calc_kld(dcseq, X, d_max)\n kld_dcgram.append(kld_dc)\n \nkld_dcgram.to_csv(name + '/results/kldivergences/dcgram_' + tag + '.csv')\n\n#If the DCGraM 
Kullback-Leibler divergence has been previously computed, use this cell to load the values\nif not kld_dcgram:\n kld_dcgram = pd.read_csv(name + '/results/kldivergences/dcgram_' + tag + '.csv')\n\n#Compute the Probability Distances between the sequences generated by the DCGraM Machines and the original\n#sequence.\npdist_dcgram = []\nfor p_dc in p_dcgram:\n pdist_dc = sa.calc_pdist(p_dc, p, d_max)\n pdist_dcgram.append(pdist_dc)\n \npdist_dcgram.to_csv(name + '/results/prob_distances/dcgram_' + tag + '.csv')\n\n#If the Probability Distances of the DCGraM Machines have been previously computed, load them with this cell.\nif not pdist_dcgram:\n pdist_dcgram = pd.read_csv(name + '/results/prob_distances/dcgram_' + tag + '.csv')", "Plots\nOnce all analysis have been made, the plots of each of those parameters is created to visualize the performance. The plots have the x-axis representing the number of states of each PFSA and the y-axis represents the parameters being observed. There are always two curves: one for the DCGraM machines and one for the D-Markov Machines. Each point in these curves represents a machine of that type for a certain value of $D$. The further right a point is in the curve, the higher its $D$-value. On the curve for conditional entropy there is also a black representing the original sequence's conditional entropy for the $L$ being used as a baseline.", "#initialization\nimport matplotlib.pyplot as plt\n\n#Labels to be used in the plots' legends\nlabels = ['D-Markov Machines, D from ' + str(drange[0]) + ' to ' + str(d_max),\n 'DCGraM Machines, D from ' + str(drange[0]) + ' to ' + str(d_max),\n 'Original Sequence Baseline']\n\n#Obtaining number of states of the machines to be used in the x-axis:\nstates_dmarkov = []\nfor dm in dmark_machines:\n states_dmarkov.append(dm.shape[0])\n \nstates_dcgram = []\nfor dc in dcgram_machines:\n states_dcgram.append(dc.shape[0])\n \nstates = [states_dmarkov, states_dcgram]\n\n#Conditional Entropy plots\n\nH = 10\n\nh_dmark_curve = []\nfor h_dm in h_dmarkov:\n h_dmark_curve.append(h_dm[H])\nplt.semilogx(states[0], h_dmark_curve, marker='o', label=labels[0])\n \nh_dcgram_curve = []\nfor h_dc in h_dcgram:\n h_dcgram_curve.append(h_dc[H])\nplt.semilogx(states[1], h_dcgram_curve, marker='x', label=labels[1])\n \n\n#Opening original sequence baseline:\nh_base = h[H]\nplt.axhline(y=h_base, color='k', linewidth = 3, label=labels[2])\n\nplt.xlabel('Number of States', fontsize=16)\nplt.yalbel('$h_' + str(H) + '$', fontsize=16)\nplt.legend(loc='upper right', shadow=False, fontsize='large')\nplt.title('Conditional Entropy',fontsize=18,weight='bold')\nplt.savefig(name + '/plots/conditional_entropy_' + tag + '.eps' , bbox_inches='tight', format='eps',dpi=1000)\nplt.show()\n\n#Kullback-Leibler plots\n\nplt.semilogx(states[0], kld_dmark, marker='o', label=labels[0])\nplt.semilogx(states[1], kld_dcgram, marker='x', label=labels[1])\n\nplt.xlabel('Number of States', fontsize=16)\nplt.yalbel('$k_' + str(H) + '$', fontsize=16)\nplt.legend(loc='upper right', shadow=False, fontsize='large')\nplt.title('Kullback-Leibler Divergence',fontsize=18,weight='bold')\nplt.savefig(name + '/plots/kldivergence_' + tag + '.eps' , bbox_inches='tight', format='eps',dpi=1000)\nplt.show()\n\n#Probability Distance plots\n\nplt.semilogx(states[0], pdist_dmark, marker='o', label=labels[0])\nplt.semilogx(states[1], pdist_dcgram, marker='x', label=labels[1])\n\nplt.xlabel('Number of States', fontsize=16)\nplt.yalbel('$P_' + str(H) + '$', fontsize=16)\nplt.legend(loc='upper right', 
shadow=False, fontsize='large')\nplt.title('Probability Distance',fontsize=18,weight='bold')\nplt.savefig(name + '/plots/prob_distance_' + tag + '.eps' , bbox_inches='tight', format='eps',dpi=1000)\nplt.show()\n\n#TODO: Think how to have good plots for autocorrelation" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WNoxchi/Kaukasos
FAI_old/lesson1/dogs_cats_redux.ipynb
mit
[ "Dogs vs Cat Redux\nIn this tutorial, you will learn how generate and submit predictions to a Kaggle competiton\nDogs vs. Cats Redux: Kernels Edition\nTo start you will need to download and unzip the competition data from Kaggle and ensure your directory structure looks like this\nutils/\n vgg16.py\n utils.py\nlesson1/\n redux.ipynb\n data/\n redux/\n train/\n cat.437.jpg\n dog.9924.jpg\n cat.1029.jpg\n dog.4374.jpg\n test/\n 231.jpg\n 325.jpg\n 1235.jpg\n 9923.jpg\nYou can download the data files from the competition page here or you can download them from the command line using the Kaggle CLI.\nYou should launch your notebook inside the lesson1 directory\ncd lesson1\njupyter notebook", "#Verify we are in the lesson1 directory\n%pwd\n\n#Create references to important directories we will use over and over\nimport os, sys\ncurrent_dir = os.getcwd()\nLESSON_HOME_DIR = current_dir\n# DATA_HOME_DIR = current_dir+'/data/redux'\nDATA_HOME_DIR = current_dir+'/data'\n\n#Allow relative imports to directories above lesson1/\n# sys.path.insert(1, os.path.join(sys.path[0], '..'))\nsys.path.insert(1, os.path.join(LESSON_HOME_DIR, '../utils'))\n\n#import modules\nfrom utils import *\nfrom vgg16 import Vgg16\n\n#Instantiate plotting tool\n#In Jupyter notebooks, you will need to run this command before doing any plotting\n%matplotlib inline", "Action Plan\n\nCreate Validation and Sample sets\nRearrange image files into their respective directories \nFinetune and Train model\nGenerate predictions\nValidate predictions\nSubmit predictions to Kaggle\n\nCreate validation set and sample", "#Create directories\n%cd $DATA_HOME_DIR\n%mkdir valid\n%mkdir results\n%mkdir -p sample/train\n%mkdir -p sample/test\n%mkdir -p sample/valid\n%mkdir -p sample/results\n%mkdir -p test/unknown\n\n%cd $DATA_HOME_DIR/train\n\ng = glob('*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(2000): os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])\n\nfrom shutil import copyfile\n\ng = glob('*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(200): copyfile(shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i])\n\n%cd $DATA_HOME_DIR/valid\n\ng = glob('*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(50): copyfile(shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i])", "Rearrange image files into their respective directories", "#Divide cat/dog images into separate directories\n\n%cd $DATA_HOME_DIR/sample/train\n%mkdir cats\n%mkdir dogs\n%mv cat.*.jpg cats/\n%mv dog.*.jpg dogs/\n\n%cd $DATA_HOME_DIR/sample/valid\n%mkdir cats\n%mkdir dogs\n%mv cat.*.jpg cats/\n%mv dog.*.jpg dogs/\n\n%cd $DATA_HOME_DIR/valid\n%mkdir cats\n%mkdir dogs\n%mv cat.*.jpg cats/\n%mv dog.*.jpg dogs/\n\n%cd $DATA_HOME_DIR/train\n%mkdir cats\n%mkdir dogs\n%mv cat.*.jpg cats/\n%mv dog.*.jpg dogs/\n\n# Create single 'unknown' class for test set\n%cd $DATA_HOME_DIR/test\n%mv *.jpg unknown/", "Finetuning and Training", "%cd $DATA_HOME_DIR\n\n#Set path to sample/ path if desired\npath = DATA_HOME_DIR + '/' #'/sample/'\ntest_path = DATA_HOME_DIR + '/test/' #We use all the test data\nresults_path=DATA_HOME_DIR + '/results/'\ntrain_path=path + '/train/'\nvalid_path=path + '/valid/'\n\n#import Vgg16 helper class\nvgg = Vgg16()\n\n#Set constants. 
You can experiment with no_of_epochs to improve the model\nbatch_size=64\nno_of_epochs=3\n\n#Finetune the model\nbatches = vgg.get_batches(train_path, batch_size=batch_size)\nval_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)\nvgg.finetune(batches)\n\n#Not sure if we set this for all fits\nvgg.model.optimizer.lr = 0.01\n\n#Notice we are passing in the validation dataset to the fit() method\n#For each epoch we test our model against the validation set\nlatest_weights_filename = None\nfor epoch in range(no_of_epochs):\n print \"Running epoch: %d\" % epoch\n vgg.fit(batches, val_batches, nb_epoch=1)\n# latest_weights_filename = 'ft%d.h5' % epoch\n# vgg.model.save_weights(results_path+latest_weights_filename)\nprint \"Completed %s fit operations\" % no_of_epochs", "Generate Predictions\nLet's use our new model to make predictions on the test dataset", "batches, preds = vgg.test(test_path, batch_size = batch_size*2)\n\n#For every image, vgg.test() generates two probabilities \n#based on how we've ordered the cats/dogs directories.\n#It looks like column one is cats and column two is dogs\nprint preds[:5]\n\nfilenames = batches.filenames\nprint filenames[:5]\n\n#You can verify the column ordering by viewing some images\nfrom PIL import Image\nImage.open(test_path + filenames[2])\n\n#Save our test results arrays so we can use them again later\nsave_array(results_path + 'test_preds.dat', preds)\nsave_array(results_path + 'filenames.dat', filenames)", "Validate Predictions\nKeras' fit() function conveniently shows us the value of the loss function, and the accuracy, after every epoch (\"epoch\" refers to one full run through all training examples). The most important metrics for us to look at are for the validation set, since we want to check for over-fitting. \n\nTip: with our first model we should try to overfit before we start worrying about how to reduce over-fitting - there's no point even thinking about regularization, data augmentation, etc if you're still under-fitting! (We'll be looking at these techniques shortly).\n\nAs well as looking at the overall metrics, it's also a good idea to look at examples of each of:\n1. A few correct labels at random\n2. A few incorrect labels at random\n3. The most correct labels of each class (ie those with highest probability that are correct)\n4. The most incorrect labels of each class (ie those with highest probability that are incorrect)\n5. The most uncertain labels (ie those with probability closest to 0.5).\nLet's see what we can learn from these examples. (In general, this is a particularly useful technique for debugging problems in the model. However, since this model is so simple, there may not be too much to learn at this stage.)\nCalculate predictions on validation set, so we can find correct and incorrect examples:", "vgg.model.load_weights(results_path+latest_weights_filename)\n\nval_batches, probs = vgg.test(valid_path, batch_size = batch_size)\n\nfilenames = val_batches.filenames\nexpected_labels = val_batches.classes #0 or 1\n\n#Round our predictions to 0/1 to generate labels\nour_predictions = probs[:,0]\nour_labels = np.round(1-our_predictions)\n\nfrom keras.preprocessing import image\n\n#Helper function to plot images by index in the validation set \n#Plots is a helper function in utils.py\ndef plots_idx(idx, titles=None):\n plots([image.load_img(valid_path + filenames[i]) for i in idx], titles=titles)\n \n#Number of images to view for each visualization task\nn_view = 4\n\n#1. 
A few correct labels at random\ncorrect = np.where(our_labels==expected_labels)[0]\nprint \"Found %d correct labels\" % len(correct)\nidx = permutation(correct)[:n_view]\nplots_idx(idx, our_predictions[idx])\n\n#2. A few incorrect labels at random\nincorrect = np.where(our_labels!=expected_labels)[0]\nprint \"Found %d incorrect labels\" % len(incorrect)\nidx = permutation(incorrect)[:n_view]\nplots_idx(idx, our_predictions[idx])\n\n#3a. The images we most confident were cats, and are actually cats\ncorrect_cats = np.where((our_labels==0) & (our_labels==expected_labels))[0]\nprint \"Found %d confident correct cats labels\" % len(correct_cats)\nmost_correct_cats = np.argsort(our_predictions[correct_cats])[::-1][:n_view]\nplots_idx(correct_cats[most_correct_cats], our_predictions[correct_cats][most_correct_cats])\n\n#3b. The images we most confident were dogs, and are actually dogs\ncorrect_dogs = np.where((our_labels==1) & (our_labels==expected_labels))[0]\nprint \"Found %d confident correct dogs labels\" % len(correct_dogs)\nmost_correct_dogs = np.argsort(our_predictions[correct_dogs])[:n_view]\nplots_idx(correct_dogs[most_correct_dogs], our_predictions[correct_dogs][most_correct_dogs])\n\n#4a. The images we were most confident were cats, but are actually dogs\nincorrect_cats = np.where((our_labels==0) & (our_labels!=expected_labels))[0]\nprint \"Found %d incorrect cats\" % len(incorrect_cats)\nif len(incorrect_cats):\n most_incorrect_cats = np.argsort(our_predictions[incorrect_cats])[::-1][:n_view]\n plots_idx(incorrect_cats[most_incorrect_cats], our_predictions[incorrect_cats][most_incorrect_cats])\n\n#4b. The images we were most confident were dogs, but are actually cats\nincorrect_dogs = np.where((our_labels==1) & (our_labels!=expected_labels))[0]\nprint \"Found %d incorrect dogs\" % len(incorrect_dogs)\nif len(incorrect_dogs):\n most_incorrect_dogs = np.argsort(our_predictions[incorrect_dogs])[:n_view]\n plots_idx(incorrect_dogs[most_incorrect_dogs], our_predictions[incorrect_dogs][most_incorrect_dogs])\n\n#5. The most uncertain labels (ie those with probability closest to 0.5).\nmost_uncertain = np.argsort(np.abs(our_predictions-0.5))\nplots_idx(most_uncertain[:n_view], our_predictions[most_uncertain])", "Perhaps the most common way to analyze the result of a classification model is to use a confusion matrix. Scikit-learn has a convenient function we can use for this purpose:", "from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(expected_labels, our_labels)", "We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for dependents with a larger number of categories).", "plot_confusion_matrix(cm, val_batches.class_indices)", "Submit Predictions to Kaggle!\nHere's the format Kaggle requires for new submissions:\nimageId,isDog\n1242, .3984\n3947, .1000\n4539, .9082\n2345, .0000\nKaggle wants the imageId followed by the probability of the image being a dog. Kaggle uses a metric called Log Loss to evaluate your submission.", "#Load our test predictions from file\npreds = load_array(results_path + 'test_preds.dat')\nfilenames = load_array(results_path + 'filenames.dat')\n\n#Grab the dog prediction column\nisdog = preds[:,1]\nprint \"Raw Predictions: \" + str(isdog[:5])\nprint \"Mid Predictions: \" + str(isdog[(isdog < .6) & (isdog > .4)])\nprint \"Edge Predictions: \" + str(isdog[(isdog == 1) | (isdog == 0)])", "Log Loss doesn't support probability values of 0 or 1--they are undefined (and we have many). 
Fortunately, Kaggle helps us by offsetting our 0s and 1s by a very small value. So if we upload our submission now we will have lots of .99999999 and .000000001 values. This seems good, right?\nNot so. There is an additional twist due to how log loss is calculated--log loss rewards predictions that are confident and correct (p=.9999,label=1), but it punishes predictions that are confident and wrong far more (p=.0001,label=1). See visualization below.", "#Visualize Log Loss when True value = 1\n#y-axis is log loss, x-axis is probabilty that label = 1\n#As you can see Log Loss increases rapidly as we approach 0\n#But increases slowly as our predicted probability gets closer to 1\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.metrics import log_loss\n\nx = [i*.0001 for i in range(1,10000)]\ny = [log_loss([1],[[i*.0001,1-(i*.0001)]],eps=1e-15) for i in range(1,10000,1)]\n\nplt.plot(x, y)\nplt.axis([-.05, 1.1, -.8, 10])\nplt.title(\"Log Loss when true label = 1\")\nplt.xlabel(\"predicted probability\")\nplt.ylabel(\"log loss\")\n\nplt.show()\n\n#So to play it safe, we use a sneaky trick to round down our edge predictions\n#Swap all ones with .95 and all zeros with .05\nisdog = isdog.clip(min=0.05, max=0.95)\n\n#Extract imageIds from the filenames in our test/unknown directory \nfilenames = batches.filenames\nids = np.array([int(f[8:f.find('.')]) for f in filenames])", "Here we join the two columns into an array of [imageId, isDog]", "subm = np.stack([ids,isdog], axis=1)\nsubm[:5]\n\n%cd $DATA_HOME_DIR\nsubmission_file_name = 'submission1.csv'\nnp.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')\n\nfrom IPython.display import FileLink\n%cd $LESSON_HOME_DIR\nFileLink('data/redux/'+submission_file_name)", "You can download this file and submit on the Kaggle website or use the Kaggle command line tool's \"submit\" method." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tritemio/FRETBursts
notebooks/FRETBursts - 8-spot smFRET burst analysis.ipynb
gpl-2.0
[ "FRETBursts - 8-spot smFRET burst analysis\nThis notebook is part of a tutorial series for the FRETBursts burst analysis software.\n\nFor a step-by-step introduction to FRETBursts usage please refer to \nus-ALEX smFRET burst analysis.\nIn this notebook we present a typical FRETBursts\nworkflow for multi-spot smFRET burst analysis. \nBriefly, we show how to perform background estimation, burst search, burst selection, \nFRET histograms, and FRET efficiency fit using different methods. \n\nLoading the software", "from fretbursts import *\n\nsns = init_notebook()\n\nimport lmfit; lmfit.__version__\n\nimport phconvert; phconvert.__version__", "Downloading the sample data file\nThe complete example dataset can be downloaded \nfrom here.\nHere we download an 8-spot smFRET measurement file using \nthe download_file in FRETBursts:", "url = 'http://files.figshare.com/2182604/12d_New_30p_320mW_steer_3.hdf5'\n\ndownload_file(url, save_dir='./data')", "Selecting a data file", "filename = \"./data/12d_New_30p_320mW_steer_3.hdf5\"\n\nimport os\nassert os.path.exists(filename)", "Data load and Burst search\nLoad and process the data:", "d = loader.photon_hdf5(filename)", "For convenience we can set the correction coefficients right away \nso that they will be used in the subsequent analysis. \nThe correction coefficients are: \n\nleakage or bleed-through: leakage\ndirect excitation: dir_ex (ALEX-only)\ngamma-factor gamma\n\nThe direct excitation cannot be applied to non-ALEX (single-laser) \nsmFRET measurements (like the current one).", "d.leakage = 0.038\nd.gamma = 0.43", "NOTE: at any later moment, after burst search, a simple \nreassignment of these coefficient will update the burst data \nwith the new correction values.\n\nCompute background and burst search:", "d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7)\nd.burst_search(L=10, m=10, F=7)", "Perform a background plot as a function of the channel:", "mch_plot_bg(d)", "Let's take a look at the photon waiting times histograms and at the fitted background rates:", "dplot(d, hist_bg);", "Using dplot exactly in the same way as for the single-spot \ndata has now generated 8 subplots, one for each channel.\nLet's plot a timetrace for the background to see is there \nare significant variations during the measurement:", "dplot(d, timetrace_bg);", "We can look at the timetrace of the photon stream (binning):", "dplot(d, timetrace)\nxlim(2, 3); ylim(-100, 100);", "We can also open the same plot in an interactive window that allows scrolling (uncomment the following lines):", "#%matplotlib qt\n\n#dplot(d, timetrace, scroll=True);\n\n#ylim(-100, 100)\n\n#%matplotlib inline", "Burst selection and FRET\nSelecting bursts by burst size (select_bursts.size)", "gamma = d.gamma\ngamma\n\nd.gamma = 1\nds = d.select_bursts(select_bursts.size, th1=30, gamma=1)\ndplot(ds, hist_fret);\n\nds = d.select_bursts(select_bursts.size, th1=25, gamma=gamma, donor_ref=False)\ndplot(ds, hist_fret);\n\nds = d.select_bursts(select_bursts.size, th1=25, gamma=gamma)\ndplot(ds, hist_fret, weights='size', gamma=gamma);\n\ndplot(ds, scatter_fret_nd_na); ylim(0,200);", "FRET Fitting\n2-Gaussian mixture\nLet's fit the $E$ histogram with a 2-Gaussians model:", "ds.gamma = 1.\nbext.bursts_fitter(ds, weights=None)\nds.E_fitter.fit_histogram(mfit.factory_two_gaussians(), verbose=False)", "The fitted parameters are stored in a pandas DataFrame:", "ds.E_fitter.params\n\ndplot(ds, hist_fret, weights=None, show_model=True,\n show_fit_stats=True, fit_from='p2_center');", "Weighted 
Expectation Maximization\nThe expectation maximization \n(EM) method is particularly suited to resolve population \nmixtures. Note that the EM algorithm does not fit the histogram \nbut the $E$ distribution with no binning.\nFRETBursts includes a weighted version of the EM algorithm that \ncan take into account the burst size.\nThe algorithm and benchmarks with the 2-Gaussian histogram fit \nare reported here.\nYou can find the EM algorithm in fretbursts/fit/gaussian_fit.py or by typing:\nbl.two_gaussian_fit_EM??", "# bl.two_gaussian_fit_EM??\n\nEM_results = ds.fit_E_two_gauss_EM(weights=None, gamma=1.)\nEM_results", "The fitted parameters for each channel are stored in the fit_E_res attribute:", "ds.fit_E_name, ds.fit_E_res", "The model function is stored in:", "ds.fit_E_model", "Let's plot the histogram and the model with parameters from the EM fit:", "AX = dplot(ds, hist_fret, weights=None)\n\nx = np.r_[-0.2: 1.2 : 0.01]\nfor ich, (ax, E_fit) in enumerate(zip(AX.ravel(), EM_results)):\n ax.axvline(E_fit, ls='--', color='r')\n ax.plot(x, ds.fit_E_model(x, ds.fit_E_res[ich]))\n\nprint('E mean: %.2f%% E delta: %.2f%%' %\\\n (EM_results.mean()*100, (EM_results.max() - EM_results.min())*100))", "Comparing 2-Gaussian and EM fit\nTo quickly compare the 2-Gaussians with the EM fit we convert the EM fit results into a DataFrame:", "import pandas as pd\n\nEM_results = pd.DataFrame(ds.fit_E_res, columns=['p1_center', 'p1_sigma', 'p2_center', 'p2_sigma', 'p1_amplitude'])\nEM_results * 100\n\nds.E_fitter.params * 100", "And we compute the difference between the two sets of parameters:", "(ds.E_fitter.params - EM_results) * 100", "NOTE: The EM method follows the \"asymmetry\" of the \npeaks more closely because the center is a weighted mean of the bursts. \nOn the contrary, the 2-Gaussians histogram fit tends to follow \nthe peak position more and the \"asymmetric\" tails less." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mediagestalt/Adding-Context
Adding Context to Word Frequency Counts.ipynb
mit
[ "Adding Context to Word Frequency Counts\nWhile the raw data from word frequency counts is compelling, it does little but describe quantitative features of the corpus. In order to determine if the statistics are indicative of a trend in word usage we must add value to the word frequencies. In this exercise we will produce a ratio of the occurences of privacy to the number of words in the entire corpus. Then we will compare the occurences of privacy to the indivudal number of transcripts within the corpus. This data will allow us identify trends that are worthy of further investigation.\nFinally, we will determine the number of words in the corpus as a whole and investigate the 50 most common words by creating a frequency plot. The last statistic we will generate is the type/token ratio, which is a measure of the variability of the words used in the corpus.\nPart 1: Determining a ratio\nTo add context to our word frequency counts, we can work with the corpus in a number of different ways. One of the easiest is to compare the number of words in the entire corpus to the frequency of the word we are investigating.\nLet's begin by calling on all the <span style=\"cursor:help;\" title=\"a set of instructions that performs a specific task\"><b>functions</b></span> we will need. Remember that the first few sentences are calling on pre-installed <i>Python</i> <span style=\"cursor:help;\" title=\"packages of functions and code that serve specific purposes\"><b>modules</b></span>, and anything with a def at the beginning is a custom function built specifically for these exercises. The text in red describes the purpose of the function.", "# This is where the modules are imported\n\nimport nltk\nfrom os import listdir\nfrom os.path import splitext\nfrom os.path import basename\nfrom tabulate import tabulate\n\n# These functions iterate through the directory and create a list of filenames\n\ndef list_textfiles(directory):\n \"Return a list of filenames ending in '.txt'\"\n textfiles = []\n for filename in listdir(directory):\n if filename.endswith(\".txt\"):\n textfiles.append(directory + \"/\" + filename)\n return textfiles\n\n\ndef remove_ext(filename):\n \"Removes the file extension, such as .txt\"\n name, extension = splitext(filename)\n return name\n\n\ndef remove_dir(filepath):\n \"Removes the path from the file name\"\n name = basename(filepath)\n return name\n\n\ndef get_filename(filepath):\n \"Removes the path and file extension from the file name\"\n filename = remove_ext(filepath)\n name = remove_dir(filename)\n return name\n\n# These functions work on the content of the files\n\ndef read_file(filename):\n \"Read the contents of FILENAME and return as a string.\"\n infile = open(filename)\n contents = infile.read()\n infile.close()\n return contents\n\ndef count_in_list(item_to_count, list_to_search):\n \"Counts the number of a specified word within a list of words\"\n number_of_hits = 0\n for item in list_to_search:\n if item == item_to_count:\n number_of_hits += 1\n return number_of_hits", "In the next piece of code we will cycle through our directory again: first assigning readable names to our files and storing them as a list in the variable filenames; then we will remove the case and punctuation from the text, split the words into a list of tokens, and assign the words in each file to a list in the variable corpus.", "filenames = []\nfor files in list_textfiles('../Counting Word Frequencies/data'):\n files = get_filename(files)\n filenames.append(files)\n\ncorpus = []\nfor filename in 
list_textfiles('../Counting Word Frequencies/data'):\n text = read_file(filename)\n words = text.split()\n clean = [w.lower() for w in words if w.isalpha()]\n corpus.append(clean)", "Here we recreate our list from the last exercise, counting the instances of the word privacy in each file.", "for words, names in zip(corpus, filenames):\n print(\"Instances of the word \\'privacy\\' in\", names, \":\", count_in_list(\"privacy\", words))", "Next we use the len function to count the total number of words in each file.", "for files, names in zip(corpus, filenames):\n print(\"There are\", len(files), \"words in\", names)", "Now we can calculate the ratio of the word privacy to the total number of words in the file. To accomplish this we simply divide the two numbers.", "print(\"Ratio of instances of privacy to total number of words in the corpus:\")\nfor words, names in zip(corpus, filenames):\n print('{:.6f}'.format(float(count_in_list(\"privacy\", words))/(float(len(words)))),\":\",names)", "Now our descriptive statistics concerning word frequencies have added value. We can see that there has indeed been a steady increase in the frequency of the use of the word privacy in our corpus. When we investigate the yearly usage, we can see that the frequency almost doubled between 2008 and 2009, as well as dramatic increase between 2012 and 2014. This is also apparent in the difference between the 39th and the 40th sittings of Parliament. \n\nLet's package all of the data together so it can be displayed as a table or exported to a CSV file. First we will write our values to a list: raw contains the raw frequencies, and ratio contains the ratios. Then we will create a <span style=\"cursor:help;\" title=\"a type of list where the values are permanent\"><b>tuple</b></span> that contains the filename variable and includes the corresponding raw and ratio variables. Here we'll generate the ratio as a percentage.", "raw = []\nfor i in range(len(corpus)):\n raw.append(count_in_list(\"privacy\", corpus[i]))\n\nratio = [] \nfor i in range(len(corpus)):\n ratio.append('{:.3f}'.format((float(count_in_list(\"privacy\", corpus[i]))/(float(len(corpus[i])))) * 100))\n \ntable = zip(filenames, raw, ratio)", "Using the tabulate module, we will display our tuple as a table.", "print(tabulate(table, headers = [\"Filename\", \"Raw\", \"Ratio %\"], floatfmt=\".3f\", numalign=\"left\"))", "And finally, we will write the values to a CSV file called privacyFreqTable.", "import csv\nwith open('privacyFreqTable.csv','wb') as f:\n w = csv.writer(f)\n w.writerows(table)", "Part 2: Counting the number of transcripts\nAnother way we can provide context is to process the corpus in a different way. Instead of splitting the data by word, we will split it in larger chunks pertaining to each individual transcript. Each transcript corresponds to a unique debate but starts with exactly the same formatting, making the files easy to split. The text below shows the beginning of a transcript. The first words are OFFICIAL REPORT (HANSARD).\n<img src=\"hansardText.png\">\nHere we will pass the files to another variable, called corpus_1. Instead of removing capitalization and punctuation, all we will do is split the files at every occurence of OFFICIAL REPORT (HANSARD).", "corpus_1 = []\nfor filename in list_textfiles('../Counting Word Frequencies/data'):\n text = read_file(filename)\n words = text.split(\" OFFICIAL REPORT (HANSARD)\")\n corpus_1.append(words)", "Now, we can count the number of files in each dataset. 
This is also an important activity for error-checking. While it is easy to trust the numerical output of the code when it works successfully, we must always be sure to check that the code is actually performing in exactly the way we want it to. In this case, these numbers can be cross-referenced with the original XML data, where each transcript exists as its own file. A quick check of the directory shows that the numbers are correct.", "for files, names in zip(corpus_1, filenames):\n print(\"There are\", len(files), \"files in\", names)", "Here is a screenshot of some of the raw data. We can see that there are <u>97</u> files in 2006, <u>117</u> in 2007 and <u>93</u> in 2008. The rest of the data is also correct. \n<img src=\"filecount.png\">\nNow we can compare the number of occurrences of privacy with the number of debates occurring in each dataset.", "for names, files, words in zip(filenames, corpus_1, corpus):\n print(\"In\", names, \"there were\", len(files), \"debates. The word privacy was said\", \\\n count_in_list('privacy', words), \"times.\")", "These numbers confirm our earlier results. There is a clear indication that the usage of the term privacy is increasing, with major changes occurring between the years 2008 and 2009, as well as between 2012 and 2014. This trend is also clearly observable between the 39th and 40th sittings of Parliament. \n\nPart 3: Looking at the corpus as a whole\nWhile chunking the corpus into pieces can help us understand the distribution or dispersion of words throughout the corpus, it's valuable to look at the corpus as a whole. Here we will create a third corpus variable corpus_3 that only contains the files named 39, 40, and 41. Note the new directory named data2. We only need these files; if we used all of the files we would literally duplicate the results.", "corpus_3 = []\nfor filename in list_textfiles('../Counting Word Frequencies/data2'):\n text = read_file(filename)\n words = text.split()\n clean = [w.lower() for w in words if w.isalpha()]\n corpus_3.append(clean)", "Now we will combine the three lists into one large list and assign it to the variable large.", "large = list(sum(corpus_3, []))", "We can use the same calculations to determine the total number of occurrences of privacy, as well as the total number of words in the corpus. We can also calculate the total ratio of privacy to the total number of words.", "print(\"There are\", count_in_list('privacy', large), \"occurences of the word 'privacy' and a total of\", \\\nlen(large), \"words.\")\n\nprint(\"The ratio of instances of privacy to total number of words in the corpus is:\", \\\n'{:.6f}'.format(float(count_in_list(\"privacy\", large))/(float(len(large)))), \"or\", \\\n'{:.3f}'.format((float(count_in_list(\"privacy\", large))/(float(len(large)))) * 100),\"%\")", "Another type of word frequency statistic we can generate is a type/token ratio. The types are the total number of unique words in the corpus, while the tokens are the total number of words. The type/token ratio is used to determine the variability of the language used in the text. The higher the ratio, the more complex the text will be. 
First we'll determine the total number of types, using <i>Python's</i> set function.", "print(\"There are\", (len(set(large))), \"unique words in the Hansard corpus.\")", "Now we can divide the types by the tokens to determine the ratio.", "print(\"The type/token ratio is:\", ('{:.6f}'.format(len(set(large))/(float(len(large))))), \"or\",\\\n'{:.3f}'.format(len(set(large))/(float(len(large)))*100),\"%\")", "Finally, we will use the NLTK module to create a graph that shows the top 50 most frequent words in the Hansard corpus. Although privacy will not appear in the graph, it's always interesting to see what types of words are most common, and what their distribution is. NLTK will be introduced in more detail in the next section featuring concordance outputs, but here all we need to know is that we assign our variable large to the NLTK function Text in order to work with the corpus data. From there we can determine the frequency distribution for the whole text.", "text = nltk.Text(large)\nfd = nltk.FreqDist(text)", "Here we will assign the frequency distribution to the plot function to produce a graph. While it's a little hard to read, the most commonly used word in the Hansard corpus is the, with a frequency just over 400,000 occurrences. The next most frequent word is to, which only has a frequency of about 225,000 occurrences, almost half that of the first most common word. The first 10 most frequent words appear with a much greater frequency than any of the other words in the corpus.", "%matplotlib inline\nfd.plot(50,cumulative=False)", "Another feature of the NLTK frequency distribution function is the generation of a list of hapaxes. These are words that appear only once in the entire corpus. While not meaningful for this study, it's an interesting way to explore the data.", "fd.hapaxes()", "The next section will use NLTK to generate concordance outputs featuring the word privacy." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/dwd/cmip6/models/sandbox-2/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: SANDBOX-2\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'sandbox-2', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. 
Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. 
Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. 
Advection --&gt; Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momemtum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? 
if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. 
Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. 
Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb
apache-2.0
[ "Multi-task recommenders\nLearning Objectives\n 1. Training a model which focuses on ratings.\n 2. Training a model which focuses on retrieval.\n 3. Training a joint model that assigns positive weights to both ratings & retrieval models.\nIntroduction\nIn the basic retrieval notebook we built a retrieval system using movie watches as positive interaction signals.\nIn many applications, however, there are multiple rich sources of feedback to draw upon. For example, an e-commerce site may record user visits to product pages (abundant, but relatively low signal), image clicks, adding to cart, and, finally, purchases. It may even record post-purchase signals such as reviews and returns.\nIntegrating all these different forms of feedback is critical to building systems that users love to use, and that do not optimize for any one metric at the expense of overall performance.\nIn addition, building a joint model for multiple tasks may produce better results than building a number of task-specific models. This is especially true where some data is abundant (for example, clicks), and some data is sparse (purchases, returns, manual reviews). In those scenarios, a joint model may be able to use representations learned from the abundant task to improve its predictions on the sparse task via a phenomenon known as transfer learning. For example, this paper shows that a model predicting explicit user ratings from sparse user surveys can be substantially improved by adding an auxiliary task that uses abundant click log data.\nIn this jupyter notebook, we are going to build a multi-objective recommender for Movielens, using both implicit (movie watches) and explicit signals (ratings).\nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. 
Refer to the solution for reference.\nImports\nLet's first get our imports out of the way.", "# Installing the necessary libraries.\n!pip install -q tensorflow-recommenders\n!pip install -q --upgrade tensorflow-datasets", "NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.", "# Importing the necessary modules\nimport os\nimport pprint\nimport tempfile\n\nfrom typing import Dict, Text\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\nimport tensorflow_recommenders as tfrs", "Preparing the dataset\nWe're going to use the Movielens 100K dataset.", "ratings = tfds.load('movielens/100k-ratings', split=\"train\")\nmovies = tfds.load('movielens/100k-movies', split=\"train\")\n\n# Select the basic features.\nratings = ratings.map(lambda x: {\n \"movie_title\": x[\"movie_title\"],\n \"user_id\": x[\"user_id\"],\n \"user_rating\": x[\"user_rating\"],\n})\nmovies = movies.map(lambda x: x[\"movie_title\"])", "And repeat our preparations for building vocabularies and splitting the data into a train and a test set:", "# Randomly shuffle data and split between train and test.\ntf.random.set_seed(42)\nshuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)\n\ntrain = shuffled.take(80_000)\ntest = shuffled.skip(80_000).take(20_000)\n\nmovie_titles = movies.batch(1_000)\nuser_ids = ratings.batch(1_000_000).map(lambda x: x[\"user_id\"])\n\nunique_movie_titles = np.unique(np.concatenate(list(movie_titles)))\nunique_user_ids = np.unique(np.concatenate(list(user_ids)))", "A multi-task model\nThere are two critical parts to multi-task recommenders:\n\nThey optimize for two or more objectives, and so have two or more losses.\nThey share variables between the tasks, allowing for transfer learning.\n\nIn this jupyter notebook, we will define our models as before, but instead of having a single task, we will have two tasks: one that predicts ratings, and one that predicts movie watches.\nThe user and movie models are as before:\n```python\nuser_model = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.StringLookup(\n vocabulary=unique_user_ids, mask_token=None),\n # We add 1 to account for the unknown token.\n tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)\n])\nmovie_model = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.StringLookup(\n vocabulary=unique_movie_titles, mask_token=None),\n tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)\n])\n```\nHowever, now we will have two tasks. The first is the rating task:\npython\ntfrs.tasks.Ranking(\n loss=tf.keras.losses.MeanSquaredError(),\n metrics=[tf.keras.metrics.RootMeanSquaredError()],\n)\nIts goal is to predict the ratings as accurately as possible.\nThe second is the retrieval task:\npython\ntfrs.tasks.Retrieval(\n metrics=tfrs.metrics.FactorizedTopK(\n candidates=movies.batch(128)\n )\n)\nAs before, this task's goal is to predict which movies the user will or will not watch.\nPutting it together\nWe put it all together in a model class.\nThe new component here is that - since we have two tasks and two losses - we need to decide on how important each loss is. We can do this by giving each of the losses a weight, and treating these weights as hyperparameters. 
If we assign a large loss weight to the rating task, our model is going to focus on predicting ratings (but still use some information from the retrieval task); if we assign a large loss weight to the retrieval task, it will focus on retrieval instead.", "class MovielensModel(tfrs.models.Model):\n\n def __init__(self, rating_weight: float, retrieval_weight: float) -> None:\n # We take the loss weights in the constructor: this allows us to instantiate\n # several model objects with different loss weights.\n\n super().__init__()\n\n embedding_dimension = 32\n\n # User and movie models.\n self.movie_model: tf.keras.layers.Layer = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.StringLookup(\n vocabulary=unique_movie_titles, mask_token=None),\n tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)\n ])\n self.user_model: tf.keras.layers.Layer = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.StringLookup(\n vocabulary=unique_user_ids, mask_token=None),\n tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)\n ])\n\n # A small model to take in user and movie embeddings and predict ratings.\n # We can make this as complicated as we want as long as we output a scalar\n # as our prediction.\n self.rating_model = tf.keras.Sequential([\n tf.keras.layers.Dense(256, activation=\"relu\"),\n tf.keras.layers.Dense(128, activation=\"relu\"),\n tf.keras.layers.Dense(1),\n ])\n\n # The tasks.\n self.rating_task: tf.keras.layers.Layer = tfrs.tasks.Ranking(\n loss=tf.keras.losses.MeanSquaredError(),\n metrics=[tf.keras.metrics.RootMeanSquaredError()],\n )\n self.retrieval_task: tf.keras.layers.Layer = tfrs.tasks.Retrieval(\n metrics=tfrs.metrics.FactorizedTopK(\n candidates=movies.batch(128).map(self.movie_model)\n )\n )\n\n # The loss weights.\n self.rating_weight = rating_weight\n self.retrieval_weight = retrieval_weight\n\n def call(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:\n # We pick out the user features and pass them into the user model.\n user_embeddings = self.user_model(features[\"user_id\"])\n # And pick out the movie features and pass them into the movie model.\n movie_embeddings = self.movie_model(features[\"movie_title\"])\n \n return (\n user_embeddings,\n movie_embeddings,\n # We apply the multi-layered rating model to a concatentation of\n # user and movie embeddings.\n self.rating_model(\n tf.concat([user_embeddings, movie_embeddings], axis=1)\n ),\n )\n\n def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:\n\n ratings = features.pop(\"user_rating\")\n\n user_embeddings, movie_embeddings, rating_predictions = self(features)\n\n # We compute the loss for each task.\n rating_loss = self.rating_task(\n labels=ratings,\n predictions=rating_predictions,\n )\n retrieval_loss = self.retrieval_task(user_embeddings, movie_embeddings)\n\n # And combine them using the loss weights.\n return (self.rating_weight * rating_loss\n + self.retrieval_weight * retrieval_loss)", "Rating-specialized model\nDepending on the weights we assign, the model will encode a different balance of the tasks. 
Let's start with a model that only considers ratings.", "# Here, configuring the model with losses and metrics.\n# TODO 1: Your code goes here.\n\n\ncached_train = train.shuffle(100_000).batch(8192).cache()\ncached_test = test.batch(4096).cache()\n\n# Training the ratings model.\nmodel.fit(cached_train, epochs=3)\nmetrics = model.evaluate(cached_test, return_dict=True)\n\nprint(f\"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.\")\nprint(f\"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.\")", "The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its accuracy at 100 is almost 4 times worse than a model trained solely to predict watches.\nRetrieval-specialized model\nLet's now try a model that focuses on retrieval only.", "# Here, configuring the model with losses and metrics.\n# TODO 2: Your code goes here.\n\n\n# Training the retrieval model.\nmodel.fit(cached_train, epochs=3)\nmetrics = model.evaluate(cached_test, return_dict=True)\n\nprint(f\"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.\")\nprint(f\"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.\")", "We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings.\nJoint model\nLet's now train a model that assigns positive weights to both tasks.", "# Here, configuring the model with losses and metrics.\n# TODO 3: Your code goes here.\n\n\n# Training the joint model.\nmodel.fit(cached_train, epochs=3)\nmetrics = model.evaluate(cached_test, return_dict=True)\n\nprint(f\"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.\")\nprint(f\"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.\")", "The result is a model that performs roughly as well on both tasks as each specialized model. \nWhile the results here do not show a clear accuracy benefit from a joint model in this case, multi-task learning is in general an extremely useful tool. We can expect better results when we can transfer knowledge from a data-abundant task (such as clicks) to a closely related data-sparse task (such as purchases)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aboucaud/python-euclid2016
notebooks/03-Plotting.ipynb
bsd-3-clause
[ "Plotting\nThere are several plotting modules in python. Matplolib is the most complete/versatile package for all 2D plotting. The easiest way to construct a new plot is to have a look at http://matplotlib.org/gallery.html and get inspiration from the available examples. The official documentation can be found at: http://matplotlib.org/contents.html\n\nQuick plots, or Matplotlib dirty usage\nProper use of Matplotlib\nSubplots\nImages and contours\nAnimation\nStyles\nD3\nOther honerable mentions\n\nQuick plots, or Matplotlib dirty usage", "%matplotlib\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# To get interactive plotting (otherwise you need to \n# type plt.show() at the end of the plotting commands)\nplt.ion() \n\nx = np.linspace(0, 10)\ny = np.sin(x)\n\n# basic X/Y line plotting with '--' dashed line and linewidth of 2\nplt.plot(x, y, '--', label='first line')\n\n# overplot a dotted line on the previous plot\nplt.plot(x, np.cos(x)*np.cos(x/2), '.', linewidth=3, label='other') \n\nx_axis_label = plt.xlabel('x axis') #change the label of the xaxis\n\n# change your mind about the label : you do not need to replot everything !\nplt.xlabel('another x axis')\n\n# or you can use the re-tuned object\nx_axis_label.set_text('changed it from the object itself')\n\n# simply add the legend (from the previous label)\nlegend = plt.legend() \n\nplt.savefig('plot.png') # save the current figure in png\nplt.savefig('plot.eps') # save the current figure in ps, no need to redo it !\n\n!ls", "Proper use of Matplotlib\nWe will use interactive plots inline in the notebook. This feature is enabled through:", "%matplotlib\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# define a figure which can contains several plots, you can define resolution and so on here...\nfig2 = plt.figure()\n\n# add one axis, axes are actual plots where you can put data.fits (nx, ny, index)\nax = fig2.add_subplot(1, 1, 1)", "Add a cruve with a title to the plot", "x = np.linspace(0, 2*np.pi)\nax.plot(x, np.sin(x), '+')\nax.set_title('this title')\nplt.show()\n\n# is a simpler syntax to add one axis into the figure (we will stick to this)\nfig, ax = plt.subplots()\nax.plot(x, np.sin(x), '+')\nax.set_title('simple subplot')", "A long list of markers can be found at http://matplotlib.org/api/markers_api.html\nas for the colors, there is a nice discussion at http://stackoverflow.com/questions/22408237/named-colors-in-matplotlib\nAll the components of a figure can be accessed throught the 'Figure' object", "print(type(fig))\n\nprint(dir(fig))\n\nprint(fig.axes)\n\nprint('This is the x-axis object', fig.axes[0].xaxis)\nprint('And this is the y-axis object', fig.axes[0].yaxis)\n\n# arrow pointing to the origin of the axes\nax_arrow = ax.annotate('ax = fig.axes[0]',\n xy=(0, -1), # tip of the arrow\n xytext=(1, -0.5), # location of the text\n arrowprops={'facecolor':'red', 'shrink':0.05})\n\n# arrow pointing to the x axis\nx_ax_arrow = ax.annotate('ax.xaxis',\n xy=(3, -1), # tip of the arrow\n xytext=(3, -0.5), # location of the text\n arrowprops={'facecolor':'red', 'shrink':0.05})\nxax = ax.xaxis\n\n# arrow pointing to the y axis\ny_ax_arrow = ax.annotate('ax.yaxis',\n xy=(0, 0), # tip of the arrow\n xytext=(1, 0.5), # location of the text\n arrowprops={'facecolor':'red', 'shrink':0.05})\n", "Add a labels to the x and y axes", "# add some ascii text label\n# this is equivelant to:\n# ax.set_xlabel('x')\nxax.set_label_text('x')\n\n# add latex rendered text to the y axis\nax.set_ylabel('$sin(x)$', size=20, color='g', 
rotation=0)", "Finally dump the figure to a png file", "fig.savefig('myplot.png')\n\n!ls\n!eog myplot.png", "Let's define a function that creates an empty base plot to which we will add\nstuff for each demonstration. The function returns the figure and the axes object.", "from matplotlib import pyplot as plt\nimport numpy as np\n\ndef create_base_plot():\n fig, ax = plt.subplots()\n ax.set_title('sample figure')\n return fig, ax\n\ndef plot_something():\n fig, ax = create_base_plot()\n x = np.linspace(0, 2*np.pi)\n ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')\n plt.show()", "Log plots", "fig, ax = create_base_plot()\n\n# normal-xlog plots\nax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')\n\n# clear the plot and plot a function using the y axis in log scale\nax.clear()\nax.semilogy(x, np.exp(x))\n\n# you can (un)set it, whenever you want\n#ax.set_yscale('linear') # change the y axis to linear scale\n#ax.set_yscale('log') # change the y axis to log scale\n\n# you can also make loglog plots\n#ax.clear()\n#ax.loglog(x, np.exp(x)*np.sin(x))\nplt.setp(ax, **dict(yscale='log', xscale='log'))", "This is equivalent to:\nax.plot(x, np.exp(x)*np.sin(x))\nplt.setp(ax, 'yscale', 'log', 'xscale', 'log')\n\nHere we have introduced a new method of setting property values via pyplot.setp.\nsetp takes as first argument a matplotlib object. Each pair of positional arguments\nafter that is treated as a key-value pair for the set method name and its value. For\nexample:\nax.set_xscale('linear')\nbecomes\nplt.setp(ax, 'xscale', 'linear')\nThis is useful if you need to set lots of properties, such as:", "plt.setp(ax, 'xscale', 'linear', 'xlim', [1, 5], 'ylim', [0.1, 10], 'xlabel', 'x',\n 'ylabel', 'y', 'title', 'foo',\n 'xticks', [1, 2, 3, 4, 5],\n 'yticks', [0.1, 1, 10],\n 'yticklabels', ['low', 'medium', 'high'])", "Histograms", "fig1, ax = create_base_plot()\nn, bins, patches = ax.hist(np.random.normal(0, 0.1, 10000), bins=50)", "Subplots\nMaking subplots is relatively easy. 
Just pass the shape of the grid of plots to plt.subplots() that was used in the above examples.", "# Create one figure with two plots/axes, with their xaxis shared\nfig, (ax1, ax2) = plt.subplots(2, sharex=True)\nax1.plot(x, np.sin(x), '-.', color='r', label='first line')\nother = ax2.plot(x, np.cos(x)*np.cos(x/2), 'o-', linewidth=3, label='other')\nax1.legend()\nax2.legend()\n\n# adjust the spacing between the axes\nfig.subplots_adjust(hspace=0.0)\n\n# add a scatter plot to the first axis\nax1.scatter(x, np.sin(x)+np.random.normal(0, 0.1, np.size(x)))", "create a 3x3 grid of plots", "fig, axs = plt.subplots(3, 3)\n\nprint(axs.shape)\n\n# add an index to all the subplots\nfor ax_index, ax in enumerate(axs.flatten()):\n ax.set_title(ax_index)\n\n# remove all ticks\nfor ax in axs.flatten():\n plt.setp(ax, 'xticks', [], 'yticks', [])\n\nfig.subplots_adjust(hspace=0, wspace=0)\n\n# plot a curve in the diagonal subplots\nfor ax, func in zip(axs.diagonal(), [np.sin, np.cos, np.exp]):\n ax.plot(x, func(x))", "Images and contours", "xx, yy = np.mgrid[-2:2:100j, -2:2:100j]\nimg = np.sin(xx) + np.cos(yy)\n\nfig, ax = create_base_plot()\n\n# to have 0,0 in the lower left corner and no interpolation\nimg_plot = ax.imshow(img, origin='lower', interpolation='None')\n# to add a grid to any axis\nax.grid() \n\nimg_plot.set_cmap('hot') # changing the colormap\n\nimg_plot.set_cmap('spectral') # changing the colormap\ncolorb = fig.colorbar(img_plot) # adding a color bar\n\nimg_plot.set_clim(-0.5, 0.5) # changing the dynamical range\n\n# add contour levels\nimg_contours = ax.contour(img, [-1, -0.5, 0.0, 0.5])\nplt.clabel(img_contours, inline=True, fontsize=20)", "Animation", "from IPython.display import HTML\nimport matplotlib.animation as animation\n\ndef f(x, y):\n return np.sin(x) + np.cos(y)\n\nfig, ax = create_base_plot()\nim = ax.imshow(f(xx, yy), cmap=plt.get_cmap('viridis'))\n\ndef updatefig(*args):\n global xx, yy\n xx += np.pi / 15.\n yy += np.pi / 20.\n im.set_array(f(xx, yy))\n return im,\n\nani = animation.FuncAnimation(fig, updatefig, interval=50, blit=True)\n_ = ani.to_html5_video()\n\n# change title during animation!!\nax.set_title('runtime title')", "Styles\nConfiguring matplotlib\nMost of the matplotlib code chunk that are written are usually about styling and not actual plotting. \nOne feature that might be of great help if you are in this case is to use the matplotlib.style module.\nIn this notebook, we will go through the available matplotlib styles and their corresponding configuration files. 
Then we will explain the two ways of using the styles and finally show you how to write a personalized style.\n\nPre-configured style files\nAn available variable returns a list of the names of some pre-configured matplotlib style files.", "print('\\n'.join(plt.style.available))\n\nx = np.arange(0, 10, 0.01)\n\ndef f(x, t):\n return np.sin(x) * np.exp(1 - x / 10 + t / 2)\n\ndef simple_plot(style):\n plt.figure()\n with plt.style.context(style, after_reset=True):\n for t in range(5):\n plt.plot(x, f(x, t))\n plt.title('Simple plot')\n\nsimple_plot('ggplot')\n\nsimple_plot('dark_background')\n\nsimple_plot('grayscale')\n\nsimple_plot('fivethirtyeight')\n\nsimple_plot('bmh')", "Content of the style files\nA matplotlib style file is a simple text file containing the desired matplotlib rcParam configuration, with the .mplstyle extension.\nLet's display the content of the 'ggplot' style.", "import os\nggplotfile = os.path.join(plt.style.core.BASE_LIBRARY_PATH, 'ggplot.mplstyle')\n\n!cat $ggplotfile", "Maybe the most interesting feature of this style file is the redefinition of the color cycle using hexadecimal notation. This allows the user to define is own color palette for its multi-line plots.\nuse versus context\nThere are two ways of using the matplotlib styles.\n\nplt.style.use(style)\nplt.style.context(style):\n\nThe use method applied at the beginning of a script will be the default choice in most cases when the style is to be set for the entire script. The only issue is that it sets the matplotlib style for the given Python session, meaning that a second call to use with a different style will only apply new style parameters and not reset the first style. That is if the axes.grid is set to True by the first style and there is nothing concerning the grid in the second style config, the grid will remain set to True which is not matplotlib default.\nOn the contrary, the context method will be useful when only one or two figures are to be set to a given style. It shall be used with the with statement to create a context manager in which the plot will be made.\nLet's illustrate this.", "plt.style.use('ggplot')\n\nplt.figure()\nplt.plot(x, f(x, 0))", "The 'ggplot' style has been applied to the current session. One of its features that differs from standard matplotlib configuration is to put the ticks outside the main figure (axes.axisbelow: True)", "with plt.style.context('dark_background'):\n plt.figure()\n plt.plot(x, f(x, 1))\n", "Now using the 'dark_background' style as a context, we can spot the main changes (background, line color, axis color) and we can also see the outside ticks, although they are not part of this particular style. 
This is the 'ggplot' axes.axisbelow setup that has not been overwritten by the new style.\nOnce the with block has ended, the style goes back to its previous status, that is the 'ggplot' style.", "plt.figure()\nplt.plot(x, f(x, 2))", "Custom style file\nStarting from these configured files, it is easy to now create our own styles for textbook figures and talk figures and switch from one to another in a single code line plt.style.use('mystyle') at the beginning of the plotting script.\nWhere to create it ?\nmatplotlib will look for the user style files at the following path :", "print(plt.style.core.USER_LIBRARY_PATHS)", "Note: The directory corresponding to this path will most probably not exist so one will need to create it.", "styledir = plt.style.core.USER_LIBRARY_PATHS[0]\n\n!mkdir -p $styledir", "One can now copy an existing style file to serve as a boilerplate.", "mystylefile = os.path.join(styledir, 'mystyle.mplstyle')\n\n!cp $ggplotfile $mystylefile\n\n!cd $styledir\n\n%%file mystyle.mplstyle\n\nfont.size: 16.0 # large font\n\naxes.linewidth: 2\naxes.grid: True \naxes.titlesize: x-large\naxes.labelsize: x-large\naxes.labelcolor: 555555\naxes.axisbelow: True \n \nxtick.color: 555555\nxtick.direction: out\n\nytick.color: 555555\nytick.direction: out\n \ngrid.color: white\ngrid.linestyle: : # dotted line", "D3", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport mpld3\nmpld3.enable_notebook()\n\n# Scatter points\nfig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))\nax.grid(color='white', linestyle='solid')\n\nN = 50\nscatter = ax.scatter(np.random.normal(size=N),\n np.random.normal(size=N),\n c=np.random.random(size=N),\n s = 1000 * np.random.random(size=N),\n alpha=0.3,\n cmap=plt.cm.jet)\n\nax.set_title(\"D3 Scatter Plot\", size=18);\n\n\nimport mpld3\nmpld3.display(fig)\n\nfrom mpld3 import plugins\n\nfig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))\nax.grid(color='white', linestyle='solid')\n\nN = 50\nscatter = ax.scatter(np.random.normal(size=N),\n np.random.normal(size=N),\n c=np.random.random(size=N),\n s = 1000 * np.random.random(size=N),\n alpha=0.3,\n cmap=plt.cm.jet)\n\nax.set_title(\"D3 Scatter Plot (with tooltips!)\", size=20)\n\nlabels = ['point {0}'.format(i + 1) for i in range(N)]\nfig.plugins = [plugins.PointLabelTooltip(scatter, labels)]", "Seaborn", "%matplotlib\n\nplot_something()\n\nimport seaborn\nplot_something()", "source: https://github.com/mwaskom/seaborn\ntutorial: https://www.youtube.com/watch?v=E8OQAdQlljE\n\nOther honerable mentions\n\nMayavi: http://code.enthought.com/projects/mayavi/\nplotly: https://plot.ly/\nbokeh: http://bokeh.pydata.org/en/latest/\npygal: http://www.pygal.org/en/latest/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
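The custom style section above writes mystyle.mplstyle but never applies it. A possible follow-up, assuming the file actually ended up in one of the plt.style.core.USER_LIBRARY_PATHS directories, is to reload the style library and use the new style by name:

```python
import matplotlib.pyplot as plt

# Re-scan the style directories so the freshly written file is found
# without restarting the kernel.
plt.style.reload_library()
print('mystyle' in plt.style.available)  # True once the file is in place

# The custom style now behaves exactly like the built-in ones.
with plt.style.context('mystyle'):
    plt.figure()
    plt.plot(range(10))
```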
batfish/pybatfish
docs/source/notebooks/forwarding.ipynb
apache-2.0
[ "import pandas as pd\nfrom pybatfish.client.session import Session\nfrom pybatfish.datamodel import *\n\npd.set_option(\"display.width\", 300) \npd.set_option(\"display.max_columns\", 20) \npd.set_option(\"display.max_rows\", 1000) \npd.set_option(\"display.max_colwidth\", None)\n\n# Configure all pybatfish loggers to use WARN level\nimport logging\nlogging.getLogger('pybatfish').setLevel(logging.WARN)\n\nbf = Session(host=\"localhost\")\n\n", "Packet Forwarding\nThis category of questions allows you to query how different types of\ntraffic is forwarded by the network and if endpoints are able to\ncommunicate. You can analyze these aspects in a few different ways.\n\nTraceroute\nBi-directional Traceroute\nReachability\nBi-directional Reachability\nLoop detection\nMultipath Consistency for host-subnets\nMultipath Consistency for router loopbacks", "bf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Traceroute\nTraces the path(s) for the specified flow.\nPerforms a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified.\nUnlike a real traceroute, this traceroute is directional. That is, for it to succeed, the reverse connectivity is not needed. This feature can help debug connectivity issues by decoupling the two directions.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \nstartLocation | Location (node and interface combination) to start tracing from. | LocationSpec | False | \nheaders | Packet header constraints. | HeaderConstraints | False | \nmaxTraces | Limit the number of traces returned. | int | True | \nignoreFilters | If set, filters/ACLs encountered along the path are ignored. | bool | True | \nInvocation", "result = bf.q.traceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nFlow | The flow | Flow\nTraces | The traces for this flow | Set of Trace\nTraceCount | The total number traces for this flow | int\nRetrieving the flow definition", "result.Flow", "Retrieving the detailed Trace information", "len(result.Traces)\n\nresult.Traces[0]", "Evaluating the first Trace", "result.Traces[0][0]", "Retrieving the disposition of the first Trace", "result.Traces[0][0].disposition", "Retrieving the first hop of the first Trace", "result.Traces[0][0][0]", "Retrieving the last hop of the first Trace", "result.Traces[0][0][-1]\n\nbf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Bi-directional Traceroute\nTraces the path(s) for the specified flow, along with path(s) for reverse flows.\nThis question performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified.\nIf the trace succeeds, a traceroute is performed in the reverse direction.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \nstartLocation | Location (node and interface combination) to start tracing from. | LocationSpec | False | \nheaders | Packet header constraints. | HeaderConstraints | False | \nmaxTraces | Limit the number of traces returned. | int | True | \nignoreFilters | If set, filters/ACLs encountered along the path are ignored. 
| bool | True | \nInvocation", "result = bf.q.bidirectionalTraceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nForward_Flow | The forward flow. | Flow\nForward_Traces | The forward traces. | List of Trace\nNew_Sessions | Sessions initialized by the forward trace. | List of str\nReverse_Flow | The reverse flow. | Flow\nReverse_Traces | The reverse traces. | List of Trace\nRetrieving the Forward flow definition", "result.Forward_Flow", "Retrieving the detailed Forward Trace information", "len(result.Forward_Traces)\n\nresult.Forward_Traces[0]", "Evaluating the first Forward Trace", "result.Forward_Traces[0][0]", "Retrieving the disposition of the first Forward Trace", "result.Forward_Traces[0][0].disposition", "Retrieving the first hop of the first Forward Trace", "result.Forward_Traces[0][0][0]", "Retrieving the last hop of the first Forward Trace", "result.Forward_Traces[0][0][-1]", "Retrieving the Return flow definition", "result.Reverse_Flow", "Retrieving the detailed Return Trace information", "len(result.Reverse_Traces)\n\nresult.Reverse_Traces[0]", "Evaluating the first Reverse Trace", "result.Reverse_Traces[0][0]", "Retrieving the disposition of the first Reverse Trace", "result.Reverse_Traces[0][0].disposition", "Retrieving the first hop of the first Reverse Trace", "result.Reverse_Traces[0][0][0]", "Retrieving the last hop of the first Reverse Trace", "result.Reverse_Traces[0][0][-1]\n\nbf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Reachability\nFinds flows that match the specified path and header space conditions.\nSearches across all flows that match the specified conditions and returns examples of such flows. This question can be used to ensure that certain services are globally accessible and parts of the network are perfectly isolated from each other.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \npathConstraints | Constraint the path a flow can take (start/end/transit locations). | PathConstraints | True | \nheaders | Packet header constraints. | HeaderConstraints | True | \nactions | Only return flows for which the disposition is from this set. | DispositionSpec | True | success\nmaxTraces | Limit the number of traces returned. | int | True | \ninvertSearch | Search for packet headers outside the specified headerspace, rather than inside the space. | bool | True | \nignoreFilters | Do not apply filters/ACLs during analysis. 
| bool | True | \nInvocation", "result = bf.q.reachability(pathConstraints=PathConstraints(startLocation = '/as2/'), headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'), actions='SUCCESS').answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nFlow | The flow | Flow\nTraces | The traces for this flow | Set of Trace\nTraceCount | The total number traces for this flow | int\nRetrieving the flow definition", "result.Flow", "Retrieving the detailed Trace information", "len(result.Traces)\n\nresult.Traces[0]", "Evaluating the first Trace", "result.Traces[0][0]", "Retrieving the disposition of the first Trace", "result.Traces[0][0].disposition", "Retrieving the first hop of the first Trace", "result.Traces[0][0][0]", "Retrieving the last hop of the first Trace", "result.Traces[0][0][-1]\n\nbf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Bi-directional Reachability\nSearches for successfully delivered flows that can successfully receive a response.\nPerforms two reachability analyses, first originating from specified sources, then returning back to those sources. After the first (forward) pass, sets up sessions in the network and creates returning flows for each successfully delivered forward flow. The second pass searches for return flows that can be successfully delivered in the presence of the setup sessions.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \npathConstraints | Constraint the path a flow can take (start/end/transit locations). | PathConstraints | True | \nheaders | Packet header constraints. | HeaderConstraints | False | \nreturnFlowType | Specifies the type of return flows to search. | str | True | SUCCESS\nInvocation", "result = bf.q.bidirectionalReachability(pathConstraints=PathConstraints(startLocation = '/as2dist1/'), headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'), returnFlowType='SUCCESS').answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nForward_Flow | The forward flow. | Flow\nForward_Traces | The forward traces. | List of Trace\nNew_Sessions | Sessions initialized by the forward trace. | List of str\nReverse_Flow | The reverse flow. | Flow\nReverse_Traces | The reverse traces. 
| List of Trace\nRetrieving the Forward flow definition", "result.Forward_Flow", "Retrieving the detailed Forward Trace information", "len(result.Forward_Traces)\n\nresult.Forward_Traces[0]", "Evaluating the first Forward Trace", "result.Forward_Traces[0][0]", "Retrieving the disposition of the first Forward Trace", "result.Forward_Traces[0][0].disposition", "Retrieving the first hop of the first Forward Trace", "result.Forward_Traces[0][0][0]", "Retrieving the last hop of the first Forward Trace", "result.Forward_Traces[0][0][-1]", "Retrieving the Return flow definition", "result.Reverse_Flow", "Retrieving the detailed Return Trace information", "len(result.Reverse_Traces)\n\nresult.Reverse_Traces[0]", "Evaluating the first Reverse Trace", "result.Reverse_Traces[0][0]", "Retrieving the disposition of the first Reverse Trace", "result.Reverse_Traces[0][0].disposition", "Retrieving the first hop of the first Reverse Trace", "result.Reverse_Traces[0][0][0]", "Retrieving the last hop of the first Reverse Trace", "result.Reverse_Traces[0][0][-1]\n\nbf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Loop detection\nDetects forwarding loops.\nSearches across all possible flows in the network and returns example flows that will experience forwarding loops.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \nmaxTraces | Limit the number of traces returned. | int | True | \nInvocation", "result = bf.q.detectLoops().answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nFlow | The flow | Flow\nTraces | The traces for this flow | Set of Trace\nTraceCount | The total number traces for this flow | int\nPrint the first 5 rows of the returned Dataframe", "result.head(5)\n\nbf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Multipath Consistency for host-subnets\nValidates multipath consistency between all pairs of subnets.\nSearches across all flows between subnets that are treated differently (i.e., dropped versus forwarded) by different paths in the network and returns example flows.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \nmaxTraces | Limit the number of traces returned. | int | True | \nInvocation", "result = bf.q.subnetMultipathConsistency().answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nFlow | The flow | Flow\nTraces | The traces for this flow | Set of Trace\nTraceCount | The total number traces for this flow | int\nRetrieving the flow definition", "result.Flow", "Retrieving the detailed Trace information", "len(result.Traces)\n\nresult.Traces[0]", "Evaluating the first Trace", "result.Traces[0][0]", "Retrieving the disposition of the first Trace", "result.Traces[0][0].disposition", "Retrieving the first hop of the first Trace", "result.Traces[0][0][0]", "Retrieving the last hop of the first Trace", "result.Traces[0][0][-1]\n\nbf.set_network('generate_questions')\n\nbf.set_snapshot('generate_questions')", "Multipath Consistency for router loopbacks\nValidates multipath consistency between all pairs of loopbacks.\nFinds flows between loopbacks that are treated differently (i.e., dropped versus forwarded) by different paths in the presence of multipath routing.\nInputs\nName | Description | Type | Optional | Default Value\n--- | --- | --- | --- | --- \nmaxTraces | Limit the number of traces returned. 
| int | True | \nInvocation", "result = bf.q.loopbackMultipathConsistency().answer().frame()", "Return Value\nName | Description | Type\n--- | --- | ---\nFlow | The flow | Flow\nTraces | The traces for this flow | Set of Trace\nTraceCount | The total number traces for this flow | int\nRetrieving the flow definition", "result.Flow", "Retrieving the detailed Trace information", "len(result.Traces)\n\nresult.Traces[0]", "Evaluating the first Trace", "result.Traces[0][0]", "Retrieving the disposition of the first Trace", "result.Traces[0][0].disposition", "Retrieving the first hop of the first Trace", "result.Traces[0][0][0]", "Retrieving the last hop of the first Trace", "result.Traces[0][0][-1]" ]
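Because every question above returns a pandas DataFrame whose Traces column holds Trace objects, results can be inspected programmatically instead of one index at a time. A sketch with a hypothetical helper (not part of pybatfish) that tallies dispositions across all returned traces, reusing the bf session and HeaderConstraints from this notebook:

```python
from collections import Counter

def disposition_counts(frame):
    # Tally the disposition of every trace in every row of an answer frame
    # that has a 'Traces' column (traceroute, reachability, ...).
    counts = Counter()
    for traces in frame.Traces:
        for trace in traces:
            counts[trace.disposition] += 1
    return counts

result = bf.q.traceroute(
    startLocation='@enter(as2border1[GigabitEthernet2/0])',
    headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')
).answer().frame()
print(disposition_counts(result))
```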
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
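A common way to turn the reachability question above into a regression check is to search for traffic that should be blocked and assert that nothing comes back. A sketch reusing the notebook's bf session, PathConstraints and HeaderConstraints, and assuming 'FAILURE' is an accepted value for the actions disposition specifier:

```python
# DNS flows from the as2 routers to host1 that fail to be delivered;
# if the snapshot behaves as intended, this frame should be empty.
failed = bf.q.reachability(
    pathConstraints=PathConstraints(startLocation='/as2/'),
    headers=HeaderConstraints(dstIps='host1', applications='DNS'),
    actions='FAILURE'
).answer().frame()

assert failed.empty, 'Unexpectedly failing flows:\n{}'.format(failed.Flow)
```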
wmvanvliet/neuroscience_tutorials
eeg-bci/1. Load EEG data and plot ERP.ipynb
bsd-2-clause
[ "Loading EEG data and plotting an ERP\nWelcome to this IPython notebook. This page is a live interface to a running Python instance, where we create 'cells'. A cell is either some text (which can include images and formulas) or code, in which case we can execute that code by pressing shift+enter. See the notebook documentation for an overview of the functionality of this environment.\nI'm going to assume some basic knowledge about Python (tutorial), Numpy (tutorial) and Matplotlib (tutorial).", "%pylab inline", "The Magic Trick\nIn this tutorial we will do some simple EEG data analysis in order to 'read' a subjects mind. This experiment is playfully called the \"magic trick\". The subject was sitting in front of a screen and was presented with 9 playing cards:", "cards = [\n 'Ace of spades',\n 'Jack of clubs',\n 'Queen of hearts',\n 'King of diamonds',\n '10 of spaces',\n '3 of clubs',\n '10 of hearts',\n '3 of diamonds',\n 'King of spades',\n]", "He picked one of these cards and kept it in his mind. Next, the 9 playing cards would flash one-by-one in a random order across the screen. Each card was presented a total of 30 times. The subject would mentally count the number of times his card would appear on the screen (which was 30 if he was paying attention, we are not interested in the answer he got, it just helps keep the subject focused on the cards).\nIn this tutorial we will analyse the average response to each card. The card that the subject had in mind should produce a larger response than the others.\nThe data used in this tutorial is EEG data that has been bandpass filtered with a 3rd order Butterworth filter with a passband of 0.5-30 Hz. This results in relatively clean looking data. When doing ERP analysis on other data, you will probably have to filter it yourself. Don't do ERP analysis on non-filtered, non-baselined data! Bandpass filtering is covered in the 3rd tutorial.\nThe EEG data is stored on the virtual server you are talking to right now, as a MATLAB file, which we can load by using the SciPy module:", "import scipy.io\nm = scipy.io.loadmat('data/tutorial1-01.mat')\nprint(m.keys())", "The scipy.io.loadmat function returns a dictionary containing the variables stored in the matlab file. Two of them are of interest to us, the actual EEG and the labels which indicate at which point in time which card was presented to the subject.", "EEG = m['EEG']\nlabels = m['labels'].flatten()\n\nprint('EEG dimensions:', EEG.shape)\nprint('Label dimensions:', labels.shape)", "The EEG variable is a Numpy Array containing 7 rows that contain the signal collected from 7 electrodes. The label variable contains the output of our trigger cable, which was used to synchronize the EEG signal with what was happening on the screen. Every time we presented a card on the screen, we send a non-zero value through the trigger cable. The labels variable will therefore contain mostly zeros, but non-zero values at the moments in time we presented a card to the subject. Lets plot the raw EEG data:", "figure(figsize=(15,3))\nplot(EEG.T)", "All channels are drawn on top of each other, which is not convenient. Usually, EEG data is plotted with the channels vertically stacked, an artefact stemming from the days where EEG machines drew on large rolls of paper. Lets add a constant value to each EEG channel before plotting them and some decoration like a meaningful x and y axis. 
I'll write this as a function, since this will come in handy later on:", "from matplotlib.collections import LineCollection\n\ndef plot_eeg(EEG, vspace=100, color='k'):\n '''\n Plot the EEG data, stacking the channels horizontally on top of each other.\n\n Parameters\n ----------\n EEG : array (channels x samples)\n The EEG data\n vspace : float (default 100)\n Amount of vertical space to put between the channels\n color : string (default 'k')\n Color to draw the EEG in\n '''\n \n bases = vspace * arange(7) # vspace * 0, vspace * 1, vspace * 2, ..., vspace * 6\n \n # To add the bases (a vector of length 7) to the EEG (a 2-D Matrix), we don't use\n # loops, but rely on a NumPy feature called broadcasting:\n # http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html\n EEG = EEG.T + bases\n \n # Calculate a timeline in seconds, knowing that the sample rate of the EEG recorder was 2048 Hz.\n samplerate = 2048.\n time = arange(EEG.shape[0]) / samplerate\n \n # Plot EEG versus time\n plot(time, EEG, color=color)\n\n # Add gridlines to the plot\n grid()\n \n # Label the axes\n xlabel('Time (s)')\n ylabel('Channels')\n \n # The y-ticks are set to the locations of the electrodes. The international 10-20 system defines\n # default names for them.\n gca().yaxis.set_ticks(bases)\n gca().yaxis.set_ticklabels(['Fz', 'Cz', 'Pz', 'CP1', 'CP3', 'C3', 'C4'])\n \n # Put a nice title on top of the plot\n title('EEG data')\n\n# Testing our function\nfigure(figsize=(15, 4))\nplot_eeg(EEG)", "And to top it off, lets add vertical lines whenever a card was shown to the subject:", "figure(figsize=(15, 4))\nplot_eeg(EEG)\nfor onset in flatnonzero(labels):\n axvline(onset / 2048., color='r')\n", "As you can see, cards were shown at a rate of 2 per second. \nWe are interested in the response generated whenever a card was shown, so we cut one-second-long pieces of EEG signal that start from the moment a card was shown. These pieces will be named 'trials'. A useful function here is flatnonzero which returns all the indices of an array which contain to a non-zero value. It effectively gives us the time (as an index) when a card was shown, if we use it in a clever way.", "onsets = flatnonzero(labels)\nprint(onsets[:10]) # Print the first 10 onsets\nprint('Number of onsets:', len(onsets))\n\nclasses = labels[onsets]\nprint('Card shown at each onset:', classes[:10])", "Lets create a 3-dimensional array containing all the trials:", "nchannels = 7 # 7 EEG channels\nsample_rate = 2048. # The sample rate of the EEG recording device was 2048Hz\nnsamples = int(1.0 * sample_rate) # one second's worth of data samples\nntrials = len(onsets)\n\ntrials = zeros((ntrials, nchannels, nsamples))\nfor i, onset in enumerate(onsets):\n trials[i, :, :] = EEG[:, onset:onset + nsamples]\n \nprint(trials.shape)", "Lets plot one of the trials:", "figure(figsize=(4, 4))\nplot_eeg(trials[0, :, :], vspace=30)", "Looking at the individual trials is not all that informative. Lets calculate the average response to each card and plot that. 
To get all the trials where a particular card was shown, I use a trick called logical indexing.", "# Lets give each response a different color\ncolors = ['k', 'b', 'g', 'y', 'm', 'r', 'c', '#ffff00', '#aaaaaa']\n\nfigure(figsize=(4,8))\n\n# Plot the mean EEG response to each card, such an average is called an ERP in the literature\nfor i in range(len(cards)):\n # Use logical indexing to get the right trial indices\n erp = mean(trials[classes == i+1, :, :], axis=0)\n plot_eeg(erp, vspace=20, color=colors[i])", "One of the cards jumps out: the one corresponding to the green line. You can see it most clearly at channel Cz around 0.4 seconds. This line corresponds the the 3rd card which turns out to be:", "cards[2]", "Lets try our hand at an algorithm that automatically determines which card was picked by the user. The first step is to make some estimate of the P300 amplitude for each trial. We see the P300 peaks somewhere in time interval from 0.3 to 0.5. Let's take the mean voltage in that time interval as an estimate:", "from_index = int(0.3 * sample_rate)\nto_index = int(0.5 * sample_rate)\np300_amplitudes = mean(mean(trials[:, :, from_index:to_index], axis=1), axis=1)\np300_amplitudes -= min(p300_amplitudes) # Make them all positive\n\n# Plot for each trial the estimate of the P300 amplitude\nfigure(figsize=(15,3))\nbar(range(ntrials), p300_amplitudes)\nxlim(0, ntrials)\nxlabel('trial')\nylabel('P300 amplitude')", "Peaks in the graph above should line up with the times that the chosen card was shown:", "# Plot the times at which the first card was shown\nfigure(figsize=(15,3))\nbar(range(ntrials), classes == 1)\nxlim(0, ntrials)\nylim(-0.2, 1.2)\nxlabel('trial')\nylabel('Card #1 shown?')", "To have some score of how well peaks in P300 amplitude line up with times that the card was shown, we can use Pearson's correlation function:", "from scipy.stats import pearsonr\npearsonr(classes == 1, p300_amplitudes)[0]", "All that's left is to calculate this score for each card, and pick the card with the highest score:", "nclasses = len(cards)\nscores = [pearsonr(classes == i+1, p300_amplitudes)[0] for i in range(nclasses)]\n\n# Plot the scores\nfigure(figsize=(4,3))\nbar(arange(nclasses)+1, scores, align='center')\nxticks(arange(nclasses)+1, cards, rotation=-90)\nylabel('score')\n\n# Pick the card with the highest score\nwinning_card = argmax(scores)\nprint('Was your card the %s?' % cards[winning_card])", "If you want, you can now continue with the next tutorial:\n2. Frequency analysis" ]
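As a cross-check on the correlation-based scoring above, one can also rank the cards directly by the average P300 estimate of their own trials. A sketch reusing the classes, p300_amplitudes and cards variables already defined in this notebook:

```python
import numpy as np

# Mean P300 estimate over the ~30 presentations of each card; the attended
# card should have the largest average amplitude.
mean_scores = np.array([p300_amplitudes[classes == i + 1].mean()
                        for i in range(len(cards))])

for card, score in zip(cards, mean_scores):
    print('{:20s} {:6.3f}'.format(card, score))
print('Best guess:', cards[int(np.argmax(mean_scores))])
```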
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
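For readers unfamiliar with the 'logical indexing' trick used above to pick out the trials of a single card, here is a tiny self-contained illustration of boolean mask indexing in NumPy, independent of the EEG data:

```python
import numpy as np

values = np.array([10, 20, 30, 40, 50])
labels = np.array([1, 2, 1, 3, 1])

mask = labels == 1           # [ True, False, True, False, True ]
print(values[mask])          # [10 30 50]
print(values[mask].mean())   # 30.0
```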
pligor/predicting-future-product-prices
dfa/notebook/.ipynb_checkpoints/dfa_simulations-checkpoint.ipynb
agpl-3.0
[ "DFA robustness simulations\nDominik Krzemiński", "import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport scipy.signal as ss\nimport scipy.stats as st\n\nplt.style.use('ggplot')\n%matplotlib inline", "Detrended Fluctuation Analysis is a method which allows to measure self-affinity properties of time series. It is claimed to be very roboust method for Hurst exponent estimation even for nonstationary signals. It consists of three main steps:\n1) Cumulative sum calculation;\n2) Detrending time series in windows $\\Delta n$ equally distributed on logarithmic scale;\n3) Mean squared residuals $F$ calculation on a set of windows $\\Delta n_i$;\nFinally, to determine DFA exponent one need to fit a line to so-called fluctuation function $F(\\Delta n)$. A slope of the line is our Hurst exponent estimator.\nIn the following simulations we test robustness of the method to short and high amplitude artifacs and signal slicing. We use self-implemented version of DFA algorithm, which may be slower but for testing reasons is more transparent and thus easier to understand.", "def calc_rms(x, scale):\n \"\"\"\n Root Mean Square in windows with linear detrending.\n \n Args:\n -----\n *x* : numpy.array\n one dimensional data vector\n *scale* : int\n length of the window in which RMS will be calculaed\n Returns:\n --------\n *rms* : numpy.array\n RMS data in each window with length len(x)//scale\n \"\"\"\n # making an array with data divided in windows\n shape = (x.shape[0]//scale, scale)\n X = np.lib.stride_tricks.as_strided(x,shape=shape)\n # vector of x-axis points to regression\n scale_ax = np.arange(scale)\n rms = np.zeros(X.shape[0])\n for e, xcut in enumerate(X):\n coeff = np.polyfit(scale_ax, xcut, 1)\n xfit = np.polyval(coeff, scale_ax)\n # detrending and computing RMS of each window\n rms[e] = np.sqrt(np.mean((xcut-xfit)**2))\n return rms\n\ndef dfa(x, scale_lim=[5,9], scale_dens=0.25, show=False):\n \"\"\"\n Detrended Fluctuation Analysis - algorithm with measures power law\n scaling of the given signal *x*.\n More details about algorithm can be found e.g. here:\n Hardstone, R. et al. Detrended fluctuation analysis: A scale-free \n view on neuronal oscillations, (2012).\n \n Args:\n -----\n *x* : numpy.array\n one dimensional data vector\n *scale_lim* = [5,9] : list of lenght 2 \n boundaries of the scale where scale means windows in which RMS\n is calculated. Numbers from list are indexes of 2 to the power\n of range.\n *scale_dens* = 0.25 : float\n density of scale divisions\n *show* = False\n if True it shows matplotlib picture\n Returns:\n --------\n *scales* : numpy.array\n vector of scales\n *fluct* : numpy.array\n fluctuation function\n *alpha* : float\n DFA exponent\n \"\"\"\n # cumulative sum of data with substracted offset\n y = np.cumsum(x - np.mean(x))\n scales = (2**np.arange(scale_lim[0], scale_lim[1], scale_dens)).astype(np.int)\n fluct = np.zeros(len(scales))\n # computing RMS for each window\n for e, sc in enumerate(scales):\n fluct[e] = np.mean(np.sqrt(calc_rms(y, sc)**2))\n # fitting a line to rms data\n coeff = np.polyfit(np.log2(scales), np.log2(fluct), 1)\n if show:\n fluctfit = 2**np.polyval(coeff,np.log2(scales))\n plt.loglog(scales, fluct, 'bo')\n plt.loglog(scales, fluctfit, 'r', label=r'$\\alpha$ = %0.2f'%coeff[0])\n plt.title('DFA')\n plt.xlabel(r'$\\log_{10}$(time window)')\n plt.ylabel(r'$\\log_{10}$<F(t)>')\n plt.legend()\n plt.show()\n return scales, fluct, coeff[0]\n\n\ndef power_law_noise(n, alpha, var=1):\n '''\n Generale power law noise. 
\n \n Args:\n -----\n *n* : int\n number of data points\n *alpha* : float\n DFA exponent\n *var* = 1 : float\n variance\n Returns:\n --------\n *x* : numpy.array\n generated noisy data with exponent *alpha*\n\n Based on:\n N. Jeremy Kasdin, Discrete simulation of power law noise (for\n oscillator stability evaluation)\n '''\n # computing standard deviation from variance\n stdev = np.sqrt(np.abs(var))\n beta = 2*alpha-1\n hfa = np.zeros(2*n)\n hfa[0] = 1\n for i in range(1,n):\n hfa[i] = hfa[i-1] * (0.5*beta + (i-1))/i\n # sample white noise\n wfa = np.hstack((-stdev +2*stdev * np.random.rand(n), np.zeros(n)))\n fh = np.fft.fft(hfa)\n fw = np.fft.fft(wfa)\n fh = fh[1:n+1]\n fw = fw[1:n+1]\n ftot = fh * fw\n # matching the conventions of the Numerical Recipes\n ftot = np.hstack((ftot, np.zeros(n-1)))\n x = np.fft.ifft(ftot) \n return np.real(x[:n])\n", "Firstly let's just test our implemetation on randomly generated power-law data.", "n = 2**12\ndfa_alpha = 0.7\nx = power_law_noise(n, dfa_alpha)\nscales, fluct, esta = dfa(x, show=1)\nprint(\"DFA exponent {}\".format(esta))", "We got acceptable estimation of the initial value of $\\alpha$=0.7.\nSimulation 1: artifacts\nNow we are ready to perform the first simulation. In biomedical signals (EEG in particular) many high amplitude artifacts appear. Those can be caused by body movements, eyes blinking or just by recording device. Typically, in most of the studies researchers inspect signals visually and remove parts of them when neccessary. Although some more sophisticated methods exist, this is still the most common choice giving the best efficiency. However, because DFA is considered to be valid also for non-stationary time series we could take an adventage of that property. Beforehand let's test it if it is true.\nFirst of all, we need some model of signal artifacs.", "mr = ss.morlet(100, w=0.9, s=0.3)\nplt.plot(mr.real)", "Artifacts look very often as a big unexpected peak with much higher amplitude than the rest of the signal. I decided to model it as a Morlet wavelet with low frequency. 
I multiply part of the signal by that shape with some arbitrarly big amplitude.\nThe picture below shows an example of signal with artifact.", "x = power_law_noise(n, dfa_alpha)\nplt.figure(figsize=(9,7))\nplt.subplot(211)\nplt.plot(x)\nplt.title(\"Original signal\")\nplt.ylim([-2.3,2.3])\nncut = 500\nidx = 400\nmr = ss.morlet(ncut, w=1, s=0.3)\nx[idx:idx+ncut] *= 10*mr.real\nplt.subplot(212)\nplt.plot(x)\nplt.ylim([-2.3,2.3])\nplt.xlabel('time')\nplt.title(\"Signal with artifact\")\nscales, fluct, esta = dfa(x)\nprint(\"DFA exponent {}\".format(esta))", "Now we perform bootstrapping, so in principle repeat such an operation Nrep times adding artifacts in random places with random amplitudes and lengths.", "Nrep = 1000 # how many resamplings\nx_down, x_top = 400, 3500 # range of artifacts beginnings\nsig_amp, mu_amp = 3.5, 10 # amplitude parameters (to random Gauss generator)\nsig_ncut, mu_ncut = 100, 500 # length of the artifact\n\ndfavec = np.zeros(Nrep)\nfor i in range(Nrep):\n if i%10==0: print(i)#, end=' ')\n x = power_law_noise(n, dfa_alpha)\n idx = np.random.randint(x_down, x_top)\n ncut = int(np.random.randn()*sig_ncut+mu_ncut)\n mr = ss.morlet(ncut, w=np.random.randn()*0.1+1, s=np.random.randn()*0.1+0.3)\n amp = np.random.randn()*sig_amp+mu_amp\n if idx+ncut-x.shape[0] > 0: idx = x.shape[0]-ncut-1 # checks if idxs are in range of x\n x[idx:idx+ncut] *= amp*mr.real\n scales, fluct, estalpha = dfa(x)\n dfavec[i] = estalpha", "As a result we get a histogram with confidence level values marked by red dashed lines and actual value marked as a purple line. We see that we cannot reject a null-hypothesis that artifacts (those generated as above) don't have any impact on DFA exponent estimation.", "alpha = 0.05\nv1 = st.scoreatpercentile(dfavec, 0.5*alpha*100)\nv2 = st.scoreatpercentile(dfavec, 100-0.5*alpha*100)\nplt.figure(figsize=(9,6))\nplt.hist(dfavec, color='#57aefc')\nplt.axvline(v1, color='r', linestyle='--')\nplt.axvline(v2, color='r', linestyle='--')\nplt.axvline(dfa_alpha, color='m')\nplt.ylabel('Counts')\nplt.xlabel('DFA-exp')\nplt.title('Histogram - artifacts')\nplt.show()", "Simulation 2: slicing\nIn the second simulation we are going to check what happens if we slice the signal and join two pieces together. 
Does it affect DFA value?\nAs it happened before firstly we consider only signle case.", "n = 2**13\nx = power_law_noise(n, dfa_alpha)\nplt.figure(figsize=(9,7))\nplt.subplot(211)\nplt.plot(x)\nplt.title(\"Original signal\")\nplt.subplot(212)\nidx = 1400\ngap_width = 400\nx_c = np.concatenate((x[:idx],x[idx+gap_width:]))\nplt.plot(x_c)\nplt.xlabel('time')\nplt.title(\"Sliced signal\")\nscales, fluct, estaalpha = dfa(x_c)\nprint(\"DFA exponent {}\".format(estaalpha))", "And now we test it by bootstrapping.", "Nrep = 1000 # how many resamplings\nx_down, x_top = int(0.1*n), int(0.9*n) # range of slice\nsig_gw, mu_gw = 100, 300 # gap width\n\ngap_width = 200\n\ndfavec = np.zeros(Nrep)\nfor i in range(Nrep):\n if i%10==0: print(i, end=\" \")\n x = power_law_noise(n, dfa_alpha)\n idx = np.random.randint(x_down, x_top)\n gap_width = int(np.random.randn()*sig_gw+mu_gw)\n x_c = np.concatenate((x[:idx],x[idx+gap_width:]))\n scales, fluct, estalpha = dfa(x)\n dfavec[i] = estalpha", "Once again the initial value is in between confidence intervals so we can infer that slicing has no effect on DFA estimation.", "alpha = 0.05\nv1 = st.scoreatpercentile(dfavec, 0.5*alpha*100)\nv2 = st.scoreatpercentile(dfavec, 100-0.5*alpha*100)\nplt.figure(figsize=(9, 6))\nplt.hist(dfavec, color='#57aefc')\nplt.axvline(v1, color='r', linestyle='--')\nplt.axvline(v2, color='r', linestyle='--')\nplt.axvline(dfa_alpha, color='m')\nplt.ylabel('Counts')\nplt.xlabel('DFA-exp')\nplt.title('Histogram - slicing')\nplt.show()" ]
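A quick sanity check of the dfa implementation, beyond the single alpha = 0.7 example above, is to feed it signals whose scaling exponents are known: uncorrelated white noise should give an exponent close to 0.5. A sketch reusing the dfa and power_law_noise functions defined in this notebook:

```python
import numpy as np

np.random.seed(0)

# White (uncorrelated) noise: the DFA exponent should be near 0.5.
_, _, alpha_white = dfa(np.random.randn(2**12))
print('white noise alpha ~', round(alpha_white, 2))

# Generated power-law noise: the estimate should track the requested exponent.
for target in (0.5, 0.7, 0.9):
    _, _, alpha_hat = dfa(power_law_noise(2**12, target))
    print('target', target, '-> estimated', round(alpha_hat, 2))
```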
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
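The two bootstrap histograms above duplicate the percentile-based confidence-interval code. A small hypothetical helper, using only scipy.stats (imported in the notebook as st), could factor that out:

```python
import scipy.stats as st

def bootstrap_ci(samples, alpha=0.05):
    # Percentile bootstrap confidence interval at level 1 - alpha.
    lower = st.scoreatpercentile(samples, 0.5 * alpha * 100)
    upper = st.scoreatpercentile(samples, 100 - 0.5 * alpha * 100)
    return lower, upper

# usage with the resampled exponents of either simulation:
# v1, v2 = bootstrap_ci(dfavec)
```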
ivergara/science_notebooks
Apendix.ipynb
gpl-3.0
[ "Apendix\nHere we compute the transition matrix elements of a $d^4$ system like the one explored in chapter 5 for the ruthenate Ca$_2$RuO$_4$ (CRO). We compute the possible transitions of the type $d^4d^4\\rightarrow d^3d^5$, applying some knowledge regarding the allowed ground and excited states of the CRO system. The transition matrix elements obtained are then used to compute the amplitude of certain transitions corresponding to the low energy optical excitations found in CRO.\nWith this in mind, we first explore a small example to test the logic of the computation. Then, we proceed to calculate the transition matrix elements of interest.\nTransitions for a two level and two site system with an electron on each site\nFor a two sites $(i,j)$ and two levels $\\alpha, \\beta$ system with one electron per site, the initial states are $d_i^1d_j^1$ configurations where one electron sits on the $\\alpha$ level on each site which are $4$ in total. The final states are $d_i^0d_j^2$ configurations amounting to $5$ in total.\nThe representation of the states used is as follows: each level/orbital is composed of two elements of an array, one for each spin (up, down). This is to say that each element of the array corresponds to a creation operator $c^\\dagger_{\\alpha\\sigma}$ when viewed from a second quantization notation. Thus, the first two elements of the array correspond to the $+$ and $-$ spin $\\alpha$ level and the following two to the $+$ and $-$ spin $\\beta$ level. To build the representation of the states of the combined two sites, we concatenate two single site arrays. This leads to an array where the first 4 elements correspond to the $i$ site and the second 4 ones correspond to the $j$ one.\nFor example, the state with spin up on both sites at the $\\alpha$ level, equivalent to $c^\\dagger_{i\\alpha\\uparrow}c^\\dagger_{j\\alpha\\uparrow}$, is [1 0 0 0 1 0 0 0]. As a second example, we take the configuration where there is one spin up in each level at site $j$ whereas the site $i$ is empty [0 0 0 0 1 0 1 0]. For the remainder of this introduction, we will label those two configurations as initial and final, respectively.", "initial = [1, 0, 0, 0, 1, 0, 0, 0]\nfinal = [0, 0, 0, 0, 1, 0, 1, 0]", "To compute if an optical transition between two states is possible or not, we first get some libraries to make this easier.", "# Importing necessary extensions\nimport numpy as np\nimport itertools\nimport functools\nimport operator\n\n# The use of type annotations requires Python 3.6 or newer\nfrom typing import List", "The question is, whether there is a transition matrix element between the aforementioned initial and final states. We can easily anser that with a yes, since the receiving level $\\beta,+$ is empty in the initial state and no spin flip is involved when moving the particle from $\\alpha,+$ to $\\beta,+$. Thus, the question now is how to compute it in a systematic way. \nWe can start by taking the XOR (exclusive or) operation between the constructed representations of the states. This means that we check where the changes between the states in question are located between the two states in question. Then, we check the positions (index) where we get a 1, and if we find that both are odd or both are even, we can say that the transition is allowed. Whereas, if one is in odd and the other in even positions it is not allowed as it would imply a spin flip in the transition. 
This is equivalent as to write $<f|c^\\dagger_{i\\alpha'\\sigma}c^\\dagger_{j\\beta'\\sigma}|i>$.\nNow we can go step by step codifying this procedure. First checking for the XOR operation and then asking where the two states differ.", "# looking for the positions/levels with different occupation\nchanges = np.logical_xor(initial, final)\n# obtaining the indexes of those positions\nnp.nonzero(changes)[0].tolist() ", "We can see that we get a change in positions $0$ and $6$ which correspond to $\\alpha,+$ and $\\beta,+$ in site $i$ and $j$, respectively. Now we apply modulo 2, which will allow us to check if the changes are in even or odd positions mapping even positions to $0$ whereas odd positions to $1$. Thus, if both are even or odd there will be just one unique element in the list otherwise there will be two unique elements.", "modulo = 2\nnp.unique(np.remainder(np.nonzero(changes), modulo)).size == 1", "Thus, in this case of chosen initial and finals states, the transition is allowed since both are even. We can wraps all of this logic in a function.", "def is_allowed(initial: List[int], final: List[int]) -> bool:\n \"\"\"\n Given an initial and final states as represented in a binary\n list, returns if it is allowed considering spin conservation.\n \"\"\"\n return np.unique(\n np.remainder(\n np.nonzero(\n np.logical_xor(initial,final)), 2)).size == 1", "Now we have a function that tells us if between two states an optical transition is possible or not. To recapitulate, we can recompute our previous case and then with a different final state that is not allowed since it involves a spin flip, e.g., [0 0 0 0 0 1 1 0].", "is_allowed(initial, final)\n\nis_allowed(initial, [0, 0, 0, 0, 0, 1, 1, 0])", "With this preamble, we are equiped to handle more complex cases. Given the chosen computational representation for the states, the normalization coefficients of the states are left out. Thus, one has to take care to keep track of them when constructing properly the transition matrix element in question later on.\nCa$_2$RuO$_4$\nLet us first explore the $d^4$ system. In a low spin $d^4$ system, we have only the t$_{2g}$ orbitals ($xy$, $yz$, $xz$) active which leads to a 6 elements representation for a site. Two neighboring states involved in a transition are concatenateed into a single array consisting of 12 elements.\nFor this, we create the function to generate the list representation of states given an amount of electrons and levels.", "def generate_states(electrons: int, levels: int) -> List[List[int]]:\n \"\"\"\n Generates the list representation of a given number of electrons \n and levels (degeneracy not considered).\n \"\"\"\n # create an array of length equal to the amount of levels \n # with an amount of 1's equal to the number of electrons \n # specified which will be used as a seed/template\n seed = [1 if position < electrons else 0 \n for position in range(levels)]\n # taking the seed state, we generate all possible permutations \n # and remove duplicates using a set operation\n generated_states = list(set(itertools.permutations(seed)))\n generated_states.sort(reverse=True)\n return generated_states", "With this we can generate states of 3, 4, and 5 electrons in a 3 level system with degeneracy 2 meaning 6 levels in total.", "states_d3 = generate_states(3,6)\nstates_d4 = generate_states(4,6)\nstates_d5 = generate_states(5,6)", "We can consider first the $d^4$ states and take a look at them.", "states_d4", "It is quite a list of generated states. 
But from this whole list, not all states are relevant for the problem at hand. This means that we can reduce the amount of states beforehand by applying the physical constrains we have. \nFrom all the $d^4$ states, we consider only those with a full $d_{xy}$ orbital and those which distribute the other two electrons in different orbitals as possible initial states for the Ca2RuO4 system. In our representation, this means only the states that have a 1 in the first two entries and no double occupancy in $zx$ or $yz$ orbitals.", "possible_states_d4 = [\n # select states that fulfill\n list(state) for state in states_d4\n # dxy orbital double occupancy\n if state[0]==1 and state[1]==1\n # dzx/dyz orbital single occupancy\n and state[2] is not state[3]\n ]\npossible_states_d4", "We obtain 4 different $d^4$ states that fullfill the conditions previously indicated. From the previous list, the first and last elements correspond to states with $S_z=\\pm1$ whereas the ones in the middle correspond to the two superimposed states for the $S=0$ state, namely, a magnon. These four states, could have been easily written down by hand, but the power of this approach is evident when generating and selecting the possible states of the $d^3$ configuration.\nFor the $d^3$ states, we want at least those which keep one electron in the $d_{xy}$ orbital since we know that the others states are not reachable with one movement as required by optical spectroscopy.", "possible_states_d3 = [list(state) for state in states_d3 \n if state[0]==1 # xy up occupied\n or state[1]==1] # xy down occupied\npossible_states_d3", "In the case of the $d^5$ states, since our ground state has a doule occupied $d_{xy}$ orbital then it has to stay occupied.", "possible_states_d5 = [list(state) for state in states_d5 \n # xy up down occupied\n if state[0]==1 and state[1]==1 \n ]\npossible_states_d5 ", "We could generate all $d^3d^5$ combinations and check how many of them there are.", "def combine_states(first: List[List[int]], \n second: List[List[int]]) -> List[List[int]]:\n \"\"\"\n Takes two lists of list representations of states and returns \n the list representation of a two-site state.\n \"\"\"\n # Producing all the possible final states. \n # This has to be read from bottom to top.\n # 3) the single site representations are combined \n # into one single two-site representation\n # 2) we iterate over all the combinations produced\n # 1) make the product of the given first and second \n # states lists\n final_states = [\n functools.reduce(operator.add, combination) # 3)\n for combination # 2)\n in itertools.product(first, second) # 1)\n ]\n \n final_states.sort(reverse=True)\n \n return final_states\n\nprint(\"The number of combined states is: \", \n len(combine_states(possible_states_d3,possible_states_d5)))", "We already saw in the previous section how we can check if a transition is allowed in our list codification of the states. 
Here we will make it a function slightly more complex to help us deal with generating final states.", "def label(initial, final, levels, mapping):\n \"\"\"Helper function to label the levels/orbitals involved.\"\"\"\n changes = np.nonzero(np.logical_xor(initial, final))\n positions = np.remainder(changes, levels)//2\n return f\"{mapping[positions[0][0]]} and {mapping[positions[0][1]]}\"\n\ndef transition(initial: List[int], \n final: List[List[int]], \n debug = False) -> None:\n \"\"\"\n This function takes the list representation of an initial double \n site state and a list of final d3 states of intrest. \n Then, it computes if the transition from the given initial state \n to a compounded d3d5 final states are possible. \n The d5 states are implicitly used in the function from those \n already generated and filtered.\n \"\"\"\n \n def process(final_states):\n # We iterate over all final states and test whether the \n # transition from the given initial state is allowed\n for state in final_states:\n allowed = is_allowed(initial, state)\n if allowed:\n labeled = label(initial, \n state, \n 6, \n {0: \"xy\", 1: \"xz\", 2: \"yz\"})\n print(f\" final state {state} allowed \\\nbetween {labeled}.\")\n else:\n if debug:\n print(f\" final state {state} not allowed.\")\n \n d5 = list(possible_states_d5)\n print(\"From initial state {}\".format(initial))\n print(\"d3d5\")\n process(combine_states(final, d5))\n print(\"d5d3\")\n process(combine_states(d5, final)) ", "With this, we can now explore the transitions between the different initial states and final states ($^4A_2$, $^2E$, and $^2T_1$ multiplets for the $d^3$ sector). Concerning the $d^4$ states, as explained in chapter 5, there is the possibility to be in the $S_z=\\pm1$ or $S_z=0$. We will cover each one of them in the following.\nWhat we are ultimately interested in is in the intensities of the transitions and thus we need the amplitudes since $I\\sim\\hat{A}^2$. We will go through each multiplet covering the ground states consisting of only $S_z=\\pm1$ and then with the $S_z=0$.\n$^4A_2$\nFirst, we will deal with the $^4A_2$ multiplet. The representations for the $|^4A_2,\\pm3/2>$ states are given by", "A2_32 = [[1,0,1,0,1,0]] # 4A2 Sz=3/2\nA2_neg_32 = [[0,1,0,1,0,1]] # 4A2 Sz=-3/2", "whereas the ones for the$|^4A_2,\\pm1/2>$", "A2_12 = [[0,1,1,0,1,0], [1,0,0,1,1,0], [1,0,1,0,0,1]] # 4A2 Sz=1/2\nA2_neg_12 = [[1,0,0,1,0,1], [0,1,1,0,0,1], [0,1,0,1,1,0]] # 4A2 Sz=-1/2", "Notice that the prefactors and signs are missing from this representation, and have to be taken into account when combining all the pieces into the end result.\n$S_z=\\pm1$\nStarting with the pure $S_z=\\pm1$ as initial states, meaning $d_{\\uparrow}^4d_{\\uparrow}^4$ (FM) and $d_{\\uparrow}^4d_{\\downarrow}^4$ (AFM), we have the following representations:", "FM = [1,1,1,0,1,0,1,1,1,0,1,0]\nAFM_up = [1,1,1,0,1,0,1,1,0,1,0,1]\nAFM_down = [1,1,0,1,0,1,1,1,1,0,1,0]", "Handling the ferromagnetic ordering first, the allowed transitions from the initial state into the $|^4A_2,3/2>$ state are", "transition(FM, A2_32)", "Comparing the initial and final states representations and considering the $|^4A_2,3/2>$ prefactor, we obtain that there are two possible transitions with matrix element $t_{xy,xz}$ and $t_{xy,yz}$. 
Each one is allowed twice from swapping the positions between $d^3$ and $d^5$.\nThen, for the $|^4A_2,\\pm1/2>$ states", "transition(FM, A2_12)\n\ntransition(FM, A2_neg_12)", "Thus, for the $|^4A_2,\\pm1/2>$ states, there is no allowed transition starting from the FM initial ground state.\nRepeating for both $^4A_2$ but starting from the antiferromagnetic state ($d^4_\\uparrow d^4_\\downarrow$) initial state we get", "transition(AFM_up, A2_32)\n\ntransition(AFM_up, A2_12)\n\ntransition(AFM_up, A2_neg_12)", "We see that the AFM initial ground state has no transition matrix element for the $|^4A_2,3/2>$ state. Whereas transitions involving the $|^4A_2,\\pm1/2>$ state are allowed. Once again, checking the prefactors for the multiplet and the initial ground state we get a transition matrix element of $t_{xy,xz}/\\sqrt{3}$ and $t_{xy,yz}/\\sqrt{3}$, twice each.\nThese are the same results as could have been obtained using simple physical arguments.\n$S_z=0$\nThe case of $S_z=0$ is handled similarly, the difference is that we get more terms to handle. We start with the $d_0^4d_\\uparrow^4$ initial state and the $|^4A_2,\\pm3/2>$ states. Since the $d_0^4$ is a superposition of two states, we will split it in the two parts.\nBeing $|f>$ any valid final state involving a combination (tensor product) of a $|d^3>$ and a $|d^5>$ states, and being $|i>$ of the type $|d^4_0>|d^4_\\uparrow>$ where $|d^4_0>=|A>+|B>$, then the matrix element $<f|\\hat{t}|i>$ can be split as $<f|\\hat{t}|A>|d^4_\\uparrow>+<f|\\hat{t}|B>|d^4_\\uparrow>)$.", "S0_1 = [1, 1, 1, 0, 0, 1] # |A>\nS0_2 = [1, 1, 0, 1, 1, 0] # |B>\n\nd_zero_down = [1, 1, 0, 1, 0, 1]\nd_zero_up = [1, 1, 1, 0, 1, 0]", "Thus, we append the $d^4_\\uparrow$ representation to each part of the $d^4_0$ states. Then, checking for the transitions into the $|^4A_2,\\pm3/2>$ $d^3$ state we get", "transition(S0_1 + d_zero_up, A2_32)\ntransition(S0_2 + d_zero_up, A2_32)\nprint(\"\\n\\n\")\ntransition(S0_1 + d_zero_up, A2_neg_32)\ntransition(S0_2 + d_zero_up, A2_neg_32)", "Collecting the terms we get that for $|^4A_2, 3/2>$ there is no transitions into a $|d^3>|d^5>$ final state but there are transitions into two different $|d^5>|d^3>$ final states, one for each of the $|A>$ and $|B>$ parts. Thus, considering the numerical factors of the involved states, the amplitude in this case is $\\frac{1}{\\sqrt{2}}t_{xy,xz}$ and $\\frac{1}{\\sqrt{2}}t_{xy,yz}$. In this case, the states involved in $|^4A_2, -3/2>$ do not show any allowed transition.\nNow, we can perform the same procedure but considering the $d^4_\\downarrow$ state.", "transition(S0_1 + d_zero_down, A2_32)\ntransition(S0_2 + d_zero_down, A2_32)\nprint(\"\\n\\n\")\ntransition(S0_1 + d_zero_down, A2_neg_32)\ntransition(S0_2 + d_zero_down, A2_neg_32)", "Here, we observe the same situation than before but swapping the roles between the $|^4A_2,\\pm3/2>$ states. This means that the contribution of the $d^0 d^4_\\uparrow$ is the same as the $d^0 d^4_\\downarrow$ one.\nSimilarly, we can start from the $d^4_\\uparrow d^0$ or the $d^4_\\downarrow d^0$ which will also swap from transitions involving a $|d^5>|d^3>$ state to the $|d^3>|d^5>$ ones. 
The explicit computation is shown next for completeness.", "transition(d_zero_up + S0_1, A2_32)\ntransition(d_zero_up + S0_2, A2_32)\nprint(\"\\n\\n\")\ntransition(d_zero_up + S0_1, A2_neg_32)\ntransition(d_zero_up + S0_2, A2_neg_32)\nprint(\"\\n\\n\")\ntransition(d_zero_down + S0_1, A2_32)\ntransition(d_zero_down + S0_2, A2_32)\nprint(\"\\n\\n\")\ntransition(d_zero_down + S0_1, A2_neg_32)\ntransition(d_zero_down + S0_2, A2_neg_32)", "Following the same procedure for the $|^4A_2, 1/2>$ states and $d^4_0d^4_\\uparrow$ ground state", "transition(S0_1 + d_zero_up, A2_12)\ntransition(S0_2 + d_zero_up, A2_12)", "Here we get some possible transitions to final states of interest. Here, we have to remember that the \"receiving\" $d3$ multiplet has three terms, which have to be added if present. For the $|d^3>|d^5>$ case there are two allowed transitions into $d^5$ states involving $t_{xy,xz}$ and $t_{xy,yz}$ for $|A>$ and $|B>$. From $|A>$ and $|B>$ we find computed terms that correspond to the same $d^5$ final state that have to be added.\nThus, considering the $1/\\sqrt{2}$ and $1/\\sqrt{3}$ prefactors for the states, each term has a factor of $1/\\sqrt{6}$. Then, we obtain the total contributions $\\sqrt{\\frac{2}{3}}t_{xy,xz}$ and $\\sqrt{\\frac{2}{3}}t_{xy,yz}$ for transitions into $d^5_{xz/xy,\\downarrow}$ in the $|d^3>|d^5>$ case, whereas for the $|d^5>|d^3>$ one, we obtain $\\sqrt{\\frac{1}{6}}t_{xy,xz}$ and $\\sqrt{\\frac{1}{6}}t_{xy,yz}$ for the final states involving $d^5_{xz,\\uparrow}$ and $d^5_{xz,\\uparrow}$ states, respectively.\nAnd for the $|^4A_2, -1/2>$ state", "transition(S0_1 + d_zero_up, A2_neg_12)\ntransition(S0_2 + d_zero_up, A2_neg_12)", "there is no transition found.\nWe repeat for $|d^4_\\uparrow d^4_0>$", "transition(d_zero_up + S0_1, A2_12)\ntransition(d_zero_up + S0_2, A2_12)\nprint(\"\\n\\n\")\ntransition(d_zero_up + S0_1, A2_neg_12)\ntransition(d_zero_up + S0_2, A2_neg_12)", "Which is the same situation than before but swapping the position of the contributions as we already saw for the $|^4A_2, 3/2>$ case. For completeness we show the situation with $d^4_\\downarrow$ as follows.", "transition(S0_1 + d_zero_down, A2_12)\ntransition(S0_2 + d_zero_down, A2_12)\nprint(\"\\n\\n\")\ntransition(d_zero_down + S0_1, A2_12)\ntransition(d_zero_down + S0_2, A2_12)\nprint(\"\\n\\n\")\ntransition(S0_1 + d_zero_down, A2_neg_12)\ntransition(S0_2 + d_zero_down, A2_neg_12)\nprint(\"\\n\\n\")\ntransition(d_zero_down + S0_1, A2_neg_12)\ntransition(d_zero_down + S0_2, A2_neg_12)", "Continuing with the $d^4_0d^4_0$ the situation gets more complicated since $<f|\\hat{t}|d^4_0>|d^4_0>$ can be split as follows $<f|\\hat{t}(|A>+|B>)(|A>+|B>)$ which gives 4 terms labeled $F$ to $I$. 
Thus, we construct the four combinations for the initial state and calculate each one of them to later sum them up.", "F = S0_1 + S0_1\nG = S0_1 + S0_2\nH = S0_2 + S0_1\nI = S0_2 + S0_2", "First dealing with the $|^4A_2,\\pm 3/2>$ states for the $d^3$ sector.", "transition(F, A2_32)\ntransition(G, A2_32)\ntransition(H, A2_32)\ntransition(I, A2_32)\n\ntransition(F, A2_neg_32)\ntransition(G, A2_neg_32)\ntransition(H, A2_neg_32)\ntransition(I, A2_neg_32)", "No transitions from the $d^4_0d^4_0$ state to $|^4A_2,\\pm3/2>$.\nAnd now repeating the same strategy for the $|^4A_2,1/2>$ state", "transition(F, A2_12)\ntransition(G, A2_12)\ntransition(H, A2_12)\ntransition(I, A2_12)", "Here we have terms for both $|d^3>|d^5>$ and $|d^5>|d^3>$ and for each component of the initial state which can be grouped into which $d^5$ state they transition into. Terms pairs $F-H$ and $G-I$ belong together involving the $d^5_{xz\\downarrow}$ and $d^5_{yz\\downarrow}$ states, respectively.\nAdding terms corresponding to $d^3$ multiplet participating and considering the prefactors, we get the terms $\\frac{1}{\\sqrt{3}}t_{xy,xz}$ and $\\frac{1}{\\sqrt{3}}t_{xy,yz}$.\nAnd for completeness the $|^4A_2,-1/2>$ state", "transition(F, A2_neg_12)\ntransition(G, A2_neg_12)\ntransition(H, A2_neg_12)\ntransition(I, A2_neg_12)", "For $|^4A_2,-1/2>$ states we obtain the same values than for $|^4A_2,1/2>$ but involving the other spin state.\nNow we have all the amplitudes corresponding to transitions into the $^4A_2$ multiplet enabled by the initial states involving $S_z=0$, namely, $\\uparrow 0+ 0\\uparrow+ \\downarrow 0 + 0\\downarrow + 00$.\n$|^2E,a/b>$\nFirst we encode the $|^2E,a>$ multiplet and check the $S_z=\\pm1$ ground states", "Ea = [[0,1,1,0,1,0], [1,0,0,1,1,0], [1,0,1,0,0,1]]\ntransition(AFM_down, Ea)\ntransition(AFM_up, Ea)\ntransition(FM, Ea)", "For the $|^2E,a>$ multiplet, only transitions from the AFM ground state are possible. Collecting the prefactors we get that the transition matrix element in $-\\sqrt{2/3}t_{xy,xz}$ and $-\\sqrt{2/3}t_{xy,yz}$ as could be easily checked by hand.\nThen, for the $|^2E,b>$ multiplet", "Eb = [[1,0,1,0,0,1], [1,0,0,1,1,0]]\ntransition(AFM_down, Eb)\ntransition(AFM_up, Eb)\ntransition(FM, Eb)", "From the $S=\\pm1$ initial states, no transitions possible to Eb.\nWe follow with the situation when considering the $S=0$. In this case, each initial state is decomposed in two parts resulting in 4 terms.", "transition(S0_1 + S0_1, Ea)\ntransition(S0_1 + S0_2, Ea)\ntransition(S0_2 + S0_1, Ea)\ntransition(S0_2 + S0_2, Ea)", "Each one of the combinations is allowed, thus considering the prefactors of the $S_0$ and $|^2E,a>$ we obtain $\\sqrt{\\frac{2}{3}}t_{xy,xz}$ and $\\sqrt{\\frac{2}{3}}t_{xy,yz}$.\nDoing the same for $|^2E,b>$", "transition(S0_1 + S0_1, Eb)\ntransition(S0_1 + S0_2, Eb)\ntransition(S0_2 + S0_1, Eb)\ntransition(S0_2 + S0_2, Eb)", "Adding all the contributions of the allowed terms we obtain, that due to the - sign in the $|^2E,b>$ multiplet, the contribution is 0.\nWe sill have to cover the ground state of the kind $d_0^4d_\\uparrow^4$. As done previously, we again will split the $d_0^4$ in the two parts.", "S0_1 = [1, 1, 1, 0, 0, 1]\nS0_2 = [1, 1, 0, 1, 1, 0]", "and then we add the $d^4_\\uparrow$ representation to each one. 
Thus, for the $|^2E, Ea>$ $d^3$ multiplet we get", "transition(S0_1 + d_zero_up, Ea)\ntransition(S0_2 + d_zero_up, Ea)\nprint(\"\\n\\n\")\ntransition(d_zero_up + S0_1, Ea)\ntransition(d_zero_up + S0_2, Ea)", "Here, both parts of the $S_z=0$ state contribute. Checking the prefactors for $S_z=0$ ($1/\\sqrt{2}$) and $|^2E, Ea>$ ($1/\\sqrt{6}$) we get a matrix element $\\sqrt{\\frac{2}{3}}t_{xy/xz}$.\nFollowing for transitions into the $|^2E, Eb>$", "transition(S0_1 + d_zero_up, Eb)\ntransition(S0_2 + d_zero_up, Eb)\nprint(\"\\n\\n\")\ntransition(d_zero_up + S0_1, Eb)\ntransition(d_zero_up + S0_2, Eb)", "$|^2T_1,+/->$\nThis multiplet has 6 possible forms, $\\textit{xy}$, $\\textit{xz}$, and $\\textit{yz}$ singly occupied\nFirst we encode the $|^2T_1,+>$ multiplet with singly occupied $\\textit{xy}$", "T1_p_xy = [[1,0,1,1,0,0], [1,0,0,0,1,1]]\ntransition(AFM_down, T1_p_xy)\ntransition(AFM_up, T1_p_xy)\ntransition(FM, T1_p_xy)", "And for the $|^2T_1,->$", "T1_n_xy = [[0,1,1,1,0,0], [0,1,0,0,1,1]]\ntransition(AFM_down, T1_n_xy)\ntransition(AFM_up, T1_n_xy)\ntransition(FM, T1_n_xy)", "In this case, there is no possible transition to states with a singly occupied $\\textit{xy}$ orbital from the $\\textit{xy}$ ordered ground state.", "T1_p_xz = [[1,1,1,0,0,0], [0,0,1,0,1,1]]\ntransition(AFM_up, T1_p_xz)\ntransition(FM, T1_p_xz)\n\nT1_p_yz = [[1,1,0,0,1,0], [0,0,1,1,1,0]]\ntransition(AFM_up, T1_p_yz)\ntransition(FM, T1_p_yz)", "We can see that the transitions from the ferromagnetic state are forbidden for the $xy$ orbitally ordered ground state for both $|^2T_1, xz\\uparrow>$ and $|^2T_1, yz\\uparrow>$ while allowing for transitions with amplitudes: $t_{yz,xz}/\\sqrt{2}$, $t_{xz,xz}/\\sqrt{2}$, $t_{xz,yz}/\\sqrt{2}$, and $t_{yz,yz}/\\sqrt{2}$.\nFor completeness, we show the transitions into the states $|^2T_1, xz\\downarrow>$ and $|^2T_1, yz\\downarrow>$ from the $\\uparrow\\uparrow$ and $\\uparrow\\downarrow$ ground states.", "T1_n_xz = [[1,1,0,1,0,0], [0,0,0,1,1,1]]\ntransition(AFM_up, T1_n_xz)\ntransition(FM, T1_n_xz)\n\nT1_n_yz = [[1,1,0,0,0,1], [0,0,1,1,0,1]]\ntransition(AFM_up, T1_n_yz)\ntransition(FM, T1_n_yz)", "S=0\nNow the challenge of addressing this multiplet when considering the $S=0$ component in the ground state.", "S0_1 = [1, 1, 1, 0, 0, 1]\nS0_2 = [1, 1, 0, 1, 1, 0]\n\nT1_p_xz = [[1,1,1,0,0,0], [0,0,1,0,1,1]]\nT1_p_yz = [[1,1,0,0,1,0], [0,0,1,1,1,0]]", "First, we calculate for the $d^4_0d^4_\\uparrow$ ground state. 
Again the $d^4_0$ state is split in two parts.", "transition(S0_1 + d_zero_up, T1_p_xz)\ntransition(S0_2 + d_zero_up, T1_p_xz)\nprint(\"\\n\\n\")\ntransition(S0_1 + d_zero_up, T1_p_yz)\ntransition(S0_2 + d_zero_up, T1_p_yz)", "And for $d^4_0d^4_\\downarrow$", "transition(S0_1 + d_zero_down, T1_p_xz)\ntransition(S0_2 + d_zero_down, T1_p_xz)\nprint(\"\\n\\n\")\ntransition(S0_1 + d_zero_down, T1_p_yz)\ntransition(S0_2 + d_zero_down, T1_p_yz)", "Thus, for final states with singly occupied $\\textit{xz}$ multiplet, we obtain transitions involving $t_{yz,xz}/2$, $t_{yz,yz}/2$, $t_{xz,xz}/2$ and $t_{xz,yz}/2$ when accounting for the prefactors of the states.\nFor completeness, repeating for the cases $d^4_\\uparrow d^4_0$ and $d^4_\\downarrow d^4_0$", "transition(d_zero_up + S0_1, T1_p_xz)\ntransition(d_zero_up + S0_2, T1_p_xz)\nprint(\"\\n\\n\")\ntransition(d_zero_up + S0_1, T1_p_yz)\ntransition(d_zero_up + S0_2, T1_p_yz)\nprint(\"\\n\\n\")\nprint(\"\\n\\n\")\ntransition(d_zero_down + S0_1, T1_p_xz)\ntransition(d_zero_down + S0_2, T1_p_xz)\nprint(\"\\n\\n\")\ntransition(d_zero_down + S0_1, T1_p_yz)\ntransition(d_zero_down + S0_2, T1_p_yz)", "In this case, considering the prefactors of the states involved, we obtain contributions $t_{yz,xy}/{2}$ and $t_{yz,yz}/{2}$, $t_{xz,xz}/{2}$, and $t_{xz,yz}/{2}$.\nAnd at last $d^4_0d^4_0$", "transition(S0_1 + S0_1, T1_p_xz)\ntransition(S0_1 + S0_2, T1_p_xz)\ntransition(S0_2 + S0_1, T1_p_xz)\ntransition(S0_2 + S0_2, T1_p_xz)\nprint(\"------------------------\")\ntransition(S0_1 + S0_1, T1_p_yz)\ntransition(S0_1 + S0_2, T1_p_yz)\ntransition(S0_2 + S0_1, T1_p_yz)\ntransition(S0_2 + S0_2, T1_p_yz)", "With contributions $t_{yz,xy}/{2\\sqrt{2}}$ and $t_{yz,yz}/{2\\sqrt{2}}$, $t_{xz,xz}/{2\\sqrt{2}}$, and $t_{xz,yz}/{2\\sqrt{2}}$." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/feateng/tftransform.ipynb
apache-2.0
[ "<h1> Exploring tf.transform </h1>\n\nWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.\nOnly specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.\n\nTFT 0.8.0\nTF 1.8 or higher\nApache Beam [GCP] 2.9.0 or higher", "%%bash\npip install apache-beam[gcp]==2.16.0 tensorflow_transform==0.15.0", "<b>Restart the kernel</b> after you do a pip install (click on the reload button above).", "%%bash\npip freeze | grep -e 'flow\\|beam'\n\nimport tensorflow as tf\nimport tensorflow_transform as tft\nimport shutil\nprint(tf.__version__)\n\n# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION\n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/; then\n gsutil mb -l ${REGION} gs://${BUCKET}\nfi", "Input source: BigQuery\nGet data from BigQuery but defer filtering etc. to Beam.\nNote that the dayofweek column is now strings.", "from google.cloud import bigquery\ndef create_query(phase, EVERY_N):\n \"\"\"\n phase: 1=train 2=valid\n \"\"\"\n base_query = \"\"\"\nWITH daynames AS\n (SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,\n EXTRACT(HOUR FROM pickup_datetime) AS hourofday,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count AS passengers,\n 'notneeded' AS key\nFROM\n `nyc-tlc.yellow.trips`, daynames\nWHERE\n trip_distance > 0 AND fare_amount > 0\n \"\"\"\n\n if EVERY_N == None:\n if phase < 2:\n # training\n query = \"{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING), 4)) < 2\".format(base_query)\n else:\n query = \"{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING), 4)) = {1}\".format(base_query, phase)\n else:\n query = \"{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}\".format(base_query, EVERY_N, phase)\n \n return query\n\nquery = create_query(2, 100000)\n\ndf_valid = bigquery.Client().query(query).to_dataframe()\ndisplay(df_valid.head())\ndf_valid.describe()", "Create ML dataset using tf.transform and Dataflow\nLet's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.", "%%writefile requirements.txt\ntensorflow-transform==0.8.0", "Test transform_data is type pcollection. 
test if _ = is neccesary", "import datetime\nimport tensorflow as tf\nimport apache_beam as beam\nimport tensorflow_transform as tft\nfrom tensorflow_transform.beam import impl as beam_impl\n\ndef is_valid(inputs):\n try:\n pickup_longitude = inputs['pickuplon']\n dropoff_longitude = inputs['dropofflon']\n pickup_latitude = inputs['pickuplat']\n dropoff_latitude = inputs['dropofflat']\n hourofday = inputs['hourofday']\n dayofweek = inputs['dayofweek']\n passenger_count = inputs['passengers']\n fare_amount = inputs['fare_amount']\n return (fare_amount >= 2.5 and pickup_longitude > -78 and pickup_longitude < -70 \\\n and dropoff_longitude > -78 and dropoff_longitude < -70 and pickup_latitude > 37 \\\n and pickup_latitude < 45 and dropoff_latitude > 37 and dropoff_latitude < 45 \\\n and passenger_count > 0)\n except:\n return False\n \ndef preprocess_tft(inputs):\n import datetime \n print inputs\n result = {}\n result['fare_amount'] = tf.identity(inputs['fare_amount']) \n result['dayofweek'] = tft.string_to_int(inputs['dayofweek']) # builds a vocabulary\n result['hourofday'] = tf.identity(inputs['hourofday']) # pass through\n result['pickuplon'] = (tft.scale_to_0_1(inputs['pickuplon'])) # scaling numeric values\n result['pickuplat'] = (tft.scale_to_0_1(inputs['pickuplat']))\n result['dropofflon'] = (tft.scale_to_0_1(inputs['dropofflon']))\n result['dropofflat'] = (tft.scale_to_0_1(inputs['dropofflat']))\n result['passengers'] = tf.cast(inputs['passengers'], tf.float32) # a cast\n result['key'] = tf.as_string(tf.ones_like(inputs['passengers'])) # arbitrary TF func\n # engineered features\n latdiff = inputs['pickuplat'] - inputs['dropofflat']\n londiff = inputs['pickuplon'] - inputs['dropofflon']\n result['latdiff'] = tft.scale_to_0_1(latdiff)\n result['londiff'] = tft.scale_to_0_1(londiff)\n dist = tf.sqrt(latdiff * latdiff + londiff * londiff)\n result['euclidean'] = tft.scale_to_0_1(dist)\n return result\n\ndef preprocess(in_test_mode):\n import os\n import os.path\n import tempfile\n from apache_beam.io import tfrecordio\n from tensorflow_transform.coders import example_proto_coder\n from tensorflow_transform.tf_metadata import dataset_metadata\n from tensorflow_transform.tf_metadata import dataset_schema\n from tensorflow_transform.beam import tft_beam_io\n from tensorflow_transform.beam.tft_beam_io import transform_fn_io\n\n job_name = 'preprocess-taxi-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S') \n if in_test_mode:\n import shutil\n print 'Launching local job ... hang on'\n OUTPUT_DIR = './preproc_tft'\n shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n EVERY_N = 100000\n else:\n print 'Launching Dataflow job {} ... 
hang on'.format(job_name)\n OUTPUT_DIR = 'gs://{0}/taxifare/preproc_tft/'.format(BUCKET)\n import subprocess\n subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())\n EVERY_N = 10000\n \n options = {\n 'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),\n 'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),\n 'job_name': job_name,\n 'project': PROJECT,\n 'max_num_workers': 6,\n 'teardown_policy': 'TEARDOWN_ALWAYS',\n 'no_save_main_session': True,\n 'requirements_file': 'requirements.txt'\n }\n opts = beam.pipeline.PipelineOptions(flags=[], **options)\n if in_test_mode:\n RUNNER = 'DirectRunner'\n else:\n RUNNER = 'DataflowRunner'\n\n # set up raw data metadata\n raw_data_schema = {\n colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())\n for colname in 'dayofweek,key'.split(',')\n }\n raw_data_schema.update({\n colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())\n for colname in 'fare_amount,pickuplon,pickuplat,dropofflon,dropofflat'.split(',')\n })\n raw_data_schema.update({\n colname : dataset_schema.ColumnSchema(tf.int64, [], dataset_schema.FixedColumnRepresentation())\n for colname in 'hourofday,passengers'.split(',')\n })\n raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))\n\n # run Beam \n with beam.Pipeline(RUNNER, options=opts) as p:\n with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):\n # save the raw data metadata\n raw_data_metadata | 'WriteInputMetadata' >> tft_beam_io.WriteMetadata(\n os.path.join(OUTPUT_DIR, 'metadata/rawdata_metadata'),\n pipeline=p)\n \n # read training data from bigquery and filter rows \n raw_data = (p \n | 'train_read' >> beam.io.Read(beam.io.BigQuerySource(query=create_query(1, EVERY_N), use_standard_sql=True))\n | 'train_filter' >> beam.Filter(is_valid))\n raw_dataset = (raw_data, raw_data_metadata)\n \n # analyze and transform training data\n transformed_dataset, transform_fn = (\n raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))\n transformed_data, transformed_metadata = transformed_dataset\n \n # save transformed training data to disk in efficient tfrecord format\n transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(\n os.path.join(OUTPUT_DIR, 'train'),\n file_name_suffix='.gz',\n coder=example_proto_coder.ExampleProtoCoder(\n transformed_metadata.schema))\n \n # read eval data from bigquery and filter rows \n raw_test_data = (p \n | 'eval_read' >> beam.io.Read(beam.io.BigQuerySource(query=create_query(2, EVERY_N), use_standard_sql=True))\n | 'eval_filter' >> beam.Filter(is_valid))\n raw_test_dataset = (raw_test_data, raw_data_metadata)\n \n # transform eval data\n transformed_test_dataset = (\n (raw_test_dataset, transform_fn) | beam_impl.TransformDataset())\n transformed_test_data, _ = transformed_test_dataset\n \n # save transformed training data to disk in efficient tfrecord format\n transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(\n os.path.join(OUTPUT_DIR, 'eval'),\n file_name_suffix='.gz',\n coder=example_proto_coder.ExampleProtoCoder(\n transformed_metadata.schema))\n \n # save transformation function to disk for use at serving time\n transform_fn | 'WriteTransformFn' >> transform_fn_io.WriteTransformFn(\n os.path.join(OUTPUT_DIR, 'metadata'))\n\npreprocess(in_test_mode=False) # change to True to run locally\n\n%%bash\n# ls preproc_tft\ngsutil ls gs://${BUCKET}/taxifare/preproc_tft/", "<h2> Train off preprocessed data </h2>", 
"%%bash\nrm -rf taxifare_tft.tar.gz taxi_trained\nexport PYTHONPATH=${PYTHONPATH}:$PWD/taxifare_tft\npython -m trainer.task \\\n --train_data_paths=\"gs://${BUCKET}/taxifare/preproc_tft/train*\" \\\n --eval_data_paths=\"gs://${BUCKET}/taxifare/preproc_tft/eval*\" \\\n --output_dir=./taxi_trained \\\n --train_steps=10 --job-dir=/tmp \\\n --metadata_path=gs://${BUCKET}/taxifare/preproc_tft/metadata\n\n!ls $PWD/taxi_trained/export/exporter\n\n%%writefile /tmp/test.json\n{\"dayofweek\":\"Thu\",\"hourofday\":17,\"pickuplon\": -73.885262,\"pickuplat\": 40.773008,\"dropofflon\": -73.987232,\"dropofflat\": 40.732403,\"passengers\": 2}\n\n%%bash\nmodel_dir=$(ls $PWD/taxi_trained/export/exporter/)\ngcloud ai-platform local predict \\\n --model-dir=./taxi_trained/export/exporter/${model_dir} \\\n --json-instances=/tmp/test.json", "Copyright 2016-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mri/cmip6/models/mri-esm2-0/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: MRI-ESM2-0\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.24/_downloads/9e70404d3a55a6b6d1c1877784347c14/mixed_source_space_inverse.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute MNE inverse solution on evoked data with a mixed source space\nCreate a mixed source space and compute an MNE inverse solution on an evoked\ndataset.", "# Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\nimport matplotlib.pyplot as plt\n\nfrom nilearn import plotting\n\nimport mne\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\n# Set dir\ndata_path = mne.datasets.sample.data_path()\nsubject = 'sample'\ndata_dir = op.join(data_path, 'MEG', subject)\nsubjects_dir = op.join(data_path, 'subjects')\nbem_dir = op.join(subjects_dir, subject, 'bem')\n\n# Set file names\nfname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)\nfname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')\n\nfname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)\nfname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)\n\nfname_evoked = data_dir + '/sample_audvis-ave.fif'\nfname_trans = data_dir + '/sample_audvis_raw-trans.fif'\nfname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'\nfname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'", "Set up our source space\nList substructures we are interested in. We select only the\nsub structures we want to include in the source space:", "labels_vol = ['Left-Amygdala',\n 'Left-Thalamus-Proper',\n 'Left-Cerebellum-Cortex',\n 'Brain-Stem',\n 'Right-Amygdala',\n 'Right-Thalamus-Proper',\n 'Right-Cerebellum-Cortex']", "Get a surface-based source space, here with few source points for speed\nin this demonstration, in general you should use oct6 spacing!", "src = mne.setup_source_space(subject, spacing='oct5',\n add_dist=False, subjects_dir=subjects_dir)", "Now we create a mixed src space by adding the volume regions specified in the\nlist labels_vol. 
First, read the aseg file and the source space bounds\nusing the inner skull surface (here using 10mm spacing to save time,\nwe recommend something smaller like 5.0 in actual analyses):", "vol_src = mne.setup_volume_source_space(\n subject, mri=fname_aseg, pos=10.0, bem=fname_model,\n volume_label=labels_vol, subjects_dir=subjects_dir,\n add_interpolator=False, # just for speed, usually this should be True\n verbose=True)\n\n# Generate the mixed source space\nsrc += vol_src\nprint(f\"The source space contains {len(src)} spaces and \"\n f\"{sum(s['nuse'] for s in src)} vertices\")", "View the source space", "src.plot(subjects_dir=subjects_dir)", "We could write the mixed source space with::\n\n\n\nwrite_source_spaces(fname_mixed_src, src, overwrite=True)\n\n\n\nWe can also export source positions to NIfTI file and visualize it again:", "nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)\nsrc.export_volume(nii_fname, mri_resolution=True, overwrite=True)\nplotting.plot_img(nii_fname, cmap='nipy_spectral')", "Compute the fwd matrix", "fwd = mne.make_forward_solution(\n fname_evoked, fname_trans, src, fname_bem,\n mindist=5.0, # ignore sources<=5mm from innerskull\n meg=True, eeg=False, n_jobs=1)\ndel src # save memory\n\nleadfield = fwd['sol']['data']\nprint(\"Leadfield size : %d sensors x %d dipoles\" % leadfield.shape)\nprint(f\"The fwd source space contains {len(fwd['src'])} spaces and \"\n f\"{sum(s['nuse'] for s in fwd['src'])} vertices\")\n\n# Load data\ncondition = 'Left Auditory'\nevoked = mne.read_evokeds(fname_evoked, condition=condition,\n baseline=(None, 0))\nnoise_cov = mne.read_cov(fname_cov)", "Compute inverse solution", "snr = 3.0 # use smaller SNR for raw data\ninv_method = 'dSPM' # sLORETA, MNE, dSPM\nparc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'\nloose = dict(surface=0.2, volume=1.)\n\nlambda2 = 1.0 / snr ** 2\n\ninverse_operator = make_inverse_operator(\n evoked.info, fwd, noise_cov, depth=None, loose=loose, verbose=True)\ndel fwd\n\nstc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,\n pick_ori=None)\nsrc = inverse_operator['src']", "Plot the mixed source estimate", "initial_time = 0.1\nstc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,\n pick_ori='vector')\nbrain = stc_vec.plot(\n hemi='both', src=inverse_operator['src'], views='coronal',\n initial_time=initial_time, subjects_dir=subjects_dir,\n brain_kwargs=dict(silhouette=True), smoothing_steps=7)", "Plot the surface", "brain = stc.surface().plot(initial_time=initial_time,\n subjects_dir=subjects_dir, smoothing_steps=7)", "Plot the volume", "fig = stc.volume().plot(initial_time=initial_time, src=src,\n subjects_dir=subjects_dir)", "Process labels\nAverage the source estimates within each label of the cortical parcellation\nand each sub structure contained in the src space", "# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi\nlabels_parc = mne.read_labels_from_annot(\n subject, parc=parc, subjects_dir=subjects_dir)\n\nlabel_ts = mne.extract_label_time_course(\n [stc], labels_parc, src, mode='mean', allow_empty=True)\n\n# plot the times series of 2 labels\nfig, axes = plt.subplots(1)\naxes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')\naxes.plot(1e3 * stc.times, label_ts[0][-1, :].T, 'r', label='Brain-stem')\naxes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')\naxes.legend()\nmne.viz.tight_layout()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
parrt/lolviz
examples.ipynb
bsd-3-clause
[ "Examples for lolviz\nInstall\nIf on mac, I had to do this:\nbash\n$ brew install graphviz # had to upgrade graphviz on el capitan\nThen\nbash\n$ pip install lolviz\nSample visualizations", "from lolviz import *\n\nobjviz([u'2016-08-12',107.779999,108.440002,107.779999,108.18])\n\ntable = [\n ['Date','Open','High','Low','Close','Volume'],\n ['2016-08-12',107.779999,108.440002,107.779999,108.18,18612300,108.18],\n]\nobjviz(table)\n\nd = dict([(c,chr(c)) for c in range(ord('a'),ord('f'))])\nobjviz(d)\n\ntuplelist = d.items()\nlistviz(tuplelist)\n\ntuplelist = d.items()\nlistviz(tuplelist, showassoc=False)\n\nobjviz(tuplelist)\n\nT = ['11','12','13','14',['a','b','c'],'16']\nlolviz(T)\n\nobjviz({'hi','mom'})\n\nobjviz({'superuser':True, 'mgr':False})\n\nobjviz(set(['elem%d'%i for i in range(20)])) # long set shown vertically\n\n# test linked list node\nclass Node:\n def __init__(self, value, next=None):\n self.value = value\n self.next = next\n\nhead = Node('tombu')\nhead = Node('parrt', head)\nhead = Node(\"xue\", head)\nobjviz(head)\n\na = {Node('parrt'),Node('mary')}\nobjviz(a)\n\nhead2 = ('parrt',('mary',None))\nobjviz(head2)\n\ndata = [[]] * 5 # INCORRECT list of list init\nlolviz(data)\n\ndata[0].append( ('a',4) )\ndata[2].append( ('b',9) ) # whoops! should be different list object\nlolviz(data)\n\ntable = [ [] for i in range(5) ] # correct way to init\nlolviz(table)\n\nkey = 'a'\nvalue = 99\ndef hashcode(o): return ord(o) # assume keys are single-element strings\nprint(\"hashcode =\", hashcode(key))\nbucket_index = hashcode(key) % len(table)\nprint(\"bucket_index =\", bucket_index)\nbucket = table[bucket_index]\nbucket.append( (key,value) ) # add association to the bucket\nlolviz(table)\n\nkey = 'f'\nvalue = 99\nprint(\"hashcode =\", hashcode(key))\nbucket_index = hashcode(key) % len(table)\nprint(\"bucket_index =\", bucket_index)\nbucket = table[bucket_index]\nbucket.append( (key,value) ) # add association to the bucket\nlolviz(table)", "If we don't indicate we want a simple 2-level list of list with lolviz(), we get a generic object graph:", "objviz(table)\n\ncourses = [\n ['msan501', 51],\n ['msan502', 32],\n ['msan692', 101]\n]\nmycourses = courses\nprint(id(mycourses), id(courses))\nobjviz(courses)", "You can also display strings as arrays in isolation (but not in other data structures as I figured it's not that useful in most cases):", "strviz('New York')\n\nclass Tree:\n def __init__(self, value, left=None, right=None):\n self.value = value\n self.left = left\n self.right = right\n \nroot = Tree('parrt',\n Tree('mary',\n Tree('jim',\n Tree('srinivasan'),\n Tree('april'))),\n Tree('xue',None,Tree('mike')))\n\ntreeviz(root)\n\nfrom IPython.display import display\n\nN = 100\n\ndef f(x):\n a = ['hi','mom']\n thestack = callsviz(varnames=['table','x','head','courses','N','a'])\n display(thestack)\n \nf(99)", "If you'd like to save an image from jupyter, use render():", "def f(x):\n thestack = callsviz(varnames=['table','x','tree','head','courses'])\n print(thestack.source[:100]) # show first 100 char of graphviz syntax\n thestack.render(\"/tmp/t\") # save as PDF\n \nf(99)", "Numpy viz", "import numpy as np\n\nA = np.array([[1,2,8,9],[3,4,22,1]])\nobjviz(A)\n\nB = np.ones((100,100))\nfor i in range(100):\n for j in range(100):\n B[i,j] = i+j\nB\n\nmatrixviz(A)\n\nmatrixviz(B)\n\nA = np.array(np.arange(-5.0,5.0,2.1))\n\nB = A.reshape(-1,1)\n\nmatrices = [A,B]\n\ndef f():\n w,h = 20,20\n C = np.ones((w,h), dtype=int)\n for i in range(w):\n for j in range(h):\n C[i,j] = i+j\n 
display(callsviz(varnames=['matrices','A','C']))\n\nf()", "Pandas dataframes, series", "import pandas as pd\ndf = pd.DataFrame()\ndf[\"sqfeet\"] = [750, 800, 850, 900,950]\ndf[\"rent\"] = [1160, 1200, 1280, 1450,2000]\nobjviz(df)\n\nobjviz(df.rent)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gabicfa/RedesSociais
encontro02/5-kruskal.ipynb
gpl-3.0
[ "Encontro 02, Parte 5: Algoritmo de Kruskal\nEste guia foi escrito para ajudar você a atingir os seguintes objetivos:\n\nimplementar o algoritmo de Kruskal;\npraticar o uso da biblioteca da disciplina.\n\nPrimeiramente, vamos importar a biblioteca:", "import sys\nsys.path.append('..')\n\nimport socnet as sn", "A seguir, vamos configurar as propriedades visuais:", "sn.graph_width = 320\nsn.graph_height = 180", "Por fim, vamos carregar e visualizar um grafo:", "g = sn.load_graph('5-kruskal.gml', has_pos=True)\n\nfor e in g.edges_iter():\n g.edge[e[0]][e[1]]['label'] = g.edge[e[0]][e[1]]['c']\n\nsn.show_graph(g, elab=True)", "Árvores geradoras mínimas\nDizemos que:\n* um passeio $\\langle n_0, n_1, \\ldots, n_{k-1} \\rangle$ é um circuito se $\\langle n_0, n_1, \\ldots, n_{k-2} \\rangle$ é um caminho e $n_0 = n_{k-1}$;\n* um conjunto de arestas $F$ é uma floresta se não existem circuitos no grafo $(N, F)$;\n* um grafo é conexo se para quaisquer nós $s$ e $t$ existe um caminho de $s$ a $t$;\n* uma floresta $T$ é uma árvore geradora se o grafo $(N, T)$ é conexo.\nO custo de uma árvore geradora $T$ é\n$\\sum_{{n, m} \\in T} c(n, m)$.\nUma árvore geradora é mínima se não existe outra árvore geradora de custo menor. Note que podem existir múltiplas árvores geradoras mínimas.\nAlgoritmo de Kruskal\nPodemos eficientemente obter uma árvore geradora mínima usando o algoritmo de Kruskal. A ideia desse algoritmo é simples: inicializamos uma floresta $F$ como o conjunto vazio e verificamos todas as arestas em ordem não-decrescente de custo. Para cada aresta, adicionamos ela a $F$ se essa adição não formar circuito no grafo $(N, F)$.\nVamos especificar uma classe que representa a floresta. Não é necessário entender todos os detalhes dela, apenas que o atributo f é o conjunto das arestas e os dois últimos métodos são auto-explicativos.", "class Forest(object):\n def __init__(self, g):\n self.g = g\n self.f = set()\n for n in g.nodes():\n self._make_set(n)\n\n def _make_set(self, x):\n g.node[x]['p'] = x\n g.node[x]['rank'] = 0\n\n def _union(self, x, y):\n self._link(self._find_set(x), self._find_set(y))\n\n def _link(self, x, y):\n if g.node[x]['rank'] > g.node[y]['rank']:\n g.node[y]['p'] = x\n else:\n g.node[x]['p'] = y\n if g.node[x]['rank'] == g.node[y]['rank']:\n g.node[y]['rank'] = g.node[y]['rank'] + 1\n\n def _find_set(self, x):\n if x != g.node[x]['p']:\n g.node[x]['p'] = self._find_set(g.node[x]['p'])\n return g.node[x]['p']\n\n def adding_does_not_form_circuit(self, n, m):\n return self._find_set(n) != self._find_set(m)\n\n def add(self, n, m):\n self.f.add((n, m))\n self._union(n, m)", "Exercício\nMonte uma visualização do algoritmo de Kruskal. Use a classe Forest.", "from math import inf, isinf\n\ndef snapshot(g, frames):\n frame = sn.generate_frame(g, nlab=False, elab=True)\n frames.append(frame)\n\nred = (255, 0, 0) \nblue = (0, 0, 255) \ngreen = (0, 255, 0)\nframes = []\n\nf = Forest (g)\nedges = []\n\ne = g.edges_iter()\n\nfor i in e:\n edges.append((i[0],i[1],g.get_edge_data(i[0],i[1])['c']))\n\nedges.sort(reverse = True, key=lambda x: (-x[2],x[0]))\n\nsn.reset_node_colors(g) \nsn.reset_edge_colors(g)\nsnapshot(g, frames)\n\nfor n,m,c in edges:\n g.edge[m][n]['color'] = green\n snapshot(g, frames)\n \n if(f.adding_does_not_form_circuit(n,m)):\n g.edge[m][n]['color'] = blue\n snapshot(g, frames) \n f.add(n,m)\n else:\n g.edge[m][n]['color'] = sn.edge_color\n snapshot(g, frames)\n\n\nsn.show_animation(frames)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/launching_into_ml/solutions/first_model.ipynb
apache-2.0
[ "First BigQuery ML models for Taxifare Prediction\nIn this notebook, we will use BigQuery ML to build our first models for taxifare prediction.BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets.\nLearning Objectives\n\nChoose the correct BigQuery ML model type and specify options\nEvaluate the performance of your ML model\nImprove model performance through data quality cleanup\nCreate a Deep Neural Network (DNN) using SQL\n\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. \nWe'll start by creating a dataset to hold all the models we create in BigQuery\nImport libraries", "import os", "Set environment variables", "%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\nPROJECT = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"BUCKET\"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID\nos.environ[\"REGION\"] = REGION\n\nif PROJECT == \"your-gcp-project-here\":\n print(\"Don't forget to update your PROJECT name! Currently:\", PROJECT)", "Create a BigQuery Dataset and Google Cloud Storage Bucket\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.", "%%bash\n\n## Create a BigQuery dataset for serverlessml if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w serverlessml)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\nelse\n echo \"Creating BigQuery dataset titled: serverlessml\"\n\n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:serverlessml\n echo \"\\nHere are your current datasets:\"\n bq ls\nfi \n\n## Create GCS bucket if it doesn't exist already...\nexists=$(gsutil ls -d | grep -w gs://${PROJECT}/)\n\nif [ -n \"$exists\" ]; then\n echo -e \"Bucket exists, let's not recreate it.\"\nelse\n echo \"Creating a new GCS bucket.\"\n gsutil mb -l ${REGION} gs://${PROJECT}\n echo \"\\nHere are your current buckets:\"\n gsutil ls\nfi", "Model 1: Raw data\nLet's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.\nThe model will take a minute or so to train. When it comes to ML, this is blazing fast.", "%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model1_rawdata\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1", "Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. 
We can look at eval statistics on that held-out data:", "%%bigquery\nSELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "Let's report just the error we care about, the Root Mean Squared Error (RMSE)", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.\nNote that the error is going to depend on the dataset that we evaluate it on.\nWe can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent).", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))", "Model 2: Apply data cleanup\nRecall that we did some data cleanup in the previous lab. Let's do those before training.\nThis is a dataset that we will need quite frequently in this notebook, so let's extract it first.", "%%bigquery\nCREATE OR REPLACE TABLE\n serverlessml.cleaned_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM serverlessml.cleaned_training_data\nLIMIT 0\n\n%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model2_cleanup\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model2_cleanup)", "Model 3: More sophisticated models\nWhat if we try a more sophisticated model? 
Let's try Deep Neural Networks (DNNs) in BigQuery:\nDNN\nTo create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.", "%%bigquery\n-- This model type is in alpha, so it may not work for you yet.\n-- This training takes on the order of 15 minutes.\nCREATE OR REPLACE MODEL\n serverlessml.model3b_dnn\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='dnn_regressor', hidden_units=[32, 8]) AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn)", "Nice!\nEvaluate DNN on benchmark dataset\nLet's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You can not compare two models unless you have run them on the same withheld data.", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse \nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers,\n 'unused' AS key\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))", "Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work.\nIn this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development.\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MadsJensen/intro_to_scientific_computing
src/Logfile_generator.ipynb
bsd-3-clause
[ "Logfile generator\nGenerate a dataset such as in Luck (2009): Impaired response selection in schizophrenia... (see external-folder)\n\noddball visual task (not counterbalanced)\npress 1 when letter\npress 2 when digit\n20 patients, 20 controls\nseparate CSV file linking code (0001_ABC) with category\n1280 responses per subject (!)\n80/20 frequent/rare-split\nbehavioural results crafted to reproduce main features of Table 3\naccuracy\nRT distribution medians and heavy tails for patients\nlog files named 0001_ABC_20170101.log\nsome comment lines (#)\n3 columns of data\nevent time in 100s of microseconds\nan arbitrary data column (ignored)\nactual event: STIM={letter|digit} or RESP={1|2}\n\n\n\nPossible additional complications (intentional errors)\n\nmissing responses (too late)\nlog file name corrupt\n\nStudent task\nWrite a program to parse the 40 log files to extract median RT and accuracy values, and write out a single CSV file like this\n|Subjid|Group|Cond|Median|Accuracy|\n|:---:|:---:|---|---|---|\n|{str}|Patient/Control|Freq/Rare|{float}|{float}|\n|...|...|...|...|...|\nAlso write out summary stats for median and accuracy values, separately for each group and condition. Compare these to the results in the paper (Table 3).\nWhat's needed for a simple approach?\nThese could/should be introduced in exercises during previous days of the course!\n\nbasic data types\nint, float, string, boolean\n\n\nstring manipulation\nsubstring match (logfile from subjid)\nos.path.join\nsplit string (get 0001_ABC from file name)\nget STIM value: is it a letter or a digit?\n'a' in string.ascii_letters (generally in Python: element in list returns Boolean)\nget RESP value: does it match?\n\n\nfor loop\niterate over various lists\n\n\nconditionals\ne.g., if stim_val == 'letter' and resp_val == 1: answer = True; else answer = False, etc.\n\n\nread/write textual files (in Python)\ndon't use with open() as fp:-idiom, rather simple procedural approach (like one would in Matlab)\nmatch starting hash (#) for comments\nnewline-character (\\n)!!\n\n\nfunctions\nprevious exercise: write find_file_matching_wildcard (see this notebook), then use it here!\nglob folder contents into list object\n\n\nparse single log file & calculate summary stats\n\n\n\nPossible additional challenges\n\nplot the RT distributions (probably too much/no time)\nfit a gamma distribution and wonder about heavyness of tails\n\n\nmodify the code to only include correct responses to RT calculations", "import numpy as np\nfrom scipy.stats import gamma\nimport matplotlib.pyplot as plt\nimport random\nimport string\nimport datetime\nimport csv\nimport os\nimport glob\n\ndef random_date(start, end):\n \"\"\"Generate a random datetime between `start` and `end`\"\"\"\n return start + datetime.timedelta(\n # Get a random amount of seconds between `start` and `end`\n seconds=random.randint(0, int((end - start).total_seconds())),\n )\n\n# When we want to 'freeze' a set of log files, just copy them to e.g. 
'logs'\nlogs_autogen = 'logs_autogen'", "Behavioural parameters", "gam_pars = {\n 'Control': dict(Freq=(2.8, 0.0, 1), Rare=(2.8, 0.75, 1)),\n 'Patient': dict(Freq=(3.0, 0.0, 1.2), Rare=(3.0, 1., 1.2))}\n\nsubs_per_group = 20\nn_trials = 1280\nprobs = dict(Rare=0.2, Freq=0.8)\naccuracy = dict(Control=dict(Freq=0.96, Rare=0.886),\n Patient=dict(Freq=0.945, Rare=0.847))\n\nlogfile_date_range = (datetime.date(2016, 9, 1),\n datetime.date(2017, 8, 31))", "Plot RT distributions for sanity checking", "fig, axs = plt.subplots(2, 1)\n\n# These are chosen empirically to generate sane RTs\nx_shift = 220\nx_mult = 100\n\ncols = dict(Patient='g', Control='r')\nlins = dict(Freq='-', Rare='--')\n# For plotting\nx = np.linspace(gamma.ppf(0.01, *gam_pars['Control']['Freq']),\n gamma.ppf(0.99, *gam_pars['Patient']['Rare']), 100)\n\nRTs = {}\nax = axs[0]\nfor sub in gam_pars.keys():\n for cond in ['Freq', 'Rare']:\n lab = sub + '/' + cond\n ax.plot(x_shift + x_mult * x, gamma.pdf(x, *gam_pars[sub][cond]),\n cols[sub], ls=lins[cond], lw=2, alpha=0.6, label=lab)\n RTs.setdefault(sub, {}).update(\n {cond: gamma.rvs(*gam_pars[sub][cond],\n size=int(probs[cond] * n_trials))\n * x_mult + x_shift})\n \nax.legend(loc='best', frameon=False)\n\nax = axs[1]\nfor sub in gam_pars.keys():\n for cond in ['Freq', 'Rare']:\n lab = sub + '/' + cond\n ax.hist(RTs[sub][cond], bins=20, normed=True,\n histtype='stepfilled', alpha=0.2, label=lab)\n print('{:s}\\tmedian = {:.1f} [{:.1f}, {:.1f}]'\n .format(lab, np.median(RTs[sub][cond]),\n np.min(RTs[sub][cond]), np.max(RTs[sub][cond])))\n\nax.legend(loc='best', frameon=False)\nplt.show()", "Create logfile data", "# calculate time in 100 us steps\n# 1-3 sec start delay\nstart_time = np.random.randint(1e4, 3e4)\n# Modify ISI a little from paper: accomodate slightly longer tails\n# of the simulated distributions (up to about 1500 ms)\nISI_ran = (1.5e4, 1.9e4)\n\nfreq_stims = string.ascii_lowercase\nrare_stims = string.digits", "Create subject IDs", "# ctrl_NUMs = list(np.random.randint(10, 60, size=2 * subs_per_group))\nctrl_NUMs = list(random.sample(range(10, 60), 2 * subs_per_group))\npat_NUMs = sorted(random.sample(ctrl_NUMs, subs_per_group))\nctrl_NUMs = sorted([c for c in ctrl_NUMs if not c in pat_NUMs])\n\nIDs = dict(Control=['{:04d}_{:s}'.format(n, ''.join(random.choices(\n string.ascii_uppercase, k=3))) for n in ctrl_NUMs],\n Patient=['{:04d}_{:s}'.format(n, ''.join(random.choices(\n string.ascii_uppercase, k=3))) for n in pat_NUMs])", "Write subject ID codes to a CSV file", "with open(os.path.join(logs_autogen, 'subj_codes.csv'), 'wt') as fp:\n csvw = csv.writer(fp, delimiter=';')\n for stype in IDs.keys():\n for sid in IDs[stype]:\n csvw.writerow([sid, stype])", "Function for generating individualised RTs", "def indiv_RT(sub_type, cond):\n # globals: gam_pars, probs, n_trials, x_mult, x_shift\n return(gamma.rvs(*gam_pars[sub_type][cond],\n size=int(probs[cond] * n_trials))\n * x_mult + x_shift)", "Write logfiles", "# Write to empty logs dir\nif not os.path.exists(logs_autogen):\n os.makedirs(logs_autogen)\nfor f in glob.glob(os.path.join(logs_autogen, '*.log')):\n os.remove(f)\n\nfor stype in ['Control', 'Patient']:\n for sid in IDs[stype]:\n log_date = random_date(*logfile_date_range)\n log_fname = '{:s}_{:s}.log'.format(sid, log_date.isoformat())\n \n with open(os.path.join(logs_autogen, log_fname), 'wt') as log_fp:\n log_fp.write('# Original filename: {:s}\\n'.format(log_fname))\n log_fp.write('# Time unit: 100 us\\n')\n log_fp.write('# RARECAT=digit\\n')\n 
log_fp.write('#\\n')\n log_fp.write('# Time\\tHHGG\\tEvent\\n')\n\n reacts = np.r_[indiv_RT(stype, 'Freq'), indiv_RT(stype, 'Rare')]\n # no need to shuffle ITIs...\n itis = np.random.randint(*ISI_ran, size=len(reacts))\n\n n_freq = len(RTs[stype]['Freq'])\n n_rare = len(RTs[stype]['Rare'])\n n_resps = n_freq + n_rare\n\n resps = np.random.choice([0, 1], p=[1 - accuracy[stype]['Rare'],\n accuracy[stype]['Rare']],\n size=n_resps)\n\n # this only works in python 3.6\n freq_s = random.choices(freq_stims, k=n_freq)\n # for older python:\n # random.choice(string.ascii_uppercase) for _ in range(N)\n rare_s = random.choices(rare_stims, k=n_rare)\n\n stims = np.r_[freq_s, rare_s]\n\n resps = np.r_[np.random.choice([0, 1], p=[1 - accuracy[stype]['Freq'],\n accuracy[stype]['Freq']],\n size=n_freq),\n np.random.choice([0, 1], p=[1 - accuracy[stype]['Rare'],\n accuracy[stype]['Rare']],\n size=n_rare)]\n corr_answs = np.r_[np.ones(n_freq, dtype=np.int),\n 2*np.ones(n_rare, dtype=np.int)]\n\n # This shuffles the lists together...\n tmp = list(zip(reacts, stims, resps, corr_answs))\n np.random.shuffle(tmp)\n reacts, stims, resps, corr_answs = zip(*tmp)\n\n assert len(resps) == len(stims)\n\n prev_present, prev_response = start_time, -1\n for rt, iti, stim, resp, corr_ans in \\\n zip(reacts, itis, stims, resps, corr_answs):\n\n\n # This is needed to ensure that the present response time\n # exceeds the previous response time (plus a little buffer)\n # Slightly skews (truncates) the distribution, but what the hell\n pres_time = max([prev_present + iti,\n prev_response + 100])\n resp_time = pres_time + int(10. * rt)\n\n prev_present = pres_time\n prev_response = resp_time\n log_fp.write('{:d}\\t42\\tSTIM={:s}\\n'.format(pres_time, stim))\n if resp == 0 and corr_ans == 1:\n answ = 2\n elif resp == 0 and corr_ans == 2:\n answ = 1\n else:\n answ = corr_ans\n log_fp.write('{:d}\\t42\\tRESP={:d}\\n'.format(resp_time, answ, resp))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lknelson/text-analysis-2017
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
bsd-3-clause
[ "Combining Pandas and Text Analysis\nWe have learned how to work with numbers in the Python package pandas, and how to work with text in Python using built-in Python functions and using the NLTK package. To operationalize concepts and analyze the numbers, we can combine these two packages together.\nLearning Goals\n\nBegin to think about how we can quantify text to use the output in further analyses, or to visualize the output\nLearn how to add text analysis techniques to a pandas dataframe\nLearn a few more visualization techniques\nLearn a number of new pandas functions:\nthe pandas apply function\nthe pandas tolist function\nthe pandas lambda function\n\n\nLearn a new built-in function, the .join() function\n\nOutline\n\nText as a column in a pandas df\nDescriptive statistics and visualization\nThe str attribute\nThe apply function\nThe lambda function\nExtracting text\nExercise: average TTR\n\nKey Terms\n\ncategorical variable\nis a variable that can take on one of a limited, and usually fixed, number of possible values\n\n\nlambda function\nsyntax that allows us to write and apply our own function in a pandas dataframe\n\n\nx-axis\nthe horizontal axis of a graph\n\n\ny-axis\nthe vertical axis of a graph\n\n\nerror bars\na graphical representation of the variability of data and are used on graphs to indicate the error, or uncertainty in a reported measurement. They give a general idea of how precise a measurement is, or conversely, how far from the reported value the true (error free) value might be.\n\n\nstandard deviation\na measure that is used to quantify the amount of variation or dispersion of a set of data values\n\n\njoin function\n''.join(), joins the elements in a list into one string\n\n\n\n<a id='df'></a>\n0. Create a DF from a .csv file\nWe have seen texts in the form of raw text. Today we'll deal with text that is in the form of a .csv file. We can read it into Python in the same way we read in the numerical dataset from the National Survey of Family Growth. \nData preparation\nI created a .csv file from a collection of 19th century children's literature. The data were compiled by students in this course.\nThe raw data are found here.\nThat page has additional corpora, so search through it to see if anything sparks your interest.\nI did some minimal cleaning to get the children's literature data in .csv format for our use. The delimiter for this file is a tab, so technically it's a tab separated file, or tsv. We can specify that delimiter with the option \"sep = '\\t'\"", "import pandas\nimport nltk\nimport string\nimport matplotlib.pyplot as plt #note this last import statement. Why might we use \"import as\"?\n\n#read in our data\ndf = pandas.read_csv(\"../Data/childrens_lit.csv.bz2\", sep = '\\t', encoding = 'utf-8', compression = 'bz2', index_col=0)\ndf", "Notice this is a typical dataframe, possibly with more columns as strings than numbers. The text in contained in the column 'text'.\nNotice also there are missing texts. For now, we will drop these texts so we can move forward with text analysis. In your own work, you should justify dropping missing texts when possible.", "df = df.dropna(subset=[\"text\"])\ndf\n\n##Ex: Print the first text in the dataframe (starts with \"A DOG WITH A BAD NAME\"). \n###Hint: Remind yourself about the syntax for slicing a dataframe", "<a id='stats'></a>\n1. Descriptive Statistics and Visualization\nThe first thing we probably want to do is describe our data, to make sure everything is in order. 
We can use the describe function for the numerical data, and the value_counts function for categorical data.", "print(df.describe()) #get descriptive statistics for all numerical columns\nprint()\nprint(df['author gender'].value_counts()) #frequency counts for categorical data\nprint()\nprint(df['year'].value_counts()) #treat year as a categorical variable\nprint()\nprint(df['year'].mode()) #find the year in which the most novels were published", "We can do a few things by just using the metadata already present.\nFor example, we can use the groupby and the count() function to graph the number of books by male and female authors. This is similar to the value_counts() function, but allows us to plot the output.", "#creat a pandas object that is a groupby dataframe, grouped on author gender\ngrouped_gender = df.groupby(\"author gender\")\nprint(grouped_gender['text'].count())", "Let's graph the number of texts by gender of author.", "grouped_gender['text'].count().plot(kind = 'bar')\nplt.show()\n\n#Ex: Create a variable called 'grouped_year' that groups the dataframe by year.\n## Print the number of texts per year.", "We can graph this via a line graph.", "grouped_year['text'].count().plot(kind = 'line')\nplt.show()", "Oops! That doesn't look right! Python automatically converted the year to scientific notation. We can set that option to False.", "plt.ticklabel_format(useOffset=False) #forces Python to not convert numbers\ngrouped_year['text'].count().plot(kind = 'line')\nplt.show()", "We haven't done any text analysis yet. Let's apply some of our text analysis techniques to the text, add columns with the output, and analyze/visualize the output.\n<a id='str'></a>\n2. The str attribute\nLuckily for us, pandas has an attribute called 'str' which allows us to access Python's built-in string functions.\nFor example, we can make the text lowercase, and assign this to a new column.\nNote: You may get a \"SettingWithCopyWarning\" highlighted with a pink background. This is not an error, it is Python telling you that while the code is valid, you might be doing something stupid. In this case, the warning is a false positive. In most cases you should read the warning carefully and try to fix your code.", "df['text_lc'] = df['text'].str.lower()\ndf\n\n##Ex: create a new column, 'text_split', that contains the lower case text split into list. \n####HINT: split on white space, don't tokenize it.", "<a id='apply'></a>\n3. The apply function\nWe can also apply a function to each row. To get a word count of a text file we would take the length of the split string like this:\nlen(text_split)\nIf we want to do this on every row in our dataframe, we can use the apply() function.", "df['word_count'] = df['text_split'].apply(len)\ndf", "What is the average length of each novel in our data? With pandas, this is easy!", "df['word_count'].mean()", "(These are long novels!) We can also group and slice our dataframe to do further analyses.", "###Ex: print the average novel length for male authors and female authors.\n###### What conclusions might you draw from this?\n\n###Ex: graph the average novel length by gender\n\n##EX: Add error bars to your graph", "Gold star exercise\nThis one is a bit tricky. If you're not quite there, no worries! We'll work through it together.\nEx: plot the average novel length by year, with error bars. Your x-axis should be year, and your y-axis number of words.\nHINT: Copy and paste what we did above with gender, and then change the necessary variables and options. 
By my count, you should only have to change one variable, and one graph option.", "#Write your exercise solution here", "<a id='lambda'></a>\n4. Applying NLTK Functions and the lambda function\nIf we want to apply nltk functions we can do so using .apply(). If we want to use list comprehension on the split text, we have to introduce one more Python trick: the lambda function. This simply allows us to write our own function to apply to each row in our dataframe. For example, we may want tokenize our text instead of splitting on the white space. To do this we can use the lambda function.\nNote: If you want to explore lambda functions more, see the notebook titled A-Bonus_LambdaFunctions.ipynb in this folder.\nBecause of the length of the novels tokenizing the text takes a bit of time. We'll instead tokenize the title only.", "df['title_tokens'] = df['title'].apply(nltk.word_tokenize)\ndf['title_tokens']", "With this tokenized list we might want to, for example, remove punctuation. Again, we can use the lambda function, with list comprehension.", "df['title_tokens_clean'] = df['title_tokens'].apply(lambda x: [word for word in x if word not in list(string.punctuation)])\ndf['title_tokens_clean']", "<a id='extract'></a>\n5. Extracting Text from a Dataframe\nWe may want to extract the text from our dataframe, to do further analyses on the text only. We can do this using the tolist() function and the join() function.", "novels = df['text'].tolist()\nprint(novels[:1])\n\n#turn all of the novels into one long string using the join function\ncat_novels = ''.join(n for n in novels)\nprint(cat_novels[:100])", "<a id='exercise'></a>\n6. Exercise: Average TTR (if time, otherwise do on your own)\nMotivating Question: Is there a difference in the average TTR for male and female authors?\nTo answer this, go step by step.\nFor computational reasons we will use the list we created by splitting on white spaces rather than tokenized text. So this is approximate only.\nWe first need to count the token type in each novel. We can do this in two steps. First, create a column that contains a list of the unique token types, by applying the set function.", "##Ex: create a new column, 'text_type', which contains a list of unique token types\n\n##Ex: create a new column, 'type_count', which is a count of the token types in each novel.\n##Ex: create a new column, 'ttr', which contains the type-token ratio for each novel.\n\n##Ex: Print the average ttr by author gender\n##Ex: Graph this result with error bars" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
openclimatedata/pymagicc
notebooks/Diagnose-TCR-ECS-TCRE.ipynb
agpl-3.0
[ "Diagnosing MAGICC's TCR, ECS and TCRE", "# NBVAL_IGNORE_OUTPUT\nfrom datetime import datetime\n\nfrom pymagicc.core import MAGICC6, MAGICC7\n\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.style.use(\"ggplot\")", "Basic usage\nThe simplest option is to simply call the diagnose_tcr_ecs_tcre method of the MAGICC instance and read out the results.", "with MAGICC6() as magicc:\n # you can tweak whatever parameters you want in\n # MAGICC6/run/MAGCFG_DEFAULTALL.CFG, here's a few\n # examples that might be of interest\n results = magicc.diagnose_tcr_ecs_tcre(\n CORE_CLIMATESENSITIVITY=2.75,\n CORE_DELQ2XCO2=3.65,\n CORE_HEATXCHANGE_LANDOCEAN=1.5,\n )\nprint(\n \"TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}\".format(\n **results\n )\n)", "If we wish, we can alter the MAGICC instance's parameters before calling the diagnose_tcr_ecs method.", "with MAGICC6() as magicc:\n results_default = magicc.diagnose_tcr_ecs_tcre()\n results_low_ecs = magicc.diagnose_tcr_ecs_tcre(CORE_CLIMATESENSITIVITY=1.5)\n results_high_ecs = magicc.diagnose_tcr_ecs_tcre(\n CORE_CLIMATESENSITIVITY=4.5\n )\n\nprint(\n \"Default TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}\".format(\n **results_default\n )\n)\nprint(\n \"Low TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}\".format(\n **results_low_ecs\n )\n)\nprint(\n \"High TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}\".format(\n **results_high_ecs\n )\n)", "Making a plot\nThe output also includes the timeseries that were used in the diagnosis experiment. Hence we can use the output to make a plot.", "# NBVAL_IGNORE_OUTPUT\njoin_year = 1900\n\npdf = (\n results[\"timeseries\"]\n .filter(region=\"World\")\n .to_iamdataframe()\n .swap_time_for_year()\n .data\n)\nfor variable, df in pdf.groupby(\"variable\"):\n fig, axes = plt.subplots(1, 2, sharey=True, figsize=(16, 4.5))\n unit = df[\"unit\"].unique()[0]\n\n for scenario, scdf in df.groupby(\"scenario\"):\n scdf.plot(x=\"year\", y=\"value\", ax=axes[0], label=scenario)\n scdf.plot(x=\"year\", y=\"value\", ax=axes[1], label=scenario)\n\n axes[0].set_xlim([1750, join_year])\n axes[0].set_ylabel(\"{} ({})\".format(variable, unit))\n\n axes[1].set_xlim(left=join_year)\n axes[1].legend_.remove()\n\n fig.tight_layout()\n\n# NBVAL_IGNORE_OUTPUT\nresults[\"timeseries\"].filter(\n scenario=\"abrupt-2xCO2\", region=\"World\", year=range(1795, 1905)\n).timeseries()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oroszl/szamprob
notebooks/Package04/3D.ipynb
gpl-3.0
[ "3D ábrák\nA matplotlib csomag elsősorban 2D ábrák gyártására lett kitalálva. Ennek ellenére rendelkezik néhány 3D-s ábrakészítési függvénnyel is. Vizsgáljunk meg ebből párat! Ahhoz, hogy a 3D-s ábrázolási függvényeket el tudjuk érni, be kell tölteni a matplotlib csomag mpl_toolkits.mplot3d alcsomagját.", "%pylab inline \nfrom mpl_toolkits.mplot3d import * #3D-s ábrák alcsomagja\nfrom ipywidgets import * #interaktivitáshoz szükséges függvények", "Térbeli görbék, adathalmazok\nAhhoz hogy egy ábrát térben tudjunk megjeleníteni, fel kell készíteni a környezetet. A térbeli ábrák megjelenítése és azok tulajdonságainak beállítása kicsit körülményesebb a 2D-s ábráknál. A legszembetűnőbb különbség, hogy az ábrák úgynevezett axes (körül belül itt a koordinátatengelyekre kell gondolni...) objektumok köré csoportosulnak, s ezek tulajdonságaiként, illetve ezeken alkalmazott függvényekként jönnek létre maguk az ábrák. Példaképpen ábrázoljunk egy egszerű paraméteres térbeli görbét! Legyen ez a görbe a következő spirális függvény:\n\\begin{equation}\n\\mathbf{r}(t)=\\left(\\begin{array}{c}\n\\cos(3t)\\\n\\sin(3t)\\\nt\n\\end{array}\\right)\n\\end{equation}\nElőször is gyártsuk let a $t$ paraméter mintavételezési pontjait a $[0,2\\pi]$ intervallumban:", "t=linspace(0,2*pi,100) # 100 pont 0 és 2*pi között", "A következő kódcellában két dolog fog történni. Előszöris létrehozzuk az ax nevű axes objektumot, amelynek expliciten megadjuk, hogy 3D-s koordinátarendszer legyen. Illetve erre az objektumra hatva a plot függvénnyel létrehozzuk magát az ábrát. Figyeljük meg, hogy most a plot függvény háruom bemenő paramétert vár!", "ax=subplot(1,1,1,projection='3d') #térbeli koordináta tengely létrehozása\nax.plot(cos(3*t),sin(3*t),t)", "Ahogy a síkbeli ábráknál láttuk, a plot függvényt itt is használhatjuk rendezetlenül mintavételezett adatok ábrázolására is.", "ax=subplot(1,1,1,projection='3d')\nax.plot(rand(10),rand(10),rand(10),'o')", "A stílusdefiníciók a 2D ábrákhoz hasonló kulcsszavas argumentumok alapján dolgozódnak fel! Lássunk erre is egy példát:", "ax=subplot(1,1,1,projection='3d') #térbeli koordináta tengely létrehozása\nax.plot(cos(3*t),sin(3*t),t,color='green',linestyle='dashed',linewidth=3)", "Térbeli ábrák megjelenítése kapcsán rendszeresen felmerülő probléma, hogy jó irányból nézzünk rá az ábrára. Az ábra nézőpontjait a view_init függvény segítségével tudjuk megadni. A view_init két paramétere ekvatoriális gömbi koordinátarendszerben adja meg az ábra nézőpontját. A két bemenő paraméter a deklináció és az azimutszög fokban mérve. Például az $x$-tengely felől így lehet készíteni ábrát:", "ax=subplot(1,1,1,projection='3d') #térbeli koordináta tengely létrehozása\nax.plot(cos(3*t),sin(3*t),t)\nax.view_init(0,0)", "Az $y$-tengely felől pedig így:", "ax=subplot(1,1,1,projection='3d') #térbeli koordináta tengely létrehozása\nax.plot(cos(3*t),sin(3*t),t)\nax.view_init(0,90)", "Ha interaktív függvényeket használunk, akkor a nézőpontot az alábbiak szerint interaktívan tudjuk változtatni:", "\ndef forog(th,phi):\n ax=subplot(1,1,1,projection='3d')\n ax.plot(sin(3*t),cos(3*t),t)\n ax.view_init(th,phi)\n\ninteract(forog,th=(-90,90),phi=(0,360));", "Kétváltozós függvények és felületek\nA térbeli ábrák egyik előnye, hogy térbeli felületeket is meg tudunk jeleníteni. Ennek a legegyszerűbb esete a kétváltozós\n$$z=f(x,y)$$\n függvények magasságtérképszerű ábrázolása. Ahogy azt már megszoktuk, itt is az első feladat a mintavételezés és a függvény kiértékelése. 
Az alábbiakban vizsgáljuk meg a $$z=-[\\sin(x) ^{10} + \\cos(10 + y x) \\cos(x)]\\exp((-x^2-y^2)/4)$$ függvényt!", "x,y = meshgrid(linspace(-3,3,250),linspace(-5,5,250)) # mintavételezési pontok legyártása.\nz = -(sin(x) ** 10 + cos(10 + y * x) * cos(x))*exp((-x**2-y**2)/4) # függvény kiértékelés", "A plot_surface függvény segítségével jeleníthetjük meg ezt a függvényt.", "ax = subplot(111, projection='3d')\nax.plot_surface(x, y, z)", "Sokszor szemléletes a kirajzolódott felületet valamilyen színskála szerint színezni. Ezt a síkbeli ábráknál már megszokott módon a cmap kulcsszó segítségével tehetjük.", "ax = subplot(111, projection='3d')\nax.plot_surface(x, y, z,cmap='viridis')", "A térbeli felületek legáltalánosabb megadása kétparaméteres vektor értékű függvényekkel lehetséges. Azaz \n\\begin{equation}\n\\mathbf{r}(u,v)=\\left(\\begin{array}{c}\nf(u,v)\\\ng(u,v)\\\nh(u,v)\n\\end{array}\\right)\n\\end{equation}\nVizsgáljunk meg erre egy példát, ahol a megjeleníteni kívánt felület egy tórusz! A tórusz egy lehetséges paraméterezése a következő:\n\\begin{equation}\n\\mathbf{r}(\\theta,\\varphi)=\\left(\\begin{array}{c}\n(R_1 + R_2 \\cos \\theta) \\cos{\\varphi}\\\n(R_1 + R_2 \\cos \\theta) \\sin{\\varphi} \\\nR_2 \\sin \\theta\n\\end{array}\\right)\n\\end{equation}\nItt $R_1$ és $R_2$ a tórusz két sugarának paramétere, $\\theta$ és $\\varphi$ pedig mind a ketten a $[0,2\\pi]$ intervallumon futnak végig. Legyen $R_1=4$ és $R_2=1$. Rajzoljuk ki ezt a felületet! Első lépésként gyártsuk le az ábrázolandó felület pontjait:", "theta,phi=meshgrid(linspace(0,2*pi,250),linspace(0,2*pi,250))\nx=(4 + 1*cos(theta))*cos(phi)\ny=(4 + 1*cos(theta))*sin(phi) \nz=1*sin(theta)", "Ábrázolni ismét a plot_surface függvény segítségével tudunk:", "ax = subplot(111, projection='3d')\nax.plot_surface(x, y, z)", "A fenti ábrát egy kicsit arányosabbá tehetjük, ha a tengelyek megjelenítésének arányát, illetve a tengerek határait átállítjuk. Ezt a set_aspect, illetve a set_xlim, set_ylim és set_zlim függvények segítségével tehetjük meg:", "ax = subplot(111, projection='3d')\nax.plot_surface(x, y, z)\nax.set_aspect('equal');\nax.set_xlim(-5,5);\nax.set_ylim(-5,5);\nax.set_zlim(-5,5);", "Végül tegyük ezt az ábrát is interaktívvá:", "def forog(th,ph):\n ax = subplot(111, projection='3d')\n ax.plot_surface(x, y, z)\n ax.view_init(th,ph)\n ax.set_aspect('equal');\n ax.set_xlim(-5,5);\n ax.set_ylim(-5,5);\n ax.set_zlim(-5,5);\n\ninteract(forog,th=(-90,90),ph=(0,360));", "Erőterek 3D-ben\nTérbeli vektortereket, azaz olyan függvényeket, amelyek a tér minden pontjához egy háromdimenziós vektort rendelnek, a síkbeli ábrákhoz hasonlóan itt is a quiver parancs segítségével tudunk megjeleníteni. Az alábbi példában az egységgömb felületének 100 pontjába rajzolunk egy-egy radiális irányba mutató vektort:", "phiv,thv=(2*pi*rand(100),pi*rand(100)) #Ez a két sor a térbeli egység gömb \nxv,yv,zv=(cos(phiv)*sin(thv),sin(phiv)*sin(thv),cos(thv)) #100 véletlen pontját jelöli ki\nuv,vv,wv=(xv,yv,zv) #Ez pedig a megfelelő pontokhoz hozzá rendel egy egy radiális vektort\n\nax = subplot(111, projection='3d')\nax.quiver(xv, yv, zv, uv, vv, wv, length=0.3,color='darkcyan')\nax.set_aspect('equal')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
astro4dev/OAD-Data-Science-Toolkit
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/02. Variables, tipos y operaciones.ipynb
gpl-3.0
[ "Variables, tipos y operaciones\n\nEnteros, flotantes, cadenas, booleanos.\nOperaciones entre tipos\nNombres de variables", "a=2 # Declaro (defino) el valor de la variable a\n\na\n\na+a+a+a+a\n\na*5\n\na * 5\n\na*a\n\ntype(a)\n\nb=1.5\n\ntype(b)\n\nB=2.5\n\nprint(b,B)\n\nc='German '\n\ntype(c)\n\nc\n\nc+c+c+c+c\n\nc*5\n\nc*c\n\n1<5\n\n1 < 5\n\n1<0\n\nd=(1<5)\n\nd\n\nd = (1 < 5)\n\nd\n\ntype(d)\n\n2*1.2\n\nb*c\n\na*c\n\n2*False\n\n2*True\n\n2*False\n\na<b\n\ne=(a<b)\n\ne\n\na==b # Son iguales?\n\na=b # Re-defino el valor de a, y le asigno el que tiene b\n\nhoy='Martes'\nprint(hoy)\nhoy='Miércoles'\nprint(hoy)\nhoy='Jueves'\nprint(hoy)\nhoy=3\nprint(hoy)\n\nhoy = \"Viernes\"\nhoy = 32.5\nhoy = 19\nprint(hoy)", "¿Cómo podemos encontrar el tipo de una variable?\n\n(a) Usar la función print() para determinar el tipo mirando el resultado.\n(b) Usar la función type().\n(c) Usarla en una expresión y usar print() sobre el resultado.\n(d) Mirar el lugar donde se declaró la variable.\n\nSi a=\"10\" y b=\"Diez\", se puede decir que a y b:\n\n(a) Son del mismo tipo.\n(b) Se pueden multiplicar.\n(c) Son iguales.\n(d) Son de tipos distintos.", "a='10'\nb='Diez'\ntype(b)", "Operaciones entre tipos", "int(3.14)\n\nint(3.9999) # Redondea?\n\nint?\n\nint\n\nint(3.0)\n\nint(3)\n\nint(\"12\")\n\nint(\"twelve\")\n\nfloat(3)\n\nfloat?\n\nstr(3)\n\nstr(3.0)\n\nstr(int(2.9999))", "Nombres de variables", "hola=10\nhola\n\nmi variable=10\n\nmi_variable=10\n\nmi-variable=10\n\nmi.variable=10\n\nmi$variable=10\n\nvariable_1=34\n\n1_variable=34\n\npi=3.1315\n\ndef=10", "Palabras (keywords) que Python no deja usar como nombres para variables:\nand as assert break class continue\ndef del elif else except exec\nfinally for from global if import\nin is lambda nonlocal not or\npass raise return try while with\nyield True False None\n<img src=\"img/mc.jpg\">\nEste material fue recopilado para Clubes de Ciencia Colombia 2017 por Luis Henry Quiroga (GitHub: lhquirogan) - Germán Chaparro (GitHub: saint-germain), y fue traducido y adaptado de http://interactivepython.org/runestone/static/thinkcspy/index.html\nCopyright (C) Brad Miller, David Ranum, Jeffrey Elkner, Peter Wentworth, Allen B. Downey, Chris\nMeyers, and Dario Mitchell. Permission is granted to copy, distribute\nand/or modify this document under the terms of the GNU Free Documentation\nLicense, Version 1.3 or any later version published by the Free Software\nFoundation; with Invariant Sections being Forward, Prefaces, and\nContributor List, no Front-Cover Texts, and no Back-Cover Texts. A copy of\nthe license is included in the section entitled “GNU Free Documentation\nLicense”." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/launching_into_ml/solutions/2_first_model.ipynb
apache-2.0
[ "First BigQuery ML models for Taxifare Prediction\nLearning Objectives\n * Choose the correct BigQuery ML model type and specify options\n * Evaluate the performance of your ML model\n * Improve model performance through data quality cleanup\n * Create a Deep Neural Network (DNN) using SQL\nOverview\nIn this notebook, we will use BigQuery ML to build our first models for taxifare prediction.BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets. We'll start by creating a dataset to hold all the models we create in BigQuery.\nSet environment variables", "PROJECT = !gcloud config get-value project\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\n\n%env PROJECT=$PROJECT\n%env BUCKET=$BUCKET\n%env REGION=$REGION", "Create a BigQuery Dataset and Google Cloud Storage Bucket\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.", "%%bash\n\n# Create a BigQuery dataset for serverlessml if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w serverlessml)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\nelse\n echo \"Creating BigQuery dataset titled: serverlessml\"\n\n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:serverlessml\n echo \"\\nHere are your current datasets:\"\n bq ls\nfi \n\n# Create GCS bucket if it doesn't exist already...\nexists=$(gsutil ls -d | grep -w gs://${BUCKET}/)\n\nif [ -n \"$exists\" ]; then\n echo -e \"Bucket exists, let's not recreate it.\"\nelse\n echo \"Creating a new GCS bucket.\"\n gsutil mb -l ${REGION} gs://${BUCKET}\n echo \"\\nHere are your current buckets:\"\n gsutil ls\nfi", "Model 1: Raw data\nLet's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.\nThe model will take a minute or so to train. When it comes to ML, this is blazing fast.", "%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model1_rawdata\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1", "Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data:", "%%bigquery\nSELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "Let's report just the error we care about, the Root Mean Squared Error (RMSE)", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "We told you it was not going to be good! 
Recall that our heuristic got 8.13, and our target is $6.\nNote that the error is going to depend on the dataset that we evaluate it on.\nWe can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent).", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))", "Model 2: Apply data cleanup\nRecall that we did some data cleanup in the previous lab. Let's do those before training.\nThis is a dataset that we will need quite frequently in this notebook, so let's extract it first.", "%%bigquery\nCREATE OR REPLACE TABLE\n serverlessml.cleaned_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n\n%%bigquery\n-- LIMIT 0 is a free query, this allows us to check that the table exists.\nSELECT * FROM serverlessml.cleaned_training_data\nLIMIT 0\n\n%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model2_cleanup\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model2_cleanup)", "Model 3: More sophisticated models\nWhat if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:\nDNN\nTo create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.", "%%bigquery\n-- This training takes on the order of 15 minutes.\nCREATE OR REPLACE MODEL\n serverlessml.model3b_dnn\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='dnn_regressor', hidden_units=[32, 8]) AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn)", "Nice!\nEvaluate DNN on benchmark dataset\nLet's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. 
You can not compare two models unless you have run them on the same withheld data.", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers,\n 'unused' AS key\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))", "Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work.\nIn this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development.\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/nerc/cmip6/models/sandbox-3/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: SANDBOX-3\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:27\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. 
Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. 
Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. 
Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. 
Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ML4DS/ML4all
U2.SpectralClustering/.ipynb_checkpoints/SpecClustering-checkpoint.ipynb
mit
[ "Spectral Clustering Algorithms\nNotebook version: 1.1 (Nov 17, 2017)\n\nAuthor: Jesús Cid Sueiro (jcid@tsc.uc3m.es)\n Jerónimo Arenas García (jarenas@tsc.uc3m.es)\n\nChanges: v.1.0 - First complete version. \n v.1.1 - Python 3 version", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\n# use seaborn plotting defaults\nimport seaborn as sns; sns.set()\n\nfrom sklearn.cluster import KMeans\nfrom sklearn.datasets.samples_generator import make_blobs, make_circles\nfrom sklearn.utils import shuffle\nfrom sklearn.metrics.pairwise import rbf_kernel\nfrom sklearn.cluster import SpectralClustering\n\n# For the graph representation\nimport networkx as nx", "1. Introduction\nThe key idea of spectral clustering algorithms is to search for groups of connected data. I.e, rather than pursuing compact clusters, spectral clustering allows for arbitrary shape clusters.\nThis can be illustrated with two artifitial datasets that we will use along this notebook.\n1.1. Gaussian clusters:\nThe first one consists of 4 compact clusters generated from a Gaussian distribution. This is the kind of dataset that are best suited to centroid-based clustering algorithms like $K$-means. If the goal of the clustering algorithm is to minimize the intra-cluster distances and find a representative prototype or centroid for each cluster, $K$-means may be a good option.", "N = 300\nnc = 4\nXs, ys = make_blobs(n_samples=N, centers=nc,\n random_state=6, cluster_std=0.60, shuffle = False)\nX, y = shuffle(Xs, ys, random_state=0)\n\nplt.scatter(X[:, 0], X[:, 1], s=30);\nplt.axis('equal')\nplt.show()", "Note that we have computed two data matrices: \n\n${\\bf X}$, which contains the data points in an arbitray ordering\n${\\bf X}_s$, where samples are ordered by clusters, according to the cluster id array, ${\\bf y}$.\n\nNote that both matrices contain the same data (rows) but in different order. The sorted matrix will be useful later for illustration purposes, but keep in mind that, in a real clustering application, vector ${\\bf y}$ is unknown (learning is not supervised), and only a data matrix with an arbitrary ordering (like ${\\bf X}$) will be available. \n1.2. Concentric rings\nThe second dataset contains two concentric rings. One could expect from a clustering algorithm to identify two different clusters, one per each ring of points. If this is the case, $K$-means or any other algorithm focused on minimizing distances to some cluster centroids is not a good choice.", "X2s, y2s = make_circles(n_samples=N, factor=.5, noise=.05, shuffle=False)\nX2, y2 = shuffle(X2s, y2s, random_state=0)\nplt.scatter(X2[:, 0], X2[:, 1], s=30)\nplt.axis('equal')\nplt.show()", "Note, again, that we have computed both the sorted (${\\bf X}_{2s}$) and the shuffled (${\\bf X}_2$) versions of the dataset in the code above.\nExercise 1:\nUsing the code of the previous notebook, run the $K$-means algorithm with 4 centroids for the two datasets. 
In the light of your results, why do you think $K$-means does not work well for the second dataset?", "# <SOL>\nest = KMeans(n_clusters=4)\nclusters = est.fit_predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=clusters, s=30, cmap='rainbow')\nplt.axis('equal')\n\nclusters = est.fit_predict(X2)\nplt.figure()\nplt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=30, cmap='rainbow')\nplt.axis('equal')\nplt.show()\n# </SOL>", "Spectral clustering algorithms are focused on connectivity: clusters are determined by maximizing some measure of intra-cluster connectivity and minimizing some form of inter-cluster connectivity.\n2. The affinity matrix\n2.1. Similarity function\nTo implement a spectral clustering algorithm we must specify a similarity measure between data points. In this session, we will use the RBF kernel, which computes the similarity between ${\\bf x}$ and ${\\bf y}$ as:\n$$\\kappa({\\bf x},{\\bf y}) = \\exp(-\\gamma \\|{\\bf x}-{\\bf y}\\|^2)$$\nOther similarity functions can be used, like the kernel functions implemented in Scikit-learn (see the <a href=http://scikit-learn.org/stable/modules/metrics.html> metrics </a> module).\n2.2. Affinity matrix\nFor a dataset ${\\cal S} = \\{{\\bf x}^{(0)},\\ldots,{\\bf x}^{(N-1)}\\}$, the $N\\times N$ affinity matrix ${\\bf K}$ contains the similarity measure between each pair of samples. Thus, its components are\n$$K_{ij} = \\kappa\\left({\\bf x}^{(i)}, {\\bf x}^{(j)}\\right)$$\nThe following fragment of code computes the affinity matrix, i.e. the similarities between all pairs of points in the dataset.", "gamma = 0.5\nK = rbf_kernel(X, X, gamma=gamma)", "2.3. Visualization\nWe can visualize the affinity matrix as an image, by translating component values into pixel colors or intensities.", "plt.imshow(K, cmap='hot')\nplt.colorbar()\nplt.title('RBF Affinity Matrix for gamma = ' + str(gamma))\nplt.grid('off')\nplt.show()", "Despite the apparent randomness of the affinity matrix, it contains some hidden structure, that we can uncover by visualizing the affinity matrix computed with the sorted data matrix, ${\\bf X}_s$.", "Ks = rbf_kernel(Xs, Xs, gamma=gamma)\n\nplt.imshow(Ks, cmap='hot')\nplt.colorbar()\nplt.title('RBF Affinity Matrix for gamma = ' + str(gamma))\nplt.grid('off')\nplt.show()", "Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns.\nFor this dataset, the sorted affinity matrix is almost block diagonal. Note, also, that the block-wise form of this matrix depends on parameter $\\gamma$.\nExercise 2:\nModify the selection of $\\gamma$, and check the effect of this on the appearance of the sorted similarity matrix. Write down the values for which you consider that the structure of the matrix better resembles the number of clusters in the datasets.\nOutside the diagonal blocks, similarities are close to zero. We can enforce a block diagonal structure by setting the small similarity values to zero. \nFor instance, by thresholding ${\\bf K}_s$ with threshold $t$, we get the truncated (and sorted) affinity matrix\n$$\n\\overline{K}_{s,ij} = K_{s,ij} \\cdot \\text{u}(K_{s,ij} - t)\n$$\n(where $\\text{u}()$ is the step function), which is block diagonal.\nExercise 3:\nCompute the truncated and sorted affinity matrix with $t=0.001$", "t = 0.001\n# Kt = <FILL IN> # Truncated affinity matrix\nKt = K*(K>t) # Truncated affinity matrix\n# Kst = <FILL IN> # Truncated and sorted affinity matrix\nKst = Ks*(Ks>t) # Truncated and sorted affinity matrix\n# </SOL>", "3. 
Affinity matrix and data graph\nAny similarity matrix defines a weighted graph in such a way that the weight of the edge linking ${\\bf x}^{(i)}$ and ${\\bf x}^{(j)}$ is $K_{ij}$.\nIf $K$ is a full matrix, the graph is fully connected (there is an edge connecting every pair of nodes). But we can get a more interesting sparse graph by setting to zero the edges with small weights. \nFor instance, let us visualize the graph for the truncated affinity matrix $\\overline{\\bf K}$ with threshold $t$. You can also check the effect of increasing or decreasing $t$.", "G = nx.from_numpy_matrix(Kt)\ngraphplot = nx.draw(G, X, node_size=40, width=0.5)\nplt.axis('equal')\nplt.show()", "Note that, for this dataset, the graph only contains edges joining points from the same cluster. Therefore, the number of diagonal blocks in $\\overline{\\bf K}_s$ is equal to the number of connected components in the graph.\nNote, also, that the graph does not depend on the sample ordering in the data matrix: the graphs for any matrix ${\\bf K}$ and its sorted version ${\\bf K}_s$ are the same.\n4. The Laplacian matrix\nThe <a href = https://en.wikipedia.org/wiki/Laplacian_matrix>Laplacian matrix</a> of a given affinity matrix ${\\bf K}$ is given by\n$${\\bf L} = {\\bf D} - {\\bf K}$$\nwhere ${\\bf D}$ is the diagonal degree matrix given by\n$$D_{ii}=\\sum^{n}_{j} K_{ij}$$\n4.1. Properties of the Laplacian matrix\nThe Laplacian matrix of any symmetric matrix ${\\bf K}$ has several interesting properties:\nP1.\n\n${\\bf L}$ is symmetric and positive semidefinite. Therefore, all its eigenvalues $\\lambda_0,\\ldots, \\lambda_{N-1}$ are non-negative. Recall that each eigenvector ${\\bf v}$ with eigenvalue $\\lambda$ satisfies\n$${\\bf L} \\cdot {\\bf v} = \\lambda {\\bf v}$$\n\nP2.\n\n${\\bf L}$ has at least one eigenvector with zero eigenvalue: indeed, for ${\\bf v} = {\\bf 1}_N = (1, 1, \\ldots, 1)^\\intercal$ we get\n$${\\bf L} \\cdot {\\bf 1}_N = {\\bf 0}_N$$\nwhere ${\\bf 0}_N$ is the $N$ dimensional all-zero vector.\n\nP3.\n\nIf ${\\bf K}$ is block diagonal, its Laplacian is block diagonal.\n\nP4.\n\nIf ${\\bf L}$ is block diagonal with blocks ${\\bf L}_0, {\\bf L}_1, \\ldots, {\\bf L}_{c-1}$, then it has at least $c$ orthogonal eigenvectors with zero eigenvalue: indeed, each block ${\\bf L}_i$ is the Laplacian matrix of the graph containing the samples in the $i$-th connected component, therefore, according to property P2,\n$${\\bf L}_i \\cdot {\\bf 1}_{N_i} = {\\bf 0}_{N_i}$$\nwhere $N_i$ is the number of samples in the $i$-th connected component.\nTherefore, if $${\\bf v}_i = \\left(\\begin{array}{l} \n{\\bf 0}_{N_0} \\\\\n\\vdots \\\\\n{\\bf 0}_{N_{i-1}} \\\\\n{\\bf 1}_{N_i} \\\\\n{\\bf 0}_{N_{i+1}} \\\\\n\\vdots \\\\\n{\\bf 0}_{N_{c-1}}\n\\end{array}\n\\right)\n$$ \nthen\n$${\\bf L} \\cdot {\\bf v}_{i} = {\\bf 0}_{N}$$\n\nWe can compute the Laplacian matrix for the given dataset and visualize the eigenvalues:", "Dst = np.diag(np.sum(Kst, axis=1))\nLst = Dst - Kst\n\n# Next, we compute the eigenvalues of the matrix\nw = np.linalg.eigvalsh(Lst)\nplt.figure()\nplt.plot(w, marker='.');\nplt.title('Eigenvalues of the matrix')\nplt.show()", "Exercise 4:\nVerify that ${\\bf 1}_N$ is an eigenvector with zero eigenvalue. 
To do so, compute ${\\bf L}_{st} \\cdot {\\bf 1}_N$ and verify that its <a href= https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html>Euclidean norm</a> is close to zero (it may not be exactly zero due to finite precision errors).\nVerify that vectors ${\\bf v}_i$ defined above (that you can compute using vi = (ys==i)) also have zero eigenvalue.", "# <SOL>\nprint(np.linalg.norm(Lst.dot(np.ones((N,1)))))\nfor i in range(nc):\n vi = (ys==i)\n print(np.linalg.norm(Lst.dot(vi)))\n# </SOL>", "Exercise 5:\nVerify that the spectral properties of the Laplacian matrix computed from ${\\bf K}_{st}$ still apply using the unsorted matrix, ${\\bf K}_t$: compute ${\\bf L}_{t} \\cdot {\\bf v}'_{i}$, where ${\\bf v}'_i$ is a binary vector with components equal to 1 at the positions corresponding to samples in cluster $i$ (that you can compute using vi = (y==i)), and verify that its Euclidean norm is close to zero.", "# <SOL>\nDt = np.diag(np.sum(Kt, axis=1))\nLt = Dt - Kt\nprint(np.linalg.norm(Lt.dot(np.ones((N,1)))))\nfor i in range(nc):\n vi = (y==i)\n print(np.linalg.norm(Lt.dot(vi)))\n# </SOL>", "Note that the position of 1's in eigenvectors ${\\bf v}_i$ points out the samples in the $i$-th connected component. This suggests the following tentative clustering algorithm:\n\n1. Compute the affinity matrix.\n2. Compute the Laplacian matrix.\n3. Compute $c$ orthogonal eigenvectors with zero eigenvalue.\n4. If $v_{in}=1$, assign ${\\bf x}^{(n)}$ to cluster $i$. \n\nThis is the underlying idea of some spectral clustering algorithms. In this precise form, this algorithm does not usually work, for several reasons that we will discuss next, but with some modifications it becomes a powerful method.\n4.2. Computing eigenvectors of the Laplacian Matrix\nOne of the reasons why the algorithm above may not work is that vectors ${\\bf v}'_0, \\ldots,{\\bf v}'_{c-1}$ are not the only zero eigenvectors of ${\\bf L}_t$: any linear combination of them is also a zero eigenvector. Eigenvector computation algorithms may return a different set of orthogonal eigenvectors.\nHowever, one can expect each eigenvector to have similar components at the positions corresponding to samples in the same connected component.", "wst, vst = np.linalg.eigh(Lst)\n\nfor n in range(nc):\n plt.plot(vst[:,n], '.-')", "4.3. Non-block-diagonal matrices\nAnother reason to modify our tentative algorithm is that, in more realistic cases, the affinity matrix may have an imperfect block diagonal structure. In such cases, the smallest eigenvalues may be nonzero and the eigenvectors may not be exactly piecewise constant.\nExercise 6:\nPlot the eigenvector profile for the shuffled and not thresholded affinity matrix, ${\\bf K}$.", "# <SOL>\nD = np.diag(np.sum(K, axis=1))\nL = D - K\nw, v = np.linalg.eigh(L)\nfor n in range(nc):\n plt.plot(v[:,n], '.-')\n# </SOL>", "Note that, although the eigenvector components cannot be used as a straightforward cluster indicator, they are strongly informative of the clustering structure. \n\nAll points in the same cluster have similar values of the corresponding eigenvector components $(v_{n0}, \\ldots, v_{n,c-1})$.\nPoints from different clusters have different values of the corresponding eigenvector components $(v_{n0}, \\ldots, v_{n,c-1})$.\n\nTherefore, we can define vectors ${\\bf z}^{(n)} = (v_{n0}, \\ldots, v_{n,c-1})$ and apply a centroid-based algorithm (like $K$-means) to identify all points with similar eigenvector components. 
The corresponding samples in ${\\bf X}$ become the final clusters of the spectral clustering algorithm. \nOne possible way to identify the cluster structure is, thus, to apply a $K$-means algorithm over the eigenvector coordinates. The steps of the resulting spectral clustering algorithm are summarized next.\n5. A spectral clustering (graph cutting) algorithm\n5.1. The steps of the spectral clustering algorithm.\nSummarizing, the steps of the spectral clustering algorithm for a data matrix ${\\bf X}$ are the following:\n\n1. Compute the affinity matrix, ${\\bf K}$. Optionally, truncate the smallest components to zero.\n2. Compute the Laplacian matrix, ${\\bf L}$.\n3. Compute the $c$ orthogonal eigenvectors with smallest eigenvalues, ${\\bf v}_0,\\ldots,{\\bf v}_{c-1}$.\n4. Construct the sample set ${\\bf Z}$ with rows ${\\bf z}^{(n)} = (v_{0n}, \\ldots, v_{c-1,n})$.\n5. Apply the $K$-means algorithm over ${\\bf Z}$ with $K=c$ centroids.\n6. Assign samples in ${\\bf X}$ to clusters: if ${\\bf z}^{(n)}$ is assigned by $K$-means to cluster $i$, assign sample ${\\bf x}^{(n)}$ in ${\\bf X}$ to cluster $i$.\n\nExercise 7:\nIn this exercise we will apply the spectral clustering algorithm to the two-rings dataset ${\\bf X}_2$, using $\\gamma = 20$, $t=0.1$ and $c = 2$ clusters.\n\nComplete step 1, and plot the graph induced by ${\\bf K}$", "# <SOL>\ng = 20\nt = 0.1\nK2 = rbf_kernel(X2, X2, gamma=g)\nK2t = K2*(K2>t)\nG2 = nx.from_numpy_matrix(K2t)\ngraphplot = nx.draw(G2, X2, node_size=40, width=0.5)\nplt.axis('equal')\nplt.show()\n# </SOL>", "Complete steps 2, 3 and 4, and draw a scatter plot of the samples in ${\\bf Z}$", "# <SOL>\nD2t = np.diag(np.sum(K2t, axis=1))\nL2t = D2t - K2t\nw2t, v2t = np.linalg.eigh(L2t)\nZ2t = v2t[:,0:2]\n\nplt.scatter(Z2t[:,0], Z2t[:,1], s=20)\nplt.show()\n# </SOL>", "Complete step 5", "est = KMeans(n_clusters=2)\nclusters = est.fit_predict(Z2t)", "Finally, complete step 6 and show, in a scatter plot, the result of the clustering algorithm", "plt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=50, cmap='rainbow')\nplt.axis('equal')\nplt.show()", "5.2. Scikit-learn implementation.\nThe <a href=http://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html> spectral clustering algorithm </a> in Scikit-learn requires the number of clusters to be specified. It works well for a small number of clusters, but it is not advised for a large number of clusters and/or samples.\nFinally, we are going to run spectral clustering on both datasets. Spend a few minutes figuring out the meaning of the parameters of the SpectralClustering implementation of Scikit-learn:\nhttp://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html\nNote that there is no parameter equivalent to our threshold $t$, which has been useful for the graph representations. However, playing with $\\gamma$ should be enough to get a good clustering.\nThe following piece of code executes the algorithm with an 'rbf' kernel. You can manually adjust the number of clusters and the parameter of the kernel to study the behavior of the algorithm. 
When you are done, you can also:\n\nModify the code to allow for kernels other than 'rbf'\nRepeat the analysis for the second dataset (two_rings)", "n_clusters = 4\ngamma = .1 # Warning do not exceed gamma=100\nSpClus = SpectralClustering(n_clusters=n_clusters,affinity='rbf',\n gamma=gamma)\nSpClus.fit(X)\n\nplt.scatter(X[:, 0], X[:, 1], c=SpClus.labels_.astype(np.int), s=50, \n cmap='rainbow')\nplt.axis('equal')\nplt.show()\n\nnc = 2\ngamma = 50 #Warning do not exceed gamma=300\n\nSpClus = SpectralClustering(n_clusters=nc, affinity='rbf', gamma=gamma)\nSpClus.fit(X2)\n\nplt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(np.int), s=50, \n cmap='rainbow')\nplt.axis('equal')\nplt.show()\n\nnc = 5\nSpClus = SpectralClustering(n_clusters=nc, affinity='nearest_neighbors')\nSpClus.fit(X2)\n\nplt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(np.int), s=50, \n cmap='rainbow')\nplt.axis('equal')\nplt.show()", "5.3. Other clustering algorithms.\n5.3.1. Agglomerative Clustering algorithms\nBottom-up approach:\n\nAt the beginning, each data point is a different cluster\nAt each step of the algorithm, two clusters are merged according to a certain performance criterion\nAt the end of the algorithm, all points belong to the root node\n\nIn practice, this creates a hierarchical tree that can be visualized with a dendrogram. We can cut the tree at different levels, in each case obtaining a different number of clusters.\n<img src=https://www.mathworks.com/help/stats/dendrogram_partial.png> \nCriteria for merging clusters\nWe merge the two closest clusters, where the distance between clusters is defined as:\n\nSingle: Minimum distance between any two points, one from each cluster\nComplete: Maximum distance between any two points, one from each cluster\nAverage: Average distance between pairs of points, one from each cluster\nCentroid: Distance between the (Euclidean) centroids of both clusters\nWard: We merge the two clusters for which the overall increase of the within-cluster variance is minimum. \n\nPython implementations\nHierarchical clustering may lead to clusters of very different sizes. Complete linkage is the worst strategy in this respect, while Ward gives the most regular sizes. However, the affinity (or distance) used in clustering cannot be varied with Ward, so for non-Euclidean metrics average linkage is a good alternative. \nThere are at least three different implementations of the algorithm:\n\nScikit-learn: Only implements the 'complete', 'ward', and 'average' linkage methods. Allows for the definition of connectivity constraints\nScipy\nfastcluster: Similar to Scipy, but more efficient with respect to computation and memory." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
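The six-step recipe in the notebook above can be collapsed into one short script. The sketch below is a stand-alone illustration, not the notebook's own solution cells: it generates a synthetic two-ring dataset in place of X2, and the values of gamma and the threshold t are illustrative stand-ins borrowed from Exercise 7 rather than tuned.

```python
# Stand-alone sketch of the six-step spectral clustering recipe, on synthetic
# two-ring data. X2, gamma and t from the notebook are replaced by stand-ins.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import KMeans

X, _ = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)
c, gamma, t = 2, 20.0, 0.1

K = rbf_kernel(X, X, gamma=gamma)     # step 1: affinity matrix
Kt = K * (K > t)                      #         optional thresholding
D = np.diag(Kt.sum(axis=1))
L = D - Kt                            # step 2: graph Laplacian

w, v = np.linalg.eigh(L)              # step 3: eigh sorts eigenvalues in ascending order
Z = v[:, :c]                          # step 4: spectral coordinates z^(n)

labels = KMeans(n_clusters=c, n_init=10, random_state=0).fit_predict(Z)  # steps 5-6
print(np.bincount(labels))            # two groups, one per ring
```

Because eigh returns the eigenvalues in ascending order, the first c columns of v are exactly the eigenvectors with the smallest eigenvalues, so they can be used as the rows of Z directly.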
Diyago/Machine-Learning-scripts
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
apache-2.0
[ "Face Generation\nIn this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate new images of faces that look as realistic as possible!\nThe project will be broken down into a series of tasks from loading in data to defining and training adversarial networks. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.\nGet the Data\nYou'll be using the CelebFaces Attributes Dataset (CelebA) to train your adversarial networks.\nThis dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training.\nPre-processed Data\nSince the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is show below.\n<img src='assets/processed_face_data.png' width=60% />\n\nIf you are working locally, you can download this data by clicking here\n\nThis is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data processed_celeba_small/", "# can comment out after executing\n!unzip processed_celeba_small.zip\n\ndata_dir = 'processed_celeba_small/'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle as pkl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport problem_unittests as tests\n#import helper\n\n%matplotlib inline", "Visualize the CelebA Data\nThe CelebA dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with 3 color channels (RGB) each.\nPre-process and Load the Data\nSince the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. 
This pre-processed dataset is a smaller subset of the very large CelebA data.\n\nThere are a few other steps that you'll need to transform this data and create a DataLoader.\n\nExercise: Complete the following get_dataloader function, such that it satisfies these requirements:\n\nYour images should be square, Tensor images of size image_size x image_size in the x and y dimension.\nYour function should return a DataLoader that shuffles and batches these Tensor images.\n\nImageFolder\nTo create a dataset given a directory of images, it's recommended that you use PyTorch's ImageFolder wrapper, with a root directory processed_celeba_small/ and data transformation passed in.", "# necessary imports\nimport torch\nfrom torchvision import datasets\nfrom torchvision import transforms\n\ndef get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):\n \"\"\"\n Batch the neural network data using DataLoader\n :param batch_size: The size of each batch; the number of images in a batch\n :param img_size: The square size of the image data (x, y)\n :param data_dir: Directory where image data is located\n :return: DataLoader with batched data\n \"\"\"\n \n # TODO: Implement function and return a dataloader\n \n return None\n", "Create a DataLoader\nExercise: Create a DataLoader celeba_train_loader with appropriate hyperparameters.\nCall the above function and create a dataloader to view images. \n* You can decide on any reasonable batch_size parameter\n* Your image_size must be 32. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!", "# Define function hyperparameters\nbatch_size = \nimg_size = \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# Call your function and get a dataloader\nceleba_train_loader = get_dataloader(batch_size, img_size)\n", "Next, you can view some images! You should seen square images of somewhat-centered faces.\nNote: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested imshow code is below, but it may not be perfect.", "# helper display function\ndef imshow(img):\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# obtain one batch of training images\ndataiter = iter(celeba_train_loader)\nimages, _ = dataiter.next() # _ for no labels\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(20, 4))\nplot_size=20\nfor idx in np.arange(plot_size):\n ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])\n imshow(images[idx])", "Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1\nYou need to do a bit of pre-processing; you know that the output of a tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)", "# TODO: Complete the scale function\ndef scale(x, feature_range=(-1, 1)):\n ''' Scale takes in an image x and returns that image, scaled\n with a feature_range of pixel values from -1 to 1. 
\n This function assumes that the input x is already scaled from 0-1.'''\n # assume x is scaled to (0, 1)\n # scale to feature_range and return scaled x\n \n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# check scaled range\n# should be close to -1 to 1\nimg = images[0]\nscaled_img = scale(img)\n\nprint('Min: ', scaled_img.min())\nprint('Max: ', scaled_img.max())", "Define the Model\nA GAN is comprised of two adversarial networks, a discriminator and a generator.\nDiscriminator\nYour first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with normalization. You are also allowed to create any helper functions that may be useful.\nExercise: Complete the Discriminator class\n\nThe inputs to the discriminator are 32x32x3 tensor images\nThe output should be a single value that will indicate whether a given image is real or fake", "import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Discriminator(nn.Module):\n\n def __init__(self, conv_dim):\n \"\"\"\n Initialize the Discriminator Module\n :param conv_dim: The depth of the first convolutional layer\n \"\"\"\n super(Discriminator, self).__init__()\n\n # complete init function\n \n\n def forward(self, x):\n \"\"\"\n Forward propagation of the neural network\n :param x: The input to the neural network \n :return: Discriminator logits; the output of the neural network\n \"\"\"\n # define feedforward behavior\n \n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_discriminator(Discriminator)", "Generator\nThe generator should upsample an input and generate a new image of the same size as our training data 32x32x3. This should be mostly transpose convolutional layers with normalization applied to the outputs.\nExercise: Complete the Generator class\n\nThe inputs to the generator are vectors of some length z_size\nThe output should be a image of shape 32x32x3", "class Generator(nn.Module):\n \n def __init__(self, z_size, conv_dim):\n \"\"\"\n Initialize the Generator Module\n :param z_size: The length of the input latent vector, z\n :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer\n \"\"\"\n super(Generator, self).__init__()\n\n # complete init function\n \n\n def forward(self, x):\n \"\"\"\n Forward propagation of the neural network\n :param x: The input to the neural network \n :return: A 32x32x3 Tensor image as output\n \"\"\"\n # define feedforward behavior\n \n return x\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_generator(Generator)", "Initialize the weights of your networks\nTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. 
From reading the original DCGAN paper, they say:\n\nAll weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.\n\nSo, your next task will be to define a weight initialization function that does just this!\nYou can refer back to the lesson on weight initialization or even consult existing model code, such as that from the networks.py file in CycleGAN Github repository to help you complete this function.\nExercise: Complete the weight initialization function\n\nThis should initialize only convolutional and linear layers\nInitialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.\nThe bias terms, if they exist, may be left alone or set to 0.", "def weights_init_normal(m):\n \"\"\"\n Applies initial weights to certain layers in a model .\n The weights are taken from a normal distribution \n with mean = 0, std dev = 0.02.\n :param m: A module or layer in a network \n \"\"\"\n # classname will be something like:\n # `Conv`, `BatchNorm2d`, `Linear`, etc.\n classname = m.__class__.__name__\n \n # TODO: Apply initial weights to convolutional and linear layers\n \n ", "Build complete network\nDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ndef build_network(d_conv_dim, g_conv_dim, z_size):\n # define discriminator and generator\n D = Discriminator(d_conv_dim)\n G = Generator(z_size=z_size, conv_dim=g_conv_dim)\n\n # initialize model weights\n D.apply(weights_init_normal)\n G.apply(weights_init_normal)\n\n print(D)\n print()\n print(G)\n \n return D, G\n", "Exercise: Define model hyperparameters", "# Define model hyperparams\nd_conv_dim = \ng_conv_dim = \nz_size = \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nD, G = build_network(d_conv_dim, g_conv_dim, z_size)", "Training on GPU\nCheck if you can train on GPU. Here, we'll set this as a boolean variable train_on_gpu. Later, you'll be responsible for making sure that \n\n\nModels,\nModel inputs, and\nLoss function arguments\n\n\nAre moved to GPU, where appropriate.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\n\n# Check for a GPU\ntrain_on_gpu = torch.cuda.is_available()\nif not train_on_gpu:\n print('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Training on GPU!')", "Discriminator and Generator Losses\nNow we need to calculate the losses for both types of adversarial networks.\nDiscriminator Losses\n\n\nFor the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss. \nRemember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\n\n\nGenerator Loss\nThe generator loss will look similar only with flipped labels. 
The generator's goal is to get the discriminator to think its generated images are real.\nExercise: Complete real and fake loss functions\nYou may choose to use either cross entropy or a least squares error loss to complete the following real_loss and fake_loss functions.", "def real_loss(D_out):\n '''Calculates how close discriminator outputs are to being real.\n param, D_out: discriminator logits\n return: real loss'''\n loss = \n return loss\n\ndef fake_loss(D_out):\n '''Calculates how close discriminator outputs are to being fake.\n param, D_out: discriminator logits\n return: fake loss'''\n loss = \n return loss", "Optimizers\nExercise: Define optimizers for your Discriminator (D) and Generator (G)\nDefine optimizers for your models with appropriate hyperparameters.", "import torch.optim as optim\n\n# Create optimizers for the discriminator D and generator G\nd_optimizer = \ng_optimizer = ", "Training\nTraining will involve alternating between training the discriminator and the generator. You'll use your functions real_loss and fake_loss to help you calculate the discriminator losses.\n\nYou should train the discriminator by alternating on real and fake images\nThen the generator, which tries to trick the discriminator and should have an opposing loss function\n\nSaving Samples\nYou've been given some code to print out some loss statistics and save some generated \"fake\" samples.\nExercise: Complete the training function\nKeep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.", "def train(D, G, n_epochs, print_every=50):\n '''Trains adversarial networks for some number of epochs\n param, D: the discriminator network\n param, G: the generator network\n param, n_epochs: number of epochs to train for\n param, print_every: when to print and record the models' losses\n return: D and G losses'''\n \n # move models to GPU\n if train_on_gpu:\n D.cuda()\n G.cuda()\n\n # keep track of loss and generated, \"fake\" samples\n samples = []\n losses = []\n\n # Get some fixed data for sampling. These are images that are held\n # constant throughout training, and allow us to inspect the model's performance\n sample_size=16\n fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))\n fixed_z = torch.from_numpy(fixed_z).float()\n # move z to GPU if available\n if train_on_gpu:\n fixed_z = fixed_z.cuda()\n\n # epoch training loop\n for epoch in range(n_epochs):\n\n # batch training loop\n for batch_i, (real_images, _) in enumerate(celeba_train_loader):\n\n batch_size = real_images.size(0)\n real_images = scale(real_images)\n\n # ===============================================\n # YOUR CODE HERE: TRAIN THE NETWORKS\n # ===============================================\n \n # 1. Train the discriminator on real and fake images\n d_loss = \n\n # 2. 
Train the generator with an adversarial loss\n g_loss = \n \n \n # ===============================================\n # END OF YOUR CODE\n # ===============================================\n\n # Print some loss stats\n if batch_i % print_every == 0:\n # append discriminator loss and generator loss\n losses.append((d_loss.item(), g_loss.item()))\n # print discriminator and generator loss\n print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(\n epoch+1, n_epochs, d_loss.item(), g_loss.item()))\n\n\n ## AFTER EACH EPOCH## \n # this code assumes your generator is named G, feel free to change the name\n # generate and save sample, fake images\n G.eval() # for generating samples\n samples_z = G(fixed_z)\n samples.append(samples_z)\n G.train() # back to training mode\n\n # Save training generator samples\n with open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n # finally return losses\n return losses", "Set your number of training epochs and train your GAN!", "# set number of epochs \nn_epochs = \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# call training function\nlosses = train(D, G, n_epochs=n_epochs)", "Training loss\nPlot the training losses for the generator and discriminator, recorded after each epoch.", "fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()", "Generator samples from training\nView samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.", "# helper function for viewing a list of passed in sample images\ndef view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n img = img.detach().cpu().numpy()\n img = np.transpose(img, (1, 2, 0))\n img = ((img + 1)*255 / (2)).astype(np.uint8)\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((32,32,3)))\n\n# Load samples from generator, taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)\n\n_ = view_samples(-1, samples)", "Question: What do you notice about your generated samples and how might you improve this model?\nWhen you answer this question, consider the following factors:\n* The dataset is biased; it is made of \"celebrity\" faces that are mostly white\n* Model size; larger models have the opportunity to learn more features in a data feature space\n* Optimization strategy; optimizers and number of epochs affect your final result\nAnswer: (Write your answer in this cell)\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_face_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
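Several cells in the notebook above are intentionally left as TODO stubs for the student. As a point of comparison only, here is one possible way to fill in two of the smaller helpers, the pixel-scaling function and the weight initialisation described in the text (zero-centered normal, standard deviation 0.02). This is a sketch assuming PyTorch as imported in the notebook, not the course's reference solution.

```python
# Illustrative implementations of two TODO helpers; not the reference solution.
import torch.nn as nn

def scale(x, feature_range=(-1, 1)):
    """Rescale a tensor image from [0, 1] to feature_range (default (-1, 1))."""
    lo, hi = feature_range
    return x * (hi - lo) + lo

def weights_init_normal(m):
    """Initialise conv/linear weights from a normal with mean 0, std 0.02; zero the biases."""
    classname = m.__class__.__name__
    if 'Conv' in classname or 'Linear' in classname:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
        if getattr(m, 'bias', None) is not None:
            nn.init.constant_(m.bias.data, 0.0)
```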
gjwo/nilm_gjw_data
notebooks/disaggregation-hart-CO-active_only.ipynb
apache-2.0
[ "Disaggregation - Hart Active data only\nCustomary imports", "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nfrom os.path import join\nfrom pylab import rcParams\nimport matplotlib.pyplot as plt\nrcParams['figure.figsize'] = (13, 6)\nplt.style.use('ggplot')\n#import nilmtk\nfrom nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore\nfrom nilmtk.disaggregate.hart_85 import Hart85\nfrom nilmtk.disaggregate import CombinatorialOptimisation\nfrom nilmtk.utils import print_dict, show_versions\nfrom nilmtk.metrics import f1_score\n#import seaborn as sns\n#sns.set_palette(\"Set3\", n_colors=12)\n\nimport warnings\nwarnings.filterwarnings(\"ignore\") #suppress warnings, comment out if warnings required", "show versions for any diagnostics", "#uncomment if required\n#show_versions()\n", "Load dataset", "data_dir = '/Users/GJWood/nilm_gjw_data/HDF5/'\ngjw = DataSet(join(data_dir, 'nilm_gjw_data.hdf5'))\nprint('loaded ' + str(len(gjw.buildings)) + ' buildings')\nbuilding_number=1", "Let us perform our analysis on selected 2 days", "gjw.store.window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00')\ngjw.set_window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00')\nelec = gjw.buildings[building_number].elec\nmains = elec.mains()\nmains.plot()\n#plt.show()\n\nhouse = elec['fridge'] #only one meter so any selection will do\ndf = house.load().next() #load the first chunk of data into a dataframe\n#df.info() #check that the data is what we want (optional)\n#note the data has two columns and a time index\n\n\n#df.head()\n\n#df.tail()\n\n#df.plot()\n#plt.show()", "Hart Training\nWe'll now do the training from the aggregate data. The algorithm segments the time series data into steady and transient states. Thus, we'll first figure out the transient and the steady states. Next, we'll try and pair the on and the off transitions based on their proximity in time and value.", "#df.ix['2015-09-03 11:00:00+01:00':'2015-09-03 12:00:00+01:00'].plot()# select a time range and plot it\n#plt.show()\n\nh = Hart85()\nh.train(mains,cols=[('power','active')])\n\n\nh.steady_states\n\n\nax = mains.plot()\nh.steady_states['active average'].plot(style='o', ax = ax);\nplt.ylabel(\"Power (W)\")\nplt.xlabel(\"Time\");\n#plt.show()", "Hart Disaggregation", "disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')\noutput = HDFDataStore(disag_filename, 'w')\nh.disaggregate(mains,output,sample_period=1)\noutput.close()\n\ndisag_hart = DataSet(disag_filename)\ndisag_hart\n\ndisag_hart_elec = disag_hart.buildings[building_number].elec\ndisag_hart_elec", "Combinatorial Optimisation training", "co = CombinatorialOptimisation()\nco.train(mains,cols=[('power','active')])\n\n\nco.steady_states\n\nax = mains.plot()\nco.steady_states['active average'].plot(style='o', ax = ax);\nplt.ylabel(\"Power (W)\")\nplt.xlabel(\"Time\");\n\ndisag_filename = join(data_dir, 'disag_gjw_co.hdf5')\noutput = HDFDataStore(disag_filename, 'w')\nco.disaggregate(mains,output,sample_period=1)\noutput.close()", "Can't use because no test data for comparison", "from nilmtk.metrics import f1_score\nf1_hart= f1_score(disag_hart_elec, test_elec)\nf1_hart.index = disag_hart_elec.get_labels(f1_hart.index)\nf1_hart.plot(kind='barh')\nplt.ylabel('appliance');\nplt.xlabel('f-score');\nplt.title(\"Hart\");" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
omoju/Fundamentals
Data/data_Stats_3_ChanceModels.ipynb
gpl-3.0
[ "Data\nChance Models", "%pylab inline\n\n# Import libraries\nfrom __future__ import absolute_import, division, print_function\n\n# Ignore warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport sys\nsys.path.append('tools/')\n\nimport numpy as np\nimport pandas as pd\nimport math\n\n# Graphing Libraries\nimport matplotlib.pyplot as pyplt\nimport seaborn as sns\nsns.set_style(\"white\") \n\n# Configure for presentation\nnp.set_printoptions(threshold=50, linewidth=50)\nimport matplotlib as mpl\nmpl.rc('font', size=16)\n\nfrom IPython.display import display", "Uniform Sample\nA uniform sample is a sample drawn at random without replacements", "def sample(num_sample, top):\n \"\"\"\n Create a random sample from a table\n \n Attributes\n ---------\n num_sample: int\n top: dataframe\n \n Returns a random subset of table index\n \"\"\"\n df_index = []\n\n for i in np.arange(0, num_sample, 1):\n\n # pick randomly from the whole table\n sample_index = np.random.randint(0, len(top))\n\n # store index\n df_index.append(sample_index)\n \n return df_index\n\ndef sample_no_replacement(num_sample, top):\n \"\"\"\n Create a random sample from a table\n \n Attributes\n ---------\n num_sample: int\n top: dataframe\n \n Returns a random subset of table index\n \"\"\"\n df_index = []\n lst = np.arange(0, len(top), 1)\n\n for i in np.arange(0, num_sample, 1):\n\n # pick randomly from the whole table\n sample_index = np.random.choice(lst)\n\n lst = np.setdiff1d(lst,[sample_index])\n df_index.append(sample_index)\n \n return df_index\n\n", "Dice", "die = pd.DataFrame()\ndie[\"Face\"] = [1,2,3,4,5,6]\n\ndie", "Coin", "coin = pd.DataFrame()\ncoin[\"Face\"] = [1,2]\ncoin", "We can simulate the act of rolling dice by just pulling out rows", "index_ = sample(3, die)\ndf = die.ix[index_, :]\ndf\n\nindex_ = sample(1, coin)\ndf = coin.ix[index_, :]\ndf\n\ndef sum_draws( n, box ):\n \"\"\"\n Construct histogram for the sum of n draws from a box with replacement\n \n Attributes\n -----------\n n: int (number of draws)\n box: dataframe (the box model)\n \"\"\"\n data = numpy.zeros(shape=(n,1))\n if n > 0:\n for i in range(n):\n index_ = np.random.randint(0, len(box), n)\n df = box.ix[index_, :]\n data[i] = df.Content.sum()\n\n bins = np.arange(data.min()-0.5, data.max()+1, 1) \n pyplt.hist(data, bins=bins, normed=True)\n pyplt.ylabel('percent per unit')\n pyplt.xlabel('Number on ticket')\n pyplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.);\n else:\n raise ValueError('n has to be greater than 0')\n \n \n\n\nbox = pd.DataFrame()\nbox[\"Content\"] = [0,1,2,3,4]\n\npyplt.rcParams['figure.figsize'] = (4, 3)\n\nsum_draws(100, box)\n\npyplt.rcParams['figure.figsize'] = (4, 3)\n\nlow, high = box.Content.min() - 0.5, box.Content.max() + 1\nbins = np.arange(low, high, 1) \n\nbox.plot.hist(bins=bins, normed=True)\npyplt.ylabel('percent per unit')\npyplt.xlabel('Number on ticket')\npyplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.);\n\n\nsum_draws(1000, box)", "Modeling the Law of Averages\nThe law of averages states that as the number of draws increases, so too does the difference between the expected average versus the observed average. 
\n$$ Chance \\ Error = Observed - Expected $$\nIn the case of coin tosses, as the number of tosses goes up, so does the absolute chance error.", "def number_of_heads( n, box ):\n \"\"\"\n The number of heads in n tosses\n \n Attributes\n -----------\n n: int (number of draws)\n box: dataframe (the coin box model)\n \"\"\"\n data = numpy.zeros(shape=(n,1))\n if n > 0:\n value = np.random.randint(0, len(box), n)\n data = value\n else:\n raise ValueError('n has to be greater than 0')\n \n return data.sum()\n\n\nbox = pd.DataFrame()\nbox[\"Content\"] = [0,1]\n\nlow, high, step = 100, 10000, 2\nlength = len(range(low, high, step))\nnum_tosses = numpy.zeros(shape=(length,1))\nnum_heads = numpy.zeros(shape=(length,1))\nchance_error = numpy.zeros(shape=(length,1))\npercentage_difference = numpy.zeros(shape=(length,1))\ni= 0\n\nfor n in range(low, high, step):\n observed = number_of_heads(n, box)\n expected = n//2\n num_tosses[i] = n\n num_heads[i] = observed\n chance_error[i] = math.fabs(expected - observed)\n percentage_difference[i] = math.fabs(((num_heads[i] / num_tosses[i]) * 100) - 50)\n i += 1\n \n\navg_heads = pd.DataFrame(index= range(low, high, step) )\navg_heads['num_tosses'] = num_tosses\navg_heads['num_heads'] = num_heads\navg_heads['chance_error'] = chance_error\navg_heads['percentage_difference'] = percentage_difference\n\navg_heads.reset_index(inplace=True)\n\npyplt.rcParams['figure.figsize'] = (8, 3)\npyplt.plot(avg_heads.chance_error, 'ro', markersize=1)\npyplt.ylim(-50, 500)\npyplt.title('Modeling the Law of Averages')\npyplt.ylabel('Difference between \\nObserved versus Expected')\npyplt.xlabel('Number of Tosses');\n\npyplt.rcParams['figure.figsize'] = (8, 4)\nax = pyplt.plot(avg_heads.percentage_difference, 'bo', markersize=1)\npyplt.ylim(-5, 20)\npyplt.ylabel('The Percentage Difference\\n Between Observed and Expected')\npyplt.xlabel('Number of Tosses');\n\npyplt.rcParams['figure.figsize'] = (4, 3)", "Chance Processes\nTo figure out to what extent numbers are influenced by chance processes, it is good to make an analogy to a box model with its sum of draws." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
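The plots in the notebook above make the law-of-averages point graphically: the absolute chance error (observed minus expected count) tends to grow with the number of tosses, while the error expressed as a percentage of the number of tosses shrinks. The snippet below shows the same thing numerically; it is a stand-alone illustration using NumPy's default generator rather than the notebook's helper functions.

```python
# Numeric companion to the plots above: absolute chance error grows with n,
# percentage error shrinks. Independent of the notebook's box-model helpers.
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    heads = int(rng.integers(0, 2, size=n).sum())   # fair-coin box [0, 1]
    abs_error = abs(heads - n // 2)                  # observed minus expected count
    pct_error = 100.0 * abs_error / n
    print(f"n={n:>9,d}  heads={heads:>7,d}  abs error={abs_error:>5d}  % error={pct_error:.4f}")
```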
comp-journalism/Baseline_Problem_for_Algorithm_Audits
Statistics.ipynb
mit
[ "Statistical analysis", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\n%matplotlib inline\n\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (15, 3)\nplt.rcParams['font.family'] = 'sans-serif'\n\npd.set_option('display.width', 5000) \npd.set_option('display.max_columns', 60)\n\nHC_baseline = pd.read_csv('./BASELINE/HC_baseline_full_ratings.csv')\nDT_baseline = pd.read_csv('./BASELINE/DT_baseline_full_ratings.csv')\n\nHC_imagebox = pd.read_csv('./IMAGE_BOX/HC_imagebox_full_ratings.csv')\nDT_imagebox = pd.read_csv('./IMAGE_BOX/DT_imagebox_full_ratings.csv')", "Statistical analysis on Allsides bias rating:\nNo sources from the images boxes were rated in the Allsides bias rating dataset. Therefore comparisons between bias of baseline sources versus image box sources could not be performed.\nStatistical analysis on Facebook Study bias rating:\nHillary Clinton Image Box images versus Baseline images source bias according to Facebook bias ratings:", "print(\"Baseline skew: \", stats.skew(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))\nprint(\"Image Box skew: \", stats.skew(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))", "from the stats page \"For normally distributed data, the skewness should be about 0. A skewness value > 0 means that there is more weight in the left tail of the distribution. The function skewtest can be used to determine if the skewness value is close enough to 0, statistically speaking.\"", "print(\"Baseline skew: \", stats.skewtest(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))\nprint(\"Image Box skew: \", stats.skewtest(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))\n\nstats.ks_2samp(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3], \n HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3])\n\nHC_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='blue')\n\nHC_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')", "Donald Trump Image Box images versus Baseline images source bias according to Facebook bias ratings:", "print(\"Baseline skew: \", stats.skew(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3]))\nprint(\"Image Box skew: \", stats.skew(DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3]))\n\nstats.ks_2samp(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3], \n DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3])\n\nDT_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='red')\n\nDT_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')\n\nprint(\"Number of missing ratings for Hillary Clinton Baseline data: \", len(HC_baseline[HC_baseline.facebookbias_rating == 999]))\nprint(\"Number of missing ratings for Hillary Clinton Image Box data: \", len(HC_imagebox[HC_imagebox.facebookbias_rating == 999]))\nprint(\"Number of missing ratings for Donald Trump Baseline data: \", len(DT_baseline[DT_baseline.facebookbias_rating == 999]))\nprint(\"Number of missing ratings for Donald Trump Image Box data: \", len(DT_baseline[DT_imagebox.facebookbias_rating == 999]))", "The Kolmogorov-Smirnov analyis shows that the distribution of political representation across image sources is different between the baseline images and those found in the image box.\n\nStatistical analysis on Allsides + Facebook + MondoTimes + my 
bias ratings:\nConvert strings to integers:", "def convert_to_ints(col):\n if col == 'Left':\n return -1\n elif col == 'Center':\n return 0\n elif col == 'Right':\n return 1\n else:\n return np.nan\n\nHC_imagebox['final_rating_ints'] = HC_imagebox.final_rating.apply(convert_to_ints)\nDT_imagebox['final_rating_ints'] = DT_imagebox.final_rating.apply(convert_to_ints)\nHC_baseline['final_rating_ints'] = HC_baseline.final_rating.apply(convert_to_ints)\nDT_baseline['final_rating_ints'] = DT_baseline.final_rating.apply(convert_to_ints)\n\nHC_imagebox.final_rating_ints.value_counts()\n\nDT_imagebox.final_rating_ints.value_counts()", "Prepare data for chi squared test", "HC_baseline_counts = HC_baseline.final_rating.value_counts()\nHC_imagebox_counts = HC_imagebox.final_rating.value_counts()\nDT_baseline_counts = DT_baseline.final_rating.value_counts()\nDT_imagebox_counts = DT_imagebox.final_rating.value_counts()\n\nHC_baseline_counts.head()\n\nnormalised_bias_ratings = pd.DataFrame({'HC_ImageBox':HC_imagebox_counts,\n 'HC_Baseline' : HC_baseline_counts,\n 'DT_ImageBox': DT_imagebox_counts,\n 'DT_Baseline': DT_baseline_counts} )\n\nnormalised_bias_ratings", "Remove Unknown / unreliable row", "normalised_bias_ratings = normalised_bias_ratings[:3]", "Calculate percentages for plotting purposes", "normalised_bias_ratings.loc[:,'HC_Baseline_pcnt'] = normalised_bias_ratings.HC_Baseline/normalised_bias_ratings.HC_Baseline.sum()*100\nnormalised_bias_ratings.loc[:,'HC_ImageBox_pcnt'] = normalised_bias_ratings.HC_ImageBox/normalised_bias_ratings.HC_ImageBox.sum()*100\nnormalised_bias_ratings.loc[:,'DT_Baseline_pcnt'] = normalised_bias_ratings.DT_Baseline/normalised_bias_ratings.DT_Baseline.sum()*100\nnormalised_bias_ratings.loc[:,'DT_ImageBox_pcnt'] = normalised_bias_ratings.DT_ImageBox/normalised_bias_ratings.DT_ImageBox.sum()*100\n\nnormalised_bias_ratings\n\nnormalised_bias_ratings.columns\n\nHC_percentages = normalised_bias_ratings[['HC_Baseline_pcnt', 'HC_ImageBox_pcnt']]\nDT_percentages = normalised_bias_ratings[['DT_Baseline_pcnt', 'DT_ImageBox_pcnt']]", "Test Hillary Clinton Image Box images against Baseline images:", "stats.chisquare(f_exp=normalised_bias_ratings.HC_Baseline, \n f_obs=normalised_bias_ratings.HC_ImageBox)\n\nHC_percentages.plot.bar()", "Test Donald Trump Image Box images against Basline images:", "stats.chisquare(f_exp=normalised_bias_ratings.DT_Baseline, \n f_obs=normalised_bias_ratings.DT_ImageBox)\n\nDT_percentages.plot.bar()", "Chi square test shows that the distribution of political representation across image sources is different between the baseline images and those found in the image box both candidates.\nHillary Clinton image box images increased left-leaning and decreased centrist source representation compared with baseline. \nDonald Trump image box images increased right-leaning, and decreased centrist source representation compared with baseline.\n\nConclusion:\nUsing Google as its own baseline was sufficient to conclude that representation on the main results page is different from that in the baseline, indicating that bias is introduced in the curation of images for the image box." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
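The notebook above compares Left/Center/Right count profiles with scipy.stats.chisquare and ks_2samp. An equivalent way to frame the same comparison is as a two-row contingency table; the sketch below uses made-up placeholder counts, not the study's data.

```python
# Same kind of Left/Center/Right comparison, framed as a 2x3 contingency table.
# Counts are placeholders for illustration only.
import numpy as np
from scipy import stats

counts = np.array([[120, 300, 80],    # baseline:  Left, Center, Right
                   [180, 220, 100]])  # image box: Left, Center, Right
chi2, p, dof, expected = stats.chi2_contingency(counts)
print("chi2 = {:.2f}, p = {:.4f}, dof = {}".format(chi2, p, dof))
```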
googledatalab/notebooks
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
apache-2.0
[ "Data Preparation and Preprocessing with BigQuery\nThis notebook is the first of a set of steps to run machine learning on the cloud. This step is about data preparation and preprocessing, and will mirror the equivalent portions of the local notebook.\nWorkspace Setup\nThe first step is to setup the workspace that we will use within this notebook - the python libraries, and the Google Cloud Storage bucket that will be used to contain the inputs and outputs produced over the course of the steps.", "import google.datalab as datalab\nimport google.datalab.ml as ml\nimport mltoolbox.regression.dnn as regression\nimport os", "The storage bucket we create will be created by default using the project id.", "storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/'\nstorage_region = 'us-central1'\n\nworkspace_path = os.path.join(storage_bucket, 'census')\n\n# We will rely on outputs from data preparation steps in the previous notebook.\nlocal_workspace_path = '/content/datalab/workspace/census'\n\n!gsutil mb -c regional -l {storage_region} {storage_bucket}", "NOTE: If you have previously run this notebook, and want to start from scratch, then run the next cell to delete previous outputs.", "!gsutil -m rm -rf {workspace_path}", "Data\nTo get started, we will copy the data into this workspace from the local workspace created in the previous notebook.\nGenerally, in your own work, you will have existing data to work with, that you may or may not need to copy around, depending on its current location.", "!gsutil -q cp {local_workspace_path}/data/train.csv {workspace_path}/data/train.csv\n!gsutil -q cp {local_workspace_path}/data/eval.csv {workspace_path}/data/eval.csv\n!gsutil -q cp {local_workspace_path}/data/schema.json {workspace_path}/data/schema.json\n!gsutil ls -r {workspace_path}", "DataSets", "train_data_path = os.path.join(workspace_path, 'data/train.csv')\neval_data_path = os.path.join(workspace_path, 'data/eval.csv')\nschema_path = os.path.join(workspace_path, 'data/schema.json')\n\ntrain_data = ml.CsvDataSet(file_pattern=train_data_path, schema_file=schema_path)\neval_data = ml.CsvDataSet(file_pattern=eval_data_path, schema_file=schema_path)", "Data Analysis\nWhen building a model, a number of pieces of information about the training data are required - for example, the list of entries or vocabulary of a categorical/discrete column, or aggregate statistics like min and max for numerical columns. These require a full pass over the training data, and is usually done once, and needs to be repeated once if you change the schema in a future iteration.\nOn the Cloud, this analysis is done with BigQuery, by referencing the csv data in storage as external data sources. 
The output of this analysis will be stored into storage.\nIn the analyze() call below, notice the use of cloud=True to move data analysis from happening locally to happening in the cloud.", "analysis_path = os.path.join(workspace_path, 'analysis')\n\nregression.analyze(dataset=train_data, output_dir=analysis_path, cloud=True)", "Like in the local notebook, the output of analysis is a stats file that contains analysis from the numerical columns, and a vocab file from each categorical column.", "!gsutil ls {analysis_path}", "Let's inspect one of the files; in particular the numerical analysis, since it will also tell us some interesting statistics about the income column, the value we want to predict.", "!gsutil cat {analysis_path}/stats.json", "Next Steps\nThis notebook completed the first steps of our machine learning workflow - data preparation and analysis. This data and the analysis outputs will be used to train a model, which is covered in the next notebook." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
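The analyze step in the notebook above produces, per column, either a vocabulary (for categorical fields) or aggregate statistics such as min and max (for numeric fields). The toy snippet below shows the idea locally with pandas; the column names are invented placeholders, not the census schema, which comes from schema.json.

```python
# What the analysis step produces, in miniature: a vocabulary per categorical
# column and aggregate stats per numeric column. Toy columns only.
import pandas as pd

df = pd.DataFrame({'category_col': ['a', 'b', 'a', 'c'],
                   'numeric_col': [24, 31, 28, 35]})
vocab = sorted(df['category_col'].unique())
numeric_stats = {'min': int(df['numeric_col'].min()),
                 'max': int(df['numeric_col'].max()),
                 'mean': float(df['numeric_col'].mean())}
print(vocab, numeric_stats)
```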
GoogleCloudPlatform/training-data-analyst
blogs/sklearn/babyweight_skl.ipynb
apache-2.0
[ "<h1> Structured data prediction using Cloud ML Engine with scikit-learn </h1>\n\nThis notebook illustrates:\n<ol>\n<li> Creating datasets for Machine Learning using BigQuery\n<li> Creating a model using scitkit learn \n<li> Training on Cloud ML Engine\n<li> Deploying model\n<li> Predicting with model\n<li> Hyperparameter tuning of scikit-learn models\n</ol>\n\nPlease see this notebook for more context on this problem and how the features were chosen.", "# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nPROJECTNUMBER = '663413318684'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['PROJECTNUMBER'] = PROJECTNUMBER\nos.environ['REGION'] = REGION\n\n%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION\n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/; then\n gsutil mb -l ${REGION} gs://${BUCKET}\nfi\n\n%bash\n# Pandas will use this privatekey to access BigQuery on our behalf.\n# Do NOT check in the private key into git!!!\n# if you get a JWT grant error when using this key, create the key via gcp web console in IAM > Service Accounts section\nKEYFILE=babyweight/trainer/privatekey.json\nif [ ! -f $KEYFILE ]; then\n gcloud iam service-accounts keys create \\\n --iam-account ${PROJECTNUMBER}-compute@developer.gserviceaccount.com \\\n $KEYFILE\nfi\n\nKEYDIR='babyweight/trainer'", "Exploring dataset\nPlease see this notebook for more context on this problem and how the features were chosen.", "#%writefile babyweight/trainer/model.py\n\n# Copyright 2018 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<h2> Creating a ML dataset using BigQuery </h2>\n\nWe can use BigQuery to create the training and evaluation datasets. Because of the masking (ultrasound vs. 
no ultrasound), the query itself is a little complex.", "#%writefile -a babyweight/trainer/model.py\ndef create_queries():\n query_all = \"\"\"\n WITH with_ultrasound AS (\n SELECT\n weight_pounds AS label,\n CAST(is_male AS STRING) AS is_male,\n mother_age,\n CAST(plurality AS STRING) AS plurality,\n gestation_weeks,\n FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\n FROM\n publicdata.samples.natality\n WHERE\n year > 2000\n AND gestation_weeks > 0\n AND mother_age > 0\n AND plurality > 0\n AND weight_pounds > 0\n ),\n\n without_ultrasound AS (\n SELECT\n weight_pounds AS label,\n 'Unknown' AS is_male,\n mother_age,\n IF(plurality > 1, 'Multiple', 'Single') AS plurality,\n gestation_weeks,\n FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\n FROM\n publicdata.samples.natality\n WHERE\n year > 2000\n AND gestation_weeks > 0\n AND mother_age > 0\n AND plurality > 0\n AND weight_pounds > 0\n ),\n\n preprocessed AS (\n SELECT * from with_ultrasound\n UNION ALL\n SELECT * from without_ultrasound\n )\n\n SELECT\n label,\n is_male,\n mother_age,\n plurality,\n gestation_weeks\n FROM\n preprocessed\n \"\"\"\n\n train_query = \"{} WHERE ABS(MOD(hashmonth, 4)) < 3\".format(query_all)\n eval_query = \"{} WHERE ABS(MOD(hashmonth, 4)) = 3\".format(query_all)\n return train_query, eval_query\n\nprint create_queries()[0]\n\n#%writefile -a babyweight/trainer/model.py\ndef query_to_dataframe(query):\n import pandas as pd\n import pkgutil\n privatekey = pkgutil.get_data(KEYDIR, 'privatekey.json')\n print(privatekey[:200])\n return pd.read_gbq(query,\n project_id=PROJECT,\n dialect='standard',\n private_key=privatekey)\n\ndef create_dataframes(frac): \n # small dataset for testing\n if frac > 0 and frac < 1:\n sample = \" AND RAND() < {}\".format(frac)\n else:\n sample = \"\"\n\n train_query, eval_query = create_queries()\n train_query = \"{} {}\".format(train_query, sample)\n eval_query = \"{} {}\".format(eval_query, sample)\n\n train_df = query_to_dataframe(train_query)\n eval_df = query_to_dataframe(eval_query)\n return train_df, eval_df\n\ntrain_df, eval_df = create_dataframes(0.001)\ntrain_df.describe()\n\neval_df.head()", "<h2> Creating a scikit-learn model using random forests </h2>\n\nLet's train the model locally", "#%writefile -a babyweight/trainer/model.py\ndef input_fn(indf):\n import copy\n import pandas as pd\n df = copy.deepcopy(indf)\n\n # one-hot encode the categorical columns\n df[\"plurality\"] = df[\"plurality\"].astype(pd.api.types.CategoricalDtype(\n categories=[\"Single\",\"Multiple\",\"1\",\"2\",\"3\",\"4\",\"5\"]))\n df[\"is_male\"] = df[\"is_male\"].astype(pd.api.types.CategoricalDtype(\n categories=[\"Unknown\",\"false\",\"true\"]))\n # features, label\n label = df['label']\n del df['label']\n features = pd.get_dummies(df)\n return features, label\n\ntrain_x, train_y = input_fn(train_df)\nprint(train_x[:5])\nprint(train_y[:5])\n\nfrom sklearn.ensemble import RandomForestRegressor\nestimator = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)\nestimator.fit(train_x, train_y)\n\nimport numpy as np\neval_x, eval_y = input_fn(eval_df)\neval_pred = estimator.predict(eval_x)\nprint(eval_pred[1000:1005])\nprint(eval_y[1000:1005])\nprint(np.sqrt(np.mean((eval_pred-eval_y)*(eval_pred-eval_y))))\n\n#%writefile -a babyweight/trainer/model.py\ndef train_and_evaluate(frac, max_depth=5, n_estimators=100):\n import numpy as np\n\n # get data\n train_df, eval_df = create_dataframes(frac)\n train_x, train_y = 
input_fn(train_df)\n # train\n from sklearn.ensemble import RandomForestRegressor\n estimator = RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators, random_state=0)\n estimator.fit(train_x, train_y)\n # evaluate\n eval_x, eval_y = input_fn(eval_df)\n eval_pred = estimator.predict(eval_x)\n rmse = np.sqrt(np.mean((eval_pred-eval_y)*(eval_pred-eval_y)))\n print(\"Eval rmse={}\".format(rmse))\n return estimator, rmse\n\n#%writefile -a babyweight/trainer/model.py\ndef save_model(estimator, gcspath, name):\n from sklearn.externals import joblib\n import os, subprocess, datetime\n model = 'model.joblib'\n joblib.dump(estimator, model)\n model_path = os.path.join(gcspath, datetime.datetime.now().strftime(\n 'export_%Y%m%d_%H%M%S'), model)\n subprocess.check_call(['gsutil', 'cp', model, model_path])\n return model_path\n\nsaved = save_model(estimator, 'gs://{}/babyweight/sklearn'.format(BUCKET), 'babyweight')\n\nprint saved", "Packaging up as a Python package\nNote the %writefile in the cells above. I uncommented those and ran the cells to write out a model.py\nThe following cell writes out a task.py", "%writefile babyweight/trainer/task.py\n# Copyright 2018 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport argparse\nimport os\n\nimport hypertune\nimport model\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '--bucket',\n help = 'GCS path to output.',\n required = True\n )\n parser.add_argument(\n '--frac',\n help = 'Fraction of input to process',\n type = float,\n required = True\n )\n parser.add_argument(\n '--maxDepth',\n help = 'Depth of trees',\n type = int,\n default = 5\n )\n parser.add_argument(\n '--numTrees',\n help = 'Number of trees',\n type = int,\n default = 100\n )\n parser.add_argument(\n '--projectId',\n help = 'ID (not name) of your project',\n required = True\n )\n parser.add_argument(\n '--job-dir',\n help = 'output directory for model, automatically provided by gcloud',\n required = True\n )\n \n args = parser.parse_args()\n arguments = args.__dict__\n \n model.PROJECT = arguments['projectId']\n model.KEYDIR = 'trainer'\n \n estimator, rmse = model.train_and_evaluate(arguments['frac'],\n arguments['maxDepth'],\n arguments['numTrees']\n )\n loc = model.save_model(estimator, \n arguments['job_dir'], 'babyweight')\n print(\"Saved model to {}\".format(loc))\n \n # this is for hyperparameter tuning\n hpt = hypertune.HyperTune()\n hpt.report_hyperparameter_tuning_metric(\n hyperparameter_metric_tag='rmse',\n metric_value=rmse,\n global_step=0)\n\n# done\n\n!pip freeze | grep pandas\n\n%writefile babyweight/setup.py\n# Copyright 2018 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom setuptools import setup\n\nsetup(name='trainer',\n version='1.0',\n description='Natality, with sklearn',\n url='http://github.com/GoogleCloudPlatform/training-data-analyst',\n author='Google',\n author_email='nobody@google.com',\n license='Apache2',\n packages=['trainer'],\n ## WARNING! Do not upload this package to PyPI\n ## BECAUSE it contains a private key\n package_data={'': ['privatekey.json']},\n install_requires=[\n 'pandas-gbq==0.3.0',\n 'urllib3',\n 'google-cloud-bigquery==0.29.0',\n 'cloudml-hypertune'\n ],\n zip_safe=False)", "Try out the package on a subset of the data.", "%bash\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight\npython -m trainer.task \\\n --bucket=${BUCKET} --frac=0.001 --job-dir=gs://${BUCKET}/babyweight/sklearn --projectId $PROJECT", "<h2> Training on Cloud ML Engine </h2>\n\nSubmit the code to the ML Engine service", "%bash\n\nRUNTIME_VERSION=\"1.8\"\nPYTHON_VERSION=\"2.7\"\nJOB_NAME=babyweight_skl_$(date +\"%Y%m%d_%H%M%S\")\nJOB_DIR=\"gs://$BUCKET/babyweight/sklearn/${JOBNAME}\"\n\ngcloud ml-engine jobs submit training $JOB_NAME \\\n --job-dir $JOB_DIR \\\n --package-path $(pwd)/babyweight/trainer \\\n --module-name trainer.task \\\n --region us-central1 \\\n --runtime-version=$RUNTIME_VERSION \\\n --python-version=$PYTHON_VERSION \\\n -- \\\n --bucket=${BUCKET} --frac=0.1 --projectId $PROJECT", "The training finished in 20 minutes with a RMSE of 1.05 lbs.\n<h2> Deploying the trained model </h2>\n<p>\nDeploying the trained model to act as a REST web service is a simple gcloud call.", "%bash\ngsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1\n\n%bash\nMODEL_NAME=\"babyweight\"\nMODEL_VERSION=\"skl\"\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1)\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ml-engine models delete ${MODEL_NAME}\n#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION\ngcloud alpha ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} \\\n --framework SCIKIT_LEARN --runtime-version 1.8 --python-version=2.7", "<h2> Using the model to predict </h2>\n<p>\nSend a JSON request to the endpoint of the service to make it predict a baby's weight ... Note that we need to send in an array of numbers in the same order as when we trained the model. 
You can sort of save some preprocessing by using sklearn's Pipeline, but we did our preprocessing with Pandas, so that is not an option.\n<p>\nSo, let's find the order of columns:", "data = []\nfor i in range(2):\n data.append([])\n for col in eval_x:\n # convert from numpy integers to standard integers\n data[i].append(int(np.uint64(eval_x[col][i]).item()))\n\nprint(eval_x.columns)\nprint(json.dumps(data))", "As long as you send in the data in that order, it will work:", "from googleapiclient import discovery\nfrom oauth2client.client import GoogleCredentials\nimport json\n\ncredentials = GoogleCredentials.get_application_default()\napi = discovery.build('ml', 'v1', credentials=credentials)\n\nrequest_data = {'instances':\n # [u'mother_age', u'gestation_weeks', u'is_male_Unknown', u'is_male_0',\n # u'is_male_1', u'plurality_Single', u'plurality_Multiple',\n # u'plurality_1', u'plurality_2', u'plurality_3', u'plurality_4',\n # u'plurality_5']\n [[24, 38, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0], \n [34, 39, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]]\n}\n\nparent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'babyweight', 'skl')\nresponse = api.projects().predict(body=request_data, name=parent).execute()\nprint \"response={0}\".format(response)", "Hyperparameter tuning\nLet's do a bunch of parallel trials to find good maxDepth and numTrees", "%writefile hyperparam.yaml\ntrainingInput:\n hyperparameters:\n goal: MINIMIZE\n maxTrials: 100\n maxParallelTrials: 5\n hyperparameterMetricTag: rmse\n params:\n - parameterName: maxDepth\n type: INTEGER\n minValue: 2\n maxValue: 8\n scaleType: UNIT_LINEAR_SCALE\n - parameterName: numTrees\n type: INTEGER\n minValue: 50\n maxValue: 150\n scaleType: UNIT_LINEAR_SCALE\n\n%bash\nRUNTIME_VERSION=\"1.8\"\nPYTHON_VERSION=\"2.7\"\nJOB_NAME=babyweight_skl_$(date +\"%Y%m%d_%H%M%S\")\nJOB_DIR=\"gs://$BUCKET/babyweight/sklearn/${JOBNAME}\"\n\ngcloud ml-engine jobs submit training $JOB_NAME \\\n --job-dir $JOB_DIR \\\n --package-path $(pwd)/babyweight/trainer \\\n --module-name trainer.task \\\n --region us-central1 \\\n --runtime-version=$RUNTIME_VERSION \\\n --python-version=$PYTHON_VERSION \\\n --config=hyperparam.yaml \\\n -- \\\n --bucket=${BUCKET} --frac=0.01 --projectId $PROJECT", "If you go to the GCP console and click on the job, you will see the trial information start to populating, with the lowest rmse trial listed first. I got the best performance with these settings:\n<pre>\n \"hyperparameters\": {\n \"maxDepth\": \"8\",\n \"numTrees\": \"90\"\n },\n \"finalMetric\": {\n \"trainingStep\": \"1\",\n \"objectiveValue\": 1.03123724461\n }\n</pre>\n\nTrain on full dataset\nLet's train on the full dataset with these hyperparameters. I am using a larger machine (8 CPUS, 52 GB of memory).", "%writefile largemachine.yaml\ntrainingInput:\n scaleTier: CUSTOM\n masterType: large_model\n\n%bash\n\nRUNTIME_VERSION=\"1.8\"\nPYTHON_VERSION=\"2.7\"\nJOB_NAME=babyweight_skl_$(date +\"%Y%m%d_%H%M%S\")\nJOB_DIR=\"gs://$BUCKET/babyweight/sklearn/${JOBNAME}\"\n\ngcloud ml-engine jobs submit training $JOB_NAME \\\n --job-dir $JOB_DIR \\\n --package-path $(pwd)/babyweight/trainer \\\n --module-name trainer.task \\\n --region us-central1 \\\n --runtime-version=$RUNTIME_VERSION \\\n --python-version=$PYTHON_VERSION \\\n --scale-tier=CUSTOM \\\n --config=largemachine.yaml \\\n -- \\\n --bucket=${BUCKET} --frac=1 --projectId $PROJECT --maxDepth 8 --numTrees 90", "Copyright 2018 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/ncar/cmip6/models/sandbox-3/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: NCAR\nSource ID: SANDBOX-3\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:22\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncar', 'sandbox-3', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adative grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
doc/notebooks/automaton.push_weights.ipynb
gpl-3.0
[ "automaton.push_weights\nPush the weights towards in the initial states.\nPreconditions:\n- None\nPostconditions:\n- The Result is equivalent to the input automaton.\nExamples", "import vcsn", "In a Tropical Semiring\nThe following example is taken from mohri.2009.hwa, Figure 12.", "%%automaton --strip a\ncontext = \"lal_char, zmin\"\n$ -> 0\n0 -> 1 <0>a, <1>b, <5>c\n0 -> 2 <0>d, <1>e\n1 -> 3 <0>e, <1>f\n2 -> 3 <4>e, <5>f\n3 -> $\n\na.push_weights()", "Note that weight pushing improves the \"minimizability\" of weighted automata:", "a.minimize()\n\na.push_weights().minimize()", "In $\\mathbb{Q}$\nAgain, the following example is taken from mohri.2009.hwa, Figure 12 (subfigure 12.d lacks two transitions), but computed in $\\mathbb{Q}$ rather than $\\mathbb{R}$ to render more readable results.", "%%automaton --strip a\ncontext = \"lal_char, q\"\n$ -> 0\n0 -> 1 <0>a, <1>b, <5>c\n0 -> 2 <0>d, <1>e\n1 -> 3 <0>e, <1>f\n2 -> 3 <4>e, <5>f\n3 -> $\n\na.push_weights()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
maubarsom/ORFan-proteins
phage_assembly/5_annotation/asm_v1.2/orf_160621/3b_select_reliable_orfs.ipynb
mit
[ "import pandas as pd\nimport re\nfrom glob import glob", "1. Load blast hits", "#Load blast hits\nblastp_hits = pd.read_csv(\"2_blastp_hits.tsv\",sep=\"\\t\",quotechar='\"')\nblastp_hits.head()\n#Filter out Metahit 2010 hits, keep only Metahit 2014\nblastp_hits = blastp_hits[blastp_hits.db != \"metahit_pep\"]", "2. Process blastp results\n2.1 Extract ORF stats from fasta file", "#Assumes the Fasta file comes with the header format of EMBOSS getorf\nfh = open(\"1_orf/d9539_asm_v1.2_orf.fa\")\nheader_regex = re.compile(r\">([^ ]+?) \\[([0-9]+) - ([0-9]+)\\]\")\norf_stats = []\nfor line in fh:\n header_match = header_regex.match(line)\n if header_match:\n is_reverse = line.rstrip(\" \\n\").endswith(\"(REVERSE SENSE)\")\n q_id = header_match.group(1)\n #Position in contig\n q_cds_start = int(header_match.group(2) if not is_reverse else header_match.group(3))\n q_cds_end = int(header_match.group(3) if not is_reverse else header_match.group(2))\n #Length of orf in aminoacids\n q_len = (q_cds_end - q_cds_start + 1) / 3\n orf_stats.append( pd.Series(data=[q_id,q_len,q_cds_start,q_cds_end,(\"-\" if is_reverse else \"+\")],\n index=[\"q_id\",\"orf_len\",\"q_cds_start\",\"q_cds_end\",\"strand\"]))\n \norf_stats_df = pd.DataFrame(orf_stats)\nprint(orf_stats_df.shape)\norf_stats_df.head()\n\n#Write orf stats to fasta\norf_stats_df.to_csv(\"1_orf/orf_stats.csv\",index=False)", "2.2 Annotate blast hits with orf stats", "blastp_hits_annot = blastp_hits.merge(orf_stats_df,left_on=\"query_id\",right_on=\"q_id\")\n#Add query coverage calculation\nblastp_hits_annot[\"q_cov_calc\"] = (blastp_hits_annot[\"q_end\"] - blastp_hits_annot[\"q_start\"] + 1 ) * 100 / blastp_hits_annot[\"q_len\"]\nblastp_hits_annot.sort_values(by=\"bitscore\",ascending=False).head()\n\nassert blastp_hits_annot.shape[0] == blastp_hits.shape[0]", "2.3 Extract best hit for each ORF ( q_cov > 0.8 and pct_id > 40% and e-value < 1)\nDefine these resulting 7 ORFs as the core ORFs for the d9539 assembly. \nThe homology between the Metahit gene catalogue is very good, and considering the catalogue was curated \non a big set of gut metagenomes, it is reasonable to assume that these putative proteins would come \nfrom our detected circular putative virus/phage genome\nTwo extra notes:\n * Additionally, considering only these 7 ORFs , almost the entire genomic region is covered, with very few non-coding regions, still consistent with the hypothesis of a small viral genome which should be mainly coding\n\nAlso, even though the naive ORF finder detected putative ORFs in both positive and negative strands, the supported ORFs only occur in the positive strand. This could be an indication of a ssDNA or ssRNA virus.", "! 
mkdir -p 4_msa_prots\n\n#Get best hit (highest bitscore) for each ORF\ngb = blastp_hits_annot[ (blastp_hits_annot.q_cov > 80) & (blastp_hits_annot.pct_id > 40) & (blastp_hits_annot.e_value < 1) ].groupby(\"query_id\")\nreliable_orfs = pd.DataFrame( hits.ix[hits.bitscore.idxmax()] for q_id,hits in gb )[[\"query_id\",\"db\",\"subject_id\",\"pct_id\",\"q_cov\",\"q_len\",\n \"bitscore\",\"e_value\",\"strand\",\"q_cds_start\",\"q_cds_end\"]]\nreliable_orfs = reliable_orfs.sort_values(by=\"q_cds_start\",ascending=True)\nreliable_orfs", "2.4 Extract selected orfs for further analysis", "reliable_orfs[\"orf_id\"] = [\"orf{}\".format(x) for x in range(1,reliable_orfs.shape[0]+1) ]\nreliable_orfs[\"cds_len\"] = reliable_orfs[\"q_cds_end\"] - reliable_orfs[\"q_cds_start\"] +1\nreliable_orfs.sort_values(by=\"q_cds_start\",ascending=True).to_csv(\"3_filtered_orfs/filt_orf_stats.csv\",index=False,header=True)\nreliable_orfs.sort_values(by=\"q_cds_start\",ascending=True).to_csv(\"3_filtered_orfs/filt_orf_list.txt\",index=False,header=False,columns=[\"query_id\"])", "2.4.2 Extract fasta", "! ~/utils/bin/seqtk subseq 1_orf/d9539_asm_v1.2_orf.fa 3_filtered_orfs/filt_orf_list.txt > 3_filtered_orfs/d9539_asm_v1.2_orf_filt.fa", "2.4.3 Write out filtered blast hits", "filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())]\nfilt_blastp_hits.to_csv(\"3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.tsv\",sep=\"\\t\",quotechar='\"')\nfilt_blastp_hits.head()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pligor/predicting-future-product-prices
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
agpl-3.0
[ "# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2\n\nfrom __future__ import division\nimport numpy as np\nimport pandas as pd\nimport sys\nimport math\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nimport re\nimport os\nimport csv\nfrom helpers.outliers import MyOutliers\nfrom skroutz_mobile import SkroutzMobile\nfrom sklearn.ensemble import IsolationForest\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score, confusion_matrix, r2_score\nfrom skroutz_mobile import SkroutzMobile\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom helpers.my_train_test_split import MySplitTrainTest\nfrom sklearn.preprocessing import StandardScaler\nfrom preprocess_price_history import PreprocessPriceHistory\nfrom price_history import PriceHistory\nfrom dfa import dfa\nimport scipy.signal as ss\nfrom scipy.spatial.distance import euclidean\nfrom fastdtw import fastdtw\nfrom sklearn.cluster import KMeans\nfrom sklearn.preprocessing import MinMaxScaler\nimport random\nfrom sklearn.metrics import silhouette_score\nfrom os.path import isfile, isdir\nfrom preprocess_price_history import PreprocessPriceHistory\nfrom os.path import isfile\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom mobattrs_price_history_merger import MobAttrsPriceHistoryMerger\n#from george import kernels\n#import george\nfrom sklearn.manifold import TSNE\nimport matplotlib as mpl\nimport pickle\nimport dill\nfrom gaussian_process_price_prediction.gaussian_process_cluster_predictor import \\\n GaussianProcessPricePredictorForCluster\nfrom gp_opt.gaussian_process_regressor_11_gp_opt import GaussianProcessPricePredictorGpOpt\nfrom gp_opt.plot_res_gp import plot_res_gp\n\nrandom_state = np.random.RandomState(seed=16011984)\n%matplotlib inline\n\nmpl.rc('figure', figsize=(17,7)) #setting the default value of figsize for our plots\n#https://matplotlib.org/users/customizing.html\n\nkmeans_group = 10\n\ninput_min_len = 60\ntarget_len = 30\nseq_min_len = input_min_len + target_len\nseq_min_len\n\n#reading from\ndata_path = '../../../../Dropbox/data'\nmobattrs_ph_path = data_path + '/mobattrs_price_history'\nmobattrs_ph_norm_path = mobattrs_ph_path + '/mobattrs_ph_norm.npy'\nsku_ids_groups_path = data_path + '/sku_ids_groups'\nnpz_sku_ids_group_kmeans = sku_ids_groups_path + '/sku_ids_kmeans_{:02d}.npz'.format(kmeans_group)\n\nprice_history_csv = \"../price_history_03_seq_start_suddens_trimmed.csv\"\n\nmobiles_path = data_path + '/mobiles'\nmobs_norm_path = mobiles_path + '/mobiles_norm.csv'", "train test split", "sku_id_groups = np.load(npz_sku_ids_group_kmeans)\nfor key, val in sku_id_groups.iteritems():\n print key, \",\", val.shape\n\n# gp_predictor = GaussianProcessPricePredictorForCluster(npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n# mobs_norm_path=mobs_norm_path,\n# price_history_csv=price_history_csv,\n# input_min_len=input_min_len,\n# target_len=target_len)\n\n%%time\n#gp_predictor.prepare(chosen_cluster=9)\n\n%%time\n#dtw_mean = gp_predictor.train_validate()\n\n#dtw_mean\n\n# Do not run this again unless you have enough space in the disk and lots of memory\n# with open('cur_gp.pickle', 'w') as fp: # Python 3: open(..., 'wb')\n# 
pickle.dump(gp, fp)", "Cross Validation", "#writing to\nbayes_opt_dir = data_path + '/gp_regressor'\nassert isdir(bayes_opt_dir)\npairs_ts_npy_filename = 'pairs_ts'\ncv_score_dict_npy_filename = 'dtw_scores'\npairs_ts_npy_filename = 'pairs_ts'\nres_gp_filename = 'res_gp_opt'", "Cluster: 6\nBest Length Scale: 1.2593471510883105\nn restart optimizer: 5\nCluster: 4\nBest Length Scale: 2.5249662383238189\nn restarts optimizer: 4\nCluster: 0\nBest Length Scale: 4.2180911518619402\nn restarts optimizer: 3 \nCluster: 1\nBest Length Scale: 0.90557520548216341\nn restarts optimizer: 2 \nCluster: 7\nBest Length Scale: 0.86338778478034262\nn restarts optimizer: 2\nCluster: 5\nBest Length Scale: 0.65798759657324202\nn restarts optimizer: 2\nCluster: 3\nBest Length scale: 0.92860995029528248\nn restarts optimizer: 1\nCluster: 2\nBest length scale: 1.0580280512277951\nn restarts optimizer: 10\nCluster: 9\nbest length scale: ???\n//...\nCluster 9", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=9,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=1)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=5, n_calls=10)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 2", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=2,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=10)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=15, n_calls=30)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 3", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=3,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=1)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=5, n_calls=10)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 5", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=5,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=2)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=9, n_calls=20)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 7", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=7,\n bayes_opt_dir=bayes_opt_dir,\n 
cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=2)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=6, n_calls=13)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 1", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=1,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=2)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=7, n_calls=15)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 6", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=6,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = False,\n n_restarts_optimizer=5)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=3, n_calls=10)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 4", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=4,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=4)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=5, n_calls=20)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params", "Cluster 0", "%%time\ncur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=0,\n bayes_opt_dir=bayes_opt_dir,\n cv_score_dict_npy_filename=cv_score_dict_npy_filename,\n pairs_ts_npy_filename=pairs_ts_npy_filename,\n res_gp_filename=res_gp_filename,\n npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans,\n mobs_norm_path=mobs_norm_path,\n price_history_csv=price_history_csv,\n input_min_len=input_min_len,\n target_len=target_len,\n random_state=random_state,\n verbose = True,\n n_restarts_optimizer=3)\n\nopt_res = cur_gp_opt.run_opt(n_random_starts=10, n_calls=20)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dream-olfaction/olfaction-prediction
opc_python/hulab/collaboration/target_data_preparation.ipynb
mit
[ "prepares the target matrix with average values\nseparate target file for the selection and for the training\nfor feature selection we take the averages\nfor training we select the right values (1/1000 dilution or 'high')", "import pandas as pd\nimport numpy as np\nimport os", "target data for feature selection\naverage all data for each compound", "# load the training data \ndata = pd.read_csv(os.path.abspath('__file__' + \"/../../../../data/TrainSet.txt\"),sep='\\t')\n\ndata.drop(['Intensity','Odor','Replicate','Dilution'],axis=1, inplace=1)\ndata.columns = ['#oID', 'individual'] + list(data.columns)[2:]\ndata.head()\n\n# load leaderboard data and reshape them to match the training data\nLB_data_high = pd.read_csv(os.path.abspath('__file__' + \"/../../../../data/LBs1.txt\"),sep='\\t')\nLB_data_high = LB_data_high.pivot_table(index=['#oID','individual'],columns='descriptor',values='value')\nLB_data_high.reset_index(level=[0,1],inplace=1)\nLB_data_high.rename(columns={' CHEMICAL':'CHEMICAL'}, inplace=True)\nLB_data_high = LB_data_high[data.columns]\nLB_data_high.head()\n\n# load leaderboard low intensity data and reshape them to match the training data\nLB_data_low = pd.read_csv(os.path.abspath('__file__' + \"/../../../../data/leaderboard_set_Low_Intensity.txt\"),sep='\\t')\nLB_data_low = LB_data_low.pivot_table(index=['#oID','individual'],columns='descriptor',values='value')\nLB_data_low.reset_index(level=[0,1],inplace=1)\nLB_data_low.rename(columns={' CHEMICAL':'CHEMICAL'}, inplace=True)\nLB_data_low = LB_data_low[data.columns]\nLB_data_low.head()\n\n# put them all together\nselection_data = pd.concat((data,LB_data_high,LB_data_low),ignore_index=True)\n\n# replace descriptor data with np.nan if intensity is zero\nfor descriptor in [u'VALENCE/PLEASANTNESS', u'BAKERY', u'SWEET', u'FRUIT', u'FISH',\n u'GARLIC', u'SPICES', u'COLD', u'SOUR', u'BURNT', u'ACID', u'WARM',\n u'MUSKY', u'SWEATY', u'AMMONIA/URINOUS', u'DECAYED', u'WOOD',\n u'GRASS', u'FLOWER', u'CHEMICAL']:\n selection_data.loc[(selection_data['INTENSITY/STRENGTH'] == 0),descriptor] = np.nan\n\n#average them all\nselection_data = selection_data.groupby('#oID').mean()\nselection_data.drop('individual',1,inplace=1)\nselection_data.to_csv('targets_for_feature_selection.csv')\nselection_data.head()", "target data for training\nfilter out the relevant data for each compound", "# load the train data \ndata = pd.read_csv(os.path.abspath('__file__' + \"/../../../../data/TrainSet.txt\"),sep='\\t')\n\ndata.drop(['Odor','Replicate'],axis=1, inplace=1)\ndata.columns = [u'#oID','Intensity','Dilution', u'individual', u'INTENSITY/STRENGTH', u'VALENCE/PLEASANTNESS', u'BAKERY', u'SWEET', u'FRUIT', u'FISH', u'GARLIC', u'SPICES', u'COLD', u'SOUR', u'BURNT', u'ACID', u'WARM', u'MUSKY', u'SWEATY', u'AMMONIA/URINOUS', u'DECAYED', u'WOOD', u'GRASS', u'FLOWER', u'CHEMICAL']\ndata.head()\n\n#load LB data\nLB_data_high = pd.read_csv(os.path.abspath('__file__' + \"/../../../../data/LBs1.txt\"),sep='\\t')\nLB_data_high = LB_data_high.pivot_table(index=['#oID','individual'],columns='descriptor',values='value')\nLB_data_high.reset_index(level=[0,1],inplace=1)\nLB_data_high.rename(columns={' CHEMICAL':'CHEMICAL'}, inplace=True)\nLB_data_high['Dilution'] = '1/1,000 '\nLB_data_high['Intensity'] = 'high '\nLB_data_high = LB_data_high[data.columns]\nLB_data_high.head()\n\n# put them together\ndata = pd.concat((data,LB_data_high),ignore_index=True)\n# replace descriptor data with np.nan if intensity is zero\nfor descriptor in [u'VALENCE/PLEASANTNESS', u'BAKERY', 
u'SWEET', u'FRUIT', u'FISH',\n u'GARLIC', u'SPICES', u'COLD', u'SOUR', u'BURNT', u'ACID', u'WARM',\n u'MUSKY', u'SWEATY', u'AMMONIA/URINOUS', u'DECAYED', u'WOOD',\n u'GRASS', u'FLOWER', u'CHEMICAL']:\n data.loc[(data['INTENSITY/STRENGTH'] == 0),descriptor] = np.nan\n\n# average the duplicates \ndata = data.groupby(['individual','#oID','Dilution','Intensity']).mean() \ndata.reset_index(level=[2,3], inplace=True) \n\n#filter out data for intensity prediction\ndata_int = data[data.Dilution == '1/1,000 ']\n\n# filter out data for everything else\ndata = data[data.Intensity == 'high ']\n\n\n# replace the Intensity data with the data_int intensity values\ndata['INTENSITY/STRENGTH'] = data_int['INTENSITY/STRENGTH']\ndata.drop(['Dilution','Intensity'],inplace=1,axis=1)\ndata.reset_index(level=[0,1], inplace=True)\n\ndata.head()\n\ndata = data.groupby('#oID').mean()\n\ndata.shape\n\n#save it\ndata.to_csv('target.csv')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
machine-learning/enhance_contrast_of_greyscale_image.ipynb
mit
[ "Title: Enhance Contrast Of Greyscale Image\nSlug: enhance_contrast_of_greyscale_image\nSummary: How to enhance the contrast of images using OpenCV in Python. \nDate: 2017-09-11 12:00\nCategory: Machine Learning\nTags: Preprocessing Images \nAuthors: Chris Albon\nPreliminaries", "# Load image\nimport cv2\nimport numpy as np\nfrom matplotlib import pyplot as plt", "Load Image As Greyscale", "# Load image as grayscale\nimage = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)", "Enhance Image", "# Enhance image\nimage_enhanced = cv2.equalizeHist(image)", "View Image", "# Show image\nplt.imshow(image_enhanced, cmap='gray'), plt.axis(\"off\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
enoordeh/StatisticalMethods
examples/XrayImage/FirstLook.ipynb
gpl-2.0
[ "A First Look at an X-ray Image Dataset\nImages are data. They can be 2D, from cameras, or 1D, from spectrographs, or 3D, from IFUs (integral field units). In each case, the data come packaged as an array of numbers, which we can visualize, and do calculations with.\nLet's suppose we are interested in clusters of galaxies. We choose one, Abell 1835, and propose to observe it with the XMM-Newton space telescope. We are successful, we design the observations, and they are taken for us. Next: we download the data, and take a look at it.\nGetting the Data\nWe will download our images from HEASARC, the online archive where XMM data are stored.", "from __future__ import print_function\nimport astropy.io.fits as pyfits\nimport numpy as np\nimport os\nimport urllib\nimport astropy.visualization as viz\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 10.0)", "Download the example data files if we don't already have them.", "targdir = 'a1835_xmm'\nif not os.path.isdir(targdir):\n os.mkdir()\n\nfilenames = ('P0098010101M2U009IMAGE_3000.FTZ', \n 'P0098010101M2U009EXPMAP3000.FTZ',\n 'P0098010101M2X000BKGMAP3000.FTZ')\n\nremotedir = 'http://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/'\n\nfor filename in filenames:\n path = os.path.join(targdir, filename)\n url = os.path.join(remotedir, filename)\n if not os.path.isfile(path):\n urllib.urlretrieve(url, path)\n\nimagefile, expmapfile, bkgmapfile = [os.path.join(targdir, filename) for filename in filenames]\n \nfor filename in os.listdir(targdir):\n print('{0:>10.2f} KB {1}'.format(os.path.getsize(os.path.join(targdir, filename))/1024.0, filename))", "The XMM MOS2 image\nLet's find the \"science\" image taken with the MOS2 camera, and display it.", "imfits = pyfits.open(imagefile)\nimfits.info()", "imfits is a FITS object, containing multiple data structures. The image itself is an array of integer type, and size 648x648 pixels, stored in the primary \"header data unit\" or HDU. \n\nIf we need it to be floating point for some reason, we need to cast it:\nim = imfits[0].data.astype('np.float32')\nNote that this (probably?) prevents us from using the pyfits \"writeto\" method to save any changes. Assuming the integer type is ok, just get a pointer to the image data.\n\nAccessing the .data member of the FITS object returns the image data as a numpy ndarray.", "im = imfits[0].data", "Let's look at this with ds9.", "!ds9 -log \"$imagefile\"", "If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage.\n\nWe can also display the image in the notebook:", "plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');\nplt.savefig(\"figures/cluster_image.png\")", "Exercise\nWhat is going on in this image? \nMake a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes time.\n\nJust to prove that images really are arrays of numbers:", "im[350:359,350:359]\n\nindex = np.unravel_index(im.argmax(), im.shape)\nprint(\"image dimensions:\",im.shape)\nprint(\"location of maximum pixel value:\",index)\nprint(\"maximum pixel value: \",im[index])", "NB. Images read in with pyfits are indexed with eg im[y,x]: ds9 shows that the maximum pixel value is at \"image coordinates\" x=328, y=348. pyplot knows what to do, but sometimes we may need to take the transpose of the im array. 
What pyplot does need to be told is that in astronomy, the origin of the image is conventionally taken to be at the bottom left hand corner, not the top left hand corner. That's what the origin=lower in the plt.imshow command was about.\nWe will work in image coordinates throughout this course, for simplicity. Aligning images on the sky via a \"World Coordinate System\" is something to be learned elsewhere." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phanrahan/magmathon
notebooks/tutorial/coreir/FullAdder.ipynb
mit
[ "FullAdder - Combinational Circuits\nThis notebook walks through the implementation of a basic combinational circuit, a full adder. This example introduces many of the features of Magma including circuits, wiring, operators, and the type system.\nStart by importing magma and mantle. magma is the core system which implements circuits and the methods to compose them, and mantle is a library of useful circuits.", "import magma as m\nimport mantle", "A full adder has three single bit inputs, and returns the sum and the carry. The sum is the exclusive or of the 3 bits, the carry is 1 if any two of the inputs bits are 1. Here is a schematic of a full adder circuit (from logisim).\n<img src=\"images/full_adder_logisim.png\" width=\"500\"/>\nWe start by defining a magma combinational function that implements a full adder. \nThe full adder function takes three single bit inputs (type m.Bit) and returns two single bit outputs as a tuple.\nThe first element of tuple is the sum, the second element is the carry. Note that the arguments and return values of the functions have type annotations using Python 3's typing syntax.\nWe compute the sum and carry using standard Python bitwise operators &amp;, |, and ^.", "@m.circuit.combinational\ndef full_adder(A: m.Bit, B: m.Bit, C: m.Bit) -> (m.Bit, m.Bit):\n return A ^ B ^ C, A & B | B & C | C & A # sum, carry", "We can test our combinational function to verify that our implementation behaves as expected fault.\nWe'll use the fault.PythonTester which will simulate the circuit using magma's Python simulator.", "import fault\ntester = fault.PythonTester(full_adder)\nassert tester(1, 0, 0) == (1, 0), \"Failed\"\nassert tester(0, 1, 0) == (1, 0), \"Failed\"\nassert tester(1, 1, 0) == (0, 1), \"Failed\"\nassert tester(1, 0, 1) == (0, 1), \"Failed\"\nassert tester(1, 1, 1) == (1, 1), \"Failed\"\nprint(\"Success!\")", "combinational functions are polymorphic over Python and magma types. If the function is called with magma values, it will produce a circuit instance, wire up the inputs, and return references to the outputs. Otherwise, it will invoke the function in Python. For example, we can use the Python function to verify the circuit simulation.", "assert tester(1, 0, 0) == full_adder(1, 0, 0), \"Failed\"\nassert tester(0, 1, 0) == full_adder(0, 1, 0), \"Failed\"\nassert tester(1, 1, 0) == full_adder(1, 1, 0), \"Failed\"\nassert tester(1, 0, 1) == full_adder(1, 0, 1), \"Failed\"\nassert tester(1, 1, 1) == full_adder(1, 1, 1), \"Failed\"\nprint(\"Success!\")", "Circuits\nNow that we have an implementation of full_adder as a combinational function, \nwe'll use it to construct a magma Circuit. \nA Circuit in magma corresponds to a module in verilog.\nThis example shows using the combinational function inside a circuit definition, as opposed to using the Python implementation shown before.", "class FullAdder(m.Circuit):\n io = m.IO(I0=m.In(m.Bit),\n I1=m.In(m.Bit),\n CIN=m.In(m.Bit),\n O=m.Out(m.Bit),\n COUT=m.Out(m.Bit))\n \n O, COUT = full_adder(io.I0, io.I1, io.CIN)\n io.O @= O\n io.COUT @= COUT", "First, notice that the FullAdder is a subclass of Circuit. All magma circuits are classes in python.\nSecond, the function IO creates the interface to the circuit. \nThe arguments toIO are keyword arguments. \nThe key is the name of the argument in the circuit, and the value is its type. \nIn this circuit, all the inputs and outputs have Magma type Bit. 
\nWe also qualify each type as an input or an output using the functions In and Out.\nNote that when we call the python function fulladder\nit is passed magma values not standard python values.\nIn the previous cell, we tested fulladder with standard python ints,\nwhile in this case, the values passed to the Python fulladder function \nare magma values of type Bit.\nThe Python bitwise operators for Magma types are overloaded to automatically create subcircuits to compute logical functions.\nfulladder returns two values.\nThese values are assigned to the python variables O and COUT. \nRemember that assigning to a Python variable \nsets the variable to refer to the object.\nmagma values are Python objects,\nso assigning an object to a variable creates a reference to that magma value.\nIn order to complete the definition of the circuit, \nO and COUT need to be wired to the outputs in the interface.\nThe python @= operator is overloaded to perform wiring.\nLet's inspect the circuit definition by printing the __repr__.", "print(repr(FullAdder))", "We see that it has created an instance of the full_adder combinational function and wired up the interface.\nWe can also inspect the contents of the full_adder circuit definition. Notice that it has lowered the Python operators into a structural representation of the primitive logicoperations.", "print(repr(full_adder.circuit_definition))", "We can also inspect the code generated by the m.circuit.combinational decorator by looking in the .magma directory for a file named .magma/full_adder.py. When using m.circuit.combinational, magma will generate a file matching the name of the decorated function. You'll notice that the generated code introduces an extra temporary variable (this is an artifact of the SSA pass that magma runs to handle if/else statements).", "with open(\".magma/full_adder.py\") as f:\n print(f.read())", "In the code above, a mux is imported and named phi. If the combinational circuit contains any if-then-else constructs, they will be transformed into muxes.\nNote also the m.wire function. m.wire(O0, io.I0) is equivalent to io.O0 @= O0.\nStaged testing with Fault\nfault is a python package for testing magma circuits. By default, fault is quiet, so we begin by enabling logging using the built-in logging module", "import logging\nlogging.basicConfig(level=logging.INFO)\nimport fault", "Earlier in the notebook, we showed an example using fault.PythonTester to simulate a circuit. This uses an interactive programming model where test actions are immediately dispatched to the underlying simulator (which is why we can perform assertions on the simulation values in Python.\nfault also provides a staged metaprogramming environment built upon the Tester class. Using the staged environment means values are not returned immediately to Python. Instead, the Python test code records a sequence of actions that are compiled and run in a later stage.\nA Tester is instantiated with a magma circuit.", "tester = fault.Tester(FullAdder)", "An instance of a Tester has an attribute .circuit that enables the user to record test actions. For example, inputs to a circuit can be poked by setting the attribute corresponding to the input port name.", "tester.circuit.I0 = 1\ntester.circuit.I1 = 1\ntester.circuit.CIN = 1", "fault's default Tester provides the semantics of a cycle accurate simulator, so, unlike verilog, pokes do not create events that trigger computation. 
Instead, these poke values are staged, and the propagation of their effect occurs when the user calls the eval action.", "tester.eval()", "To assert that the output of the circuit is equal to a value, we use the expect methods that are defined on the attributes corresponding to circuit output ports", "tester.circuit.O.expect(1)\ntester.circuit.COUT.expect(1)", "Because fault is a staged programming environment, the above actions are not executed until we have advanced to the next stage. In the first stage, the user records test actions (e.g. poke, eval, expect). In the second stage, the test is compiled and run using a target runtime. Here are examples of running the test using magma's python simulator, the coreir c++ simulator, and verilator.", "# compile_and_run throws an exception if the test fails\ntester.compile_and_run(\"verilator\")", "The tester also provides the same convenient __call__ interface we saw before.", "O, COUT = tester(1, 0, 0)\ntester.expect(O, 1)\ntester.expect(COUT, 0)\ntester.compile_and_run(\"verilator\")", "Generate Verilog\nMagma's default compiler will generate verilog using CoreIR", "m.compile(\"build/FullAdder\", FullAdder, inline=True)\n%cat build/FullAdder.v", "Generate CoreIR\nWe can also inspect the intermediate CoreIR used in the generation process.", "%cat build/FullAdder.json", "Here's an example of running a CoreIR pass on the intermediate representation.", "!coreir -i build/FullAdder.json -p instancecount" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ALEXKIRNAS/DataScience
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
mit
[ "Сравнение метрик качества бинарной классификации\nProgramming Assignment\nВ этом задании мы разберемся, в чем состоит разница между разными метриками качества. Мы остановимся на задаче бинарной классификации (с откликами 0 и 1), но рассмотрим ее как задачу предсказания вероятности того, что объект принадлежит классу 1. Таким образом, мы будем работать с вещественной, а не бинарной целевой переменной.\nЗадание оформлено в стиле демонстрации с элементами Programming Assignment. Вам нужно запустить уже написанный код и рассмотреть предложенные графики, а также реализовать несколько своих функций. Для проверки запишите в отдельные файлы результаты работы этих функций на указанных наборах входных данных, это можно сделать с помощью предложенных в заданиях функций write_answer_N, N - номер задачи. Загрузите эти файлы в систему.\nДля построения графиков нужно импортировать соответствующие модули. \nБиблиотека seaborn позволяет сделать графики красивее. Если вы не хотите ее использовать, закомментируйте третью строку.\nБолее того, для выполнения Programming Assignment модули matplotlib и seaborn не нужны (вы можете не запускать ячейки с построением графиков и смотреть на уже построенные картинки).", "import numpy as np\nfrom matplotlib import pyplot as plt\nimport seaborn\n%matplotlib inline", "Что предсказывают алгоритмы\nДля вычисления метрик качества в обучении с учителем нужно знать только два вектора: вектор правильных ответов и вектор предсказанных величин; будем обозначать их actual и predicted. Вектор actual известен из обучающей выборки, вектор predicted возвращается алгоритмом предсказания. Сегодня мы не будем использовать какие-то алгоритмы классификации, а просто рассмотрим разные векторы предсказаний.\nВ нашей формулировке actual состоит из нулей и единиц, а predicted - из величин из интервала [0, 1] (вероятности класса 1). Такие векторы удобно показывать на scatter plot.\nЧтобы сделать финальное предсказание (уже бинарное), нужно установить порог T: все объекты, имеющие предсказание выше порога, относят к классу 1, остальные - к классу 0.", "# рисует один scatter plot\ndef scatter(actual, predicted, T):\n plt.scatter(actual, predicted)\n plt.xlabel(\"Labels\")\n plt.ylabel(\"Predicted probabilities\")\n plt.plot([-0.2, 1.2], [T, T])\n plt.axis([-0.1, 1.1, -0.1, 1.1])\n \n# рисует несколько scatter plot в таблице, имеющей размеры shape\ndef many_scatters(actuals, predicteds, Ts, titles, shape):\n plt.figure(figsize=(shape[1]*5, shape[0]*5))\n i = 1\n for actual, predicted, T, title in zip(actuals, predicteds, Ts, titles):\n ax = plt.subplot(shape[0], shape[1], i)\n ax.set_title(title)\n i += 1\n scatter(actual, predicted, T)", "Идеальная ситуация: существует порог T, верно разделяющий вероятности, соответствующие двум классам. Пример такой ситуации:", "actual_0 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \n 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])\npredicted_0 = np.array([ 0.19015288, 0.23872404, 0.42707312, 0.15308362, 0.2951875 ,\n 0.23475641, 0.17882447, 0.36320878, 0.33505476, 0.202608 ,\n 0.82044786, 0.69750253, 0.60272784, 0.9032949 , 0.86949819,\n 0.97368264, 0.97289232, 0.75356512, 0.65189193, 0.95237033,\n 0.91529693, 0.8458463 ])\n\nplt.figure(figsize=(5, 5))\nscatter(actual_0, predicted_0, 0.5)", "Интервалы вероятностей для двух классов прекрасно разделяются порогом T = 0.5.\nЧаще всего интервалы накладываются - тогда нужно аккуратно подбирать порог. 
\nСамый неправильный алгоритм делает все наоборот: поднимает вероятности класса 0 выше вероятностей класса 1. Если так произошло, стоит посмотреть, не перепутались ли метки 0 и 1 при создании целевого вектора из сырых данных.\nПримеры:", "actual_1 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n 1., 1., 1., 1.])\npredicted_1 = np.array([ 0.41310733, 0.43739138, 0.22346525, 0.46746017, 0.58251177,\n 0.38989541, 0.43634826, 0.32329726, 0.01114812, 0.41623557,\n 0.54875741, 0.48526472, 0.21747683, 0.05069586, 0.16438548,\n 0.68721238, 0.72062154, 0.90268312, 0.46486043, 0.99656541,\n 0.59919345, 0.53818659, 0.8037637 , 0.272277 , 0.87428626,\n 0.79721372, 0.62506539, 0.63010277, 0.35276217, 0.56775664])\nactual_2 = np.array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\npredicted_2 = np.array([ 0.07058193, 0.57877375, 0.42453249, 0.56562439, 0.13372737,\n 0.18696826, 0.09037209, 0.12609756, 0.14047683, 0.06210359,\n 0.36812596, 0.22277266, 0.79974381, 0.94843878, 0.4742684 ,\n 0.80825366, 0.83569563, 0.45621915, 0.79364286, 0.82181152,\n 0.44531285, 0.65245348, 0.69884206, 0.69455127])\n\nmany_scatters([actual_0, actual_1, actual_2], [predicted_0, predicted_1, predicted_2], \n [0.5, 0.5, 0.5], [\"Perfect\", \"Typical\", \"Awful algorithm\"], (1, 3))", "Алгоритм может быть осторожным и стремиться сильно не отклонять вероятности от 0.5, а может рисковать - делать предсказания близакими к нулю или единице.", "# рискующий идеальный алгоитм\nactual_0r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])\npredicted_0r = np.array([ 0.23563765, 0.16685597, 0.13718058, 0.35905335, 0.18498365,\n 0.20730027, 0.14833803, 0.18841647, 0.01205882, 0.0101424 ,\n 0.10170538, 0.94552901, 0.72007506, 0.75186747, 0.85893269,\n 0.90517219, 0.97667347, 0.86346504, 0.72267683, 0.9130444 ,\n 0.8319242 , 0.9578879 , 0.89448939, 0.76379055])\n# рискующий хороший алгоритм\nactual_1r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])\npredicted_1r = np.array([ 0.13832748, 0.0814398 , 0.16136633, 0.11766141, 0.31784942,\n 0.14886991, 0.22664977, 0.07735617, 0.07071879, 0.92146468,\n 0.87579938, 0.97561838, 0.75638872, 0.89900957, 0.93760969,\n 0.92708013, 0.82003675, 0.85833438, 0.67371118, 0.82115125,\n 0.87560984, 0.77832734, 0.7593189, 0.81615662, 0.11906964,\n 0.18857729])\n\nmany_scatters([actual_0, actual_1, actual_0r, actual_1r], \n [predicted_0, predicted_1, predicted_0r, predicted_1r], \n [0.5, 0.5, 0.5, 0.5],\n [\"Perfect careful\", \"Typical careful\", \"Perfect risky\", \"Typical risky\"], \n (2, 2))", "Также интервалы могут смещаться. Если алгоритм боится ошибок false positive, то он будет чаще делать предсказания, близкие к нулю. 
\nАналогично, чтобы избежать ошибок false negative, логично чаще предсказывать большие вероятности.", "actual_10 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n 1., 1., 1.])\npredicted_10 = np.array([ 0.29340574, 0.47340035, 0.1580356 , 0.29996772, 0.24115457, 0.16177793,\n 0.35552878, 0.18867804, 0.38141962, 0.20367392, 0.26418924, 0.16289102, \n 0.27774892, 0.32013135, 0.13453541, 0.39478755, 0.96625033, 0.47683139, \n 0.51221325, 0.48938235, 0.57092593, 0.21856972, 0.62773859, 0.90454639, 0.19406537,\n 0.32063043, 0.4545493 , 0.57574841, 0.55847795 ])\nactual_11 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])\npredicted_11 = np.array([ 0.35929566, 0.61562123, 0.71974688, 0.24893298, 0.19056711, 0.89308488,\n 0.71155538, 0.00903258, 0.51950535, 0.72153302, 0.45936068, 0.20197229, 0.67092724,\n 0.81111343, 0.65359427, 0.70044585, 0.61983513, 0.84716577, 0.8512387 , \n 0.86023125, 0.7659328 , 0.70362246, 0.70127618, 0.8578749 , 0.83641841, \n 0.62959491, 0.90445368])\n\nmany_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11], \n [0.5, 0.5, 0.5], [\"Typical\", \"Avoids FP\", \"Avoids FN\"], (1, 3))", "Мы описали разные характеры векторов вероятностей. Далее мы будем смотреть, как метрики оценивают разные векторы предсказаний, поэтому обязательно выполните ячейки, создающие векторы для визуализации.\nМетрики, оценивающие бинарные векторы предсказаний\nЕсть две типичные ситуации, когда специалисты по машинному обучению начинают изучать характеристики метрик качества: \n1. при участии в соревновании или решении прикладной задачи, когда вектор предсказаний оценивается по конкретной метрике, и нужно построить алгоритм, максимизирующий эту метрику.\n1. на этапе формализации задачи машинного обучения, когда есть требования прикладной области, и нужно предложить математическую метрику, которая будет соответствовать этим требованиям.\nДалее мы вкратце рассмотрим каждую метрику с этих двух позиций.\nPrecision и recall; accuracy\nДля начала разберемся с метриками, оценивающие качество уже после бинаризации по порогу T, то есть сравнивающие два бинарных вектора: actual и predicted.\nДве популярные метрики - precision и recall. Первая показывает, как часто алгоритм предсказывает класс 1 и оказывается правым, а вторая - как много объектов класса 1 алгоритм нашел. 
\nТакже рассмотрим самую простую и известную метрику - accuracy; она показывает долю правильных ответов.\nВыясним преимущества и недостатки этих метрик, попробовав их на разных векторах вероятностей.", "from sklearn.metrics import precision_score, recall_score, accuracy_score\n\nT = 0.5\nprint(\"Алгоритмы, разные по качеству:\")\nfor actual, predicted, descr in zip([actual_0, actual_1, actual_2], \n [predicted_0 > T, predicted_1 > T, predicted_2 > T],\n [\"Perfect:\", \"Typical:\", \"Awful:\"]):\n print(descr, \"precision =\", precision_score(actual, predicted), \"recall =\", \\\n recall_score(actual, predicted), \";\",\\\n \"accuracy =\", accuracy_score(actual, predicted))\nprint()\nprint(\"Осторожный и рискующий алгоритмы:\")\nfor actual, predicted, descr in zip([actual_1, actual_1r], \n [predicted_1 > T, predicted_1r > T],\n [\"Typical careful:\", \"Typical risky:\"]):\n print(descr, \"precision =\", precision_score(actual, predicted), \"recall =\", \\\n recall_score(actual, predicted), \";\",\\\n \"accuracy =\", accuracy_score(actual, predicted))\nprint()\nprint(\"Разные склонности алгоритмов к ошибкам FP и FN:\")\nfor actual, predicted, descr in zip([actual_10, actual_11], \n [predicted_10 > T, predicted_11 > T], \n [\"Avoids FP:\", \"Avoids FN:\"]):\n print(descr, \"precision =\", precision_score(actual, predicted), \"recall =\", \\\n recall_score(actual, predicted), \";\",\\\n \"accuracy =\", accuracy_score(actual, predicted))", "Все три метрики легко различают простые случаи хороших и плохих алгоритмов. Обратим внимание, что метрики имеют область значений [0, 1], и потому их легко интерпретировать.\nМетрикам не важны величины вероятностей, им важно только то, сколько объектов неправильно зашли за установленную границу (в данном случае T = 0.5).\nМетрика accuracy дает одинаковый вес ошибкам false positive и false negative, зато пара метрик precision и recall однозначно идентифицирует это различие. Собственно, их для того и используют, чтобы контролировать ошибки FP и FN.\nМы измерили три метрики, фиксировав порог T = 0.5, потому что для почти всех картинок он кажется оптимальным. Давайте посмотрим на последней (самой интересной для этих метрик) группе векторов, как меняются precision и recall при увеличении порога.", "from sklearn.metrics import precision_recall_curve\n\nprecs = []\nrecs = []\nthreshs = []\nlabels = [\"Typical\", \"Avoids FP\", \"Avoids FN\"]\nfor actual, predicted in zip([actual_1, actual_10, actual_11], \n [predicted_1, predicted_10, predicted_11]):\n prec, rec, thresh = precision_recall_curve(actual, predicted)\n precs.append(prec)\n recs.append(rec)\n threshs.append(thresh)\nplt.figure(figsize=(15, 5))\nfor i in range(3):\n ax = plt.subplot(1, 3, i+1)\n plt.plot(threshs[i], precs[i][:-1], label=\"precision\")\n plt.plot(threshs[i], recs[i][:-1], label=\"recall\")\n plt.xlabel(\"threshold\")\n ax.set_title(labels[i])\n plt.legend()", "При увеличении порога мы делаем меньше ошибок FP и больше ошибок FN, поэтому одна из кривых растет, а вторая - падает. По такому графику можно подобрать оптимальное значение порога, при котором precision и recall будут приемлемы. Если такого порога не нашлось, нужно обучать другой алгоритм. \nОговоримся, что приемлемые значения precision и recall определяются предметной областью. Например, в задаче определения, болен ли пациент определенной болезнью (0 - здоров, 1 - болен), ошибок false negative стараются избегать, требуя recall около 0.9. 
Можно сказать человеку, что он болен, и при дальнейшей диагностике выявить ошибку; гораздо хуже пропустить наличие болезни.\n<font color=\"green\" size=5>Programming assignment: problem 1. </font> Фиксируем порог T = 0.65; по графикам можно примерно узнать, чему равны метрики на трех выбранных парах векторов (actual, predicted). Вычислите точные precision и recall для этих трех пар векторов.\n6 полученных чисел запишите в текстовый файл в таком порядке:\nprecision_1 recall_1 precision_10 recall_10 precision_11 recall_11\nЦифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.\nПередайте ответ в функцию write_answer_1. Полученный файл загрузите в форму.", "############### Programming assignment: problem 1 ###############\nT = 0.65\nfor _actual, _predicted in zip([actual_1, actual_10, actual_11],\n [predicted_1, predicted_10, predicted_11]):\n print('Precision: %s' % precision_score(_actual, _predicted > T))\n print('Recall: %s\\n' % recall_score(_actual, _predicted > T))\n\ndef write_answer_1(precision_1, recall_1, precision_10, recall_10, precision_11, recall_11):\n answers = [precision_1, recall_1, precision_10, recall_10, precision_11, recall_11]\n with open(\"pa_metrics_problem1.txt\", \"w\") as fout:\n fout.write(\" \".join([str(num) for num in answers]))\n\nwrite_answer_1(1.0, 0.466666666667, 1.0, 0.133333333333, 0.647058823529, 0.846153846154)", "F1-score\nОчевидный недостаток пары метрик precision-recall - в том, что их две: непонятно, как ранжировать алгоритмы. Чтобы этого избежать, используют F1-метрику, которая равна среднему гармоническому precision и recall. \nF1-метрика будет равна 1, если и только если precision = 1 и recall = 1 (идеальный алгоритм). \n(: Обмануть F1 сложно: если одна из величин маленькая, а другая близка к 1 (по графикам видно, что такое соотношение иногда легко получить), F1 будет далека от 1. F1-метрику сложно оптимизировать, потому что для этого нужно добиваться высокой полноты и точности одновременно.\nНапример, посчитаем F1 для того же набора векторов, для которого мы строили графики (мы помним, что там одна из кривых быстро выходит в единицу).", "from sklearn.metrics import f1_score\n\nT = 0.5\nprint(\"Разные склонности алгоритмов к ошибкам FP и FN:\")\nfor actual, predicted, descr in zip([actual_1, actual_10, actual_11], \n [predicted_1 > T, predicted_10 > T, predicted_11 > T], \n [\"Typical:\", \"Avoids FP:\", \"Avoids FN:\"]):\n print(descr, \"f1 =\", f1_score(actual, predicted))", "F1-метрика в двух последних случаях, когда одна из парных метрик равна 1, значительно меньше, чем в первом, сбалансированном случае.\n<font color=\"green\" size=5>Programming assignment: problem 2. </font> На precision и recall влияют и характер вектора вероятностей, и установленный порог. \nДля тех же пар (actual, predicted), что и в предыдущей задаче, найдите оптимальные пороги, максимизирующие F1_score. Будем рассматривать только пороги вида T = 0.1 * k, k - целое; соответственно, нужно найти три значения k. Если f1 максимизируется при нескольких значениях k, укажите наименьшее из них.\nЗапишите найденные числа k в следующем порядке:\nk_1, k_10, k_11\nЦифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.\nПередайте ответ в функцию write_answer_2. 
Загрузите файл в форму.\nЕсли вы запишите список из трех найденных k в том же порядке в переменную ks, то с помощью кода ниже можно визуализировать найденные пороги:", "############### Programming assignment: problem 2 ###############\nks = np.zeros(3)\nidexes = np.empty(3)\nfor threshold in np.arange(11):\n T = threshold * 0.1\n for actual, predicted, idx in zip([actual_1, actual_10, actual_11], \n [predicted_1 > T, predicted_10 > T, predicted_11 > T], \n [0, 1, 2]):\n score = f1_score(actual, predicted)\n if score > ks[idx]:\n ks[idx] = score\n idexes[idx] = threshold\nprint(ks)\nprint(idexes)\nks = idexes\n\nmany_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11], \n np.array(ks)*0.1, [\"Typical\", \"Avoids FP\", \"Avoids FN\"], (1, 3))\n\ndef write_answer_2(k_1, k_10, k_11):\n answers = [k_1, k_10, k_11]\n with open(\"pa_metrics_problem2.txt\", \"w\") as fout:\n fout.write(\" \".join([str(num) for num in answers]))\n\nwrite_answer_2(5, 3, 6)", "Метрики, оценивающие векторы вероятностей класса 1\nРассмотренные метрики удобно интерпретировать, но при их использовании мы не учитываем большую часть информации, полученной от алгоритма. В некоторых задачах вероятности нужны в чистом виде, например, если мы предсказываем, выиграет ли команда в футбольном матче, и величина вероятности влияет на размер ставки за эту команду. Даже если в конце концов мы все равно бинаризуем предсказание, хочется следить за характером вектора вероятности. \nLog_loss\nLog_loss вычисляет правдоподобие меток в actual с вероятностями из predicted, взятое с противоположным знаком:\n$log_loss(actual, predicted) = - \\frac 1 n \\sum_{i=1}^n (actual_i \\cdot \\log (predicted_i) + (1-actual_i) \\cdot \\log (1-predicted_i))$, $n$ - длина векторов.\nСоответственно, эту метрику нужно минимизировать. \nВычислим ее на наших векторах:", "from sklearn.metrics import log_loss\n\nprint(\"Алгоритмы, разные по качеству:\")\nfor actual, predicted, descr in zip([actual_0, actual_1, actual_2], \n [predicted_0, predicted_1, predicted_2],\n [\"Perfect:\", \"Typical:\", \"Awful:\"]):\n print(descr, log_loss(actual, predicted))\nprint()\nprint(\"Осторожный и рискующий алгоритмы:\")\nfor actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r], \n [predicted_0, predicted_0r, predicted_1, predicted_1r],\n [\"Ideal careful\", \"Ideal risky\", \"Typical careful:\", \"Typical risky:\"]):\n print(descr, log_loss(actual, predicted))\nprint()\nprint(\"Разные склонности алгоритмов к ошибкам FP и FN:\")\nfor actual, predicted, descr in zip([actual_10, actual_11], \n [predicted_10, predicted_11], \n [\"Avoids FP:\", \"Avoids FN:\"]):\n print(descr, log_loss(actual, predicted))", "Как и предыдущие метрики, log_loss хорошо различает идеальный, типичный и плохой случаи. Но обратите внимание, что интерпретировать величину достаточно сложно: метрика не достигает нуля никогда и не имеет верхней границы. Поэтому даже для идеального алгоритма, если смотреть только на одно значение log_loss, невозможно понять, что он идеальный.\nНо зато эта метрика различает осторожный и рискующий алгоритмы. Как мы видели выше, в случаях Typical careful и Typical risky количество ошибок при бинаризации по T = 0.5 примерно одинаковое, в случаях Ideal ошибок вообще нет. Однако за неудачно угаданные классы в Typical рискующему алгоритму приходится платить большим увеличением log_loss, чем осторожному алгоритму. 
С другой стороны, за удачно угаданные классы рискованный идеальный алгоритм получает меньший log_loss, чем осторожный идеальный алгоритм.\nТаким образом, log_loss чувствителен и к вероятностям, близким к 0 и 1, и к вероятностям, близким к 0.5. \nОшибки FP и FN обычный Log_loss различать не умеет.\nОднако нетрудно сделать обобщение log_loss на случай, когда нужно больше штрафовать FP или FN: для этого достаточно добавить выпуклую (то есть неотрицательную и суммирующуюся к единице) комбинацию из двух коэффициентов к слагаемым правдоподобия. Например, давайте штрафовать false positive:\n$weighted_log_loss(actual, predicted) = -\\frac 1 n \\sum_{i=1}^n (0.3\\, \\cdot actual_i \\cdot \\log (predicted_i) + 0.7\\,\\cdot (1-actual_i)\\cdot \\log (1-predicted_i))$\nЕсли алгоритм неверно предсказывает большую вероятность первому классу, то есть объект на самом деле принадлежит классу 0, то первое слагаемое в скобках равно нулю, а второе учитывается с большим весом. \n<font color=\"green\" size=5>Programming assignment: problem 3. </font> Напишите функцию, которая берет на вход векторы actual и predicted и возвращает модифицированный Log-Loss, вычисленный по формуле выше. Вычислите ее значение (обозначим его wll) на тех же векторах, на которых мы вычисляли обычный log_loss, и запишите в файл в следующем порядке:\nwll_0 wll_1 wll_2 wll_0r wll_1r wll_10 wll_11\nЦифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.\nПередайте ответ в функцию write_answer3. Загрузите файл в форму.", "############### Programming assignment: problem 3 ##############\nans = []\ndef modified_log(actual, predicted):\n return - np.sum(0.3 * actual * np.log(predicted) + 0.7 * (1 - actual) * np.log(1 - predicted)) / len(actual)\n\nfor _actual, _predicted in zip([actual_0, actual_1, actual_2, actual_0r, actual_1r, actual_10, actual_11], \n [predicted_0, predicted_1, predicted_2, predicted_0r, predicted_1r, predicted_10, predicted_11]):\n print(modified_log(_actual, _predicted), log_loss(_actual, _predicted))\n ans.append(modified_log(_actual, _predicted))\n\ndef write_answer_3(ans):\n answers = ans\n with open(\"pa_metrics_problem3.txt\", \"w\") as fout:\n fout.write(\" \".join([str(num) for num in answers]))\n\nwrite_answer_3(ans)", "Обратите внимание на разницу weighted_log_loss между случаями Avoids FP и Avoids FN.\nROC и AUC\nПри построении ROC-кривой (receiver operating characteristic) происходит варьирование порога бинаризации вектора вероятностей, и вычисляются величины, зависящие от числа ошибок FP и FN. Эти величины задаются так, чтобы в случае, когда существует порог для идеального разделения классов, ROC-кривая проходила через определенную точку - верхний левый угол квадрата [0, 1] x [0, 1]. Кроме того, она всегда проходит через левый нижний и правый верхний углы. Получается наглядная визуализация качества алгоритма. 
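По осям ROC-кривой откладываются false positive rate = FP / (FP + TN) и true positive rate = TP / (TP + FN) (стандартные определения; именно эти величины подписаны на осях графиков ниже), вычисленные при каждом значении порога. 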
С целью охарактеризовать эту визуализацию численно, ввели понятие AUC - площадь под ROC-кривой.\nЕсть несложный и эффективный алгоритм, который за один проход по выборке вычисляет ROC-кривую и AUC, но мы не будем вдаваться в детали.\nПостроим ROC-кривые для наших задач:", "from sklearn.metrics import roc_curve, roc_auc_score\n\nplt.figure(figsize=(15, 5))\nplt.subplot(1, 3, 1)\naucs = \"\"\nfor actual, predicted, descr in zip([actual_0, actual_1, actual_2], \n [predicted_0, predicted_1, predicted_2],\n [\"Perfect\", \"Typical\", \"Awful\"]):\n fpr, tpr, thr = roc_curve(actual, predicted)\n plt.plot(fpr, tpr, label=descr)\n aucs += descr + \":%3f\"%roc_auc_score(actual, predicted) + \" \"\nplt.xlabel(\"false positive rate\")\nplt.ylabel(\"true positive rate\")\nplt.legend(loc=4)\nplt.axis([-0.1, 1.1, -0.1, 1.1])\nplt.subplot(1, 3, 2)\nfor actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r], \n [predicted_0, predicted_0r, predicted_1, predicted_1r],\n [\"Ideal careful\", \"Ideal Risky\", \"Typical careful\", \"Typical risky\"]):\n fpr, tpr, thr = roc_curve(actual, predicted)\n aucs += descr + \":%3f\"%roc_auc_score(actual, predicted) + \" \"\n plt.plot(fpr, tpr, label=descr)\nplt.xlabel(\"false positive rate\")\nplt.ylabel(\"true positive rate\")\nplt.legend(loc=4)\nplt.axis([-0.1, 1.1, -0.1, 1.1])\nplt.subplot(1, 3, 3)\nfor actual, predicted, descr in zip([actual_1, actual_10, actual_11], \n [predicted_1, predicted_10, predicted_11], \n [\"Typical\", \"Avoids FP\", \"Avoids FN\"]):\n fpr, tpr, thr = roc_curve(actual, predicted)\n aucs += descr + \":%3f\"%roc_auc_score(actual, predicted) + \" \"\n plt.plot(fpr, tpr, label=descr)\nplt.xlabel(\"false positive rate\")\nplt.ylabel(\"true positive rate\")\nplt.legend(loc=4)\nplt.axis([-0.1, 1.1, -0.1, 1.1])\nprint (aucs)", "Чем больше объектов в выборке, тем более гладкой выглядит кривая (хотя на самом деле она все равно ступенчатая).\nКак и ожидалось, кривые всех идеальных алгоритмов проходят через левый верхний угол. На первом графике также показана типичная ROC-кривая (обычно на практике они не доходят до \"идеального\" угла). \nAUC рискующего алгоритма значительном меньше, чем у осторожного, хотя осторожный и рискущий идеальные алгоритмы не различаются по ROC или AUC. Поэтому стремиться увеличить зазор между интервалами вероятностей классов смысла не имеет.\nНаблюдается перекос кривой в случае, когда алгоритму свойственны ошибки FP или FN. Однако по величине AUC это отследить невозможно (кривые могут быть симметричны относительно диагонали (0, 1)-(1, 0)). \nПосле того, как кривая построена, удобно выбирать порог бинаризации, в котором будет достигнут компромисс между FP или FN. Порог соответствует точке на кривой. Если мы хотим избежать ошибок FP, нужно выбирать точку на левой стороне квадрата (как можно выше), если FN - точку на верхней стороне квадрата (как можно левее). Все промежуточные точки будут соответствовать разным пропорциям FP и FN.\n<font color=\"green\" size=5>Programming assignment: problem 4. </font> На каждой кривой найдите точку, которая ближе всего к левому верхнему углу (ближе в смысле обычного евклидова расстояния), этой точке соответствует некоторый порог бинаризации. Запишите в выходной файл пороги в следующем порядке:\nT_0 T_1 T_2 T_0r T_1r T_10 T_11\nЦифры XXX после пробела соответствуют таким же цифрам в названиях переменных actual_XXX и predicted_XXX.\nЕсли порогов, минимизирующих расстояние, несколько, выберите наибольший.\nПередайте ответ в функцию write_answer_4. 
Загрузите файл в форму.\nПояснение: функция roc_curve возвращает три значения: FPR (массив абсцисс точек ROC-кривой), TPR (массив ординат точек ROC-кривой) и thresholds (массив порогов, соответствующих точкам).\nРекомендуем отрисовывать найденную точку на графике с помощью функции plt.scatter.", "############### Programming assignment: problem 4 ###############\nans = []\nfor actual, predicted in zip([actual_0, actual_1, actual_2, actual_0r, actual_1r, actual_10, actual_11], \n [predicted_0, predicted_1, predicted_2, predicted_0r, predicted_1r, predicted_10, predicted_11]):\n fpr, tpr, thr = roc_curve(actual, predicted)\n dist = np.sqrt(np.square(-fpr) + np.square(1 - tpr))\n idx = np.argmin(dist)\n print(thr[idx])\n ans.append(thr[idx])\n\ndef write_answer_4(ans):\n answers = ans\n with open(\"pa_metrics_problem4.txt\", \"w\") as fout:\n fout.write(\" \".join([str(num) for num in answers]))\n\nwrite_answer_4(ans)", "Наподобие roc_curve, строят также precision-recall curve и ищут площадь под ней.\nЗаключение\nМы рассмотрели несколько метрик бинарной классификации. Некоторые из них, например, log_loss, обобщаются на многоклассовый случай. Если метрику сложно обобщить в виде формулы, задачу многоклассовой классификации рассматривают как совокупность задач бинарной классификации и затем особыми способами усредняют метрику (например, micro и macro averaging).\nНа практике всегда полезно визуализировать векторы, которые выдает ваш алгоритм, чтобы понимать, какие он делает ошибки при разных порогах и как метрика реагирует на выдаваемые векторы предсказаний." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_clickable_image.ipynb
bsd-3-clause
[ "%matplotlib inline", "================================================================\nDemonstration of how to use ClickableImage / generate_2d_layout.\n================================================================\nIn this example, we open an image file, then use ClickableImage to\nreturn 2D locations of mouse clicks (or load a file already created).\nThen, we use generate_2d_layout to turn those xy positions into a layout\nfor use with plotting topo maps. In this way, you can take arbitrary xy\npositions and turn them into a plottable layout.", "# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>\n#\n# License: BSD (3-clause)\nfrom scipy.ndimage import imread\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom os import path as op\nimport mne\nfrom mne.viz import ClickableImage, add_background_image # noqa\nfrom mne.channels import generate_2d_layout # noqa\n\nprint(__doc__)\n\n# Set parameters and paths\nplt.rcParams['image.cmap'] = 'gray'\n\nim_path = op.join(op.dirname(mne.__file__), 'data', 'image', 'mni_brain.gif')\n# We've already clicked and exported\nlayout_path = op.join(op.dirname(mne.__file__), 'data', 'image')\nlayout_name = 'custom_layout.lout'", "Load data and click", "im = imread(im_path)\nplt.imshow(im)\n\"\"\"\nThis code opens the image so you can click on it. Commented out\nbecause we've stored the clicks as a layout file already.\n\n# The click coordinates are stored as a list of tuples\nclick = ClickableImage(im)\nclick.plot_clicks()\ncoords = click.coords\n\n# Generate a layout from our clicks and normalize by the image\nlt = generate_2d_layout(np.vstack(coords), bg_image=im)\nlt.save(layout_path + layout_name) # To save if we want\n\"\"\"\n# We've already got the layout, load it\nlt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)\n\n# Create some fake data\nnchans = len(lt.pos)\nnepochs = 50\nsr = 1000\nnsec = 5\nevents = np.arange(nepochs).reshape([-1, 1])\nevents = np.hstack([events, np.zeros([nepochs, 2], dtype=int)])\ndata = np.random.randn(nepochs, nchans, sr * nsec)\ninfo = mne.create_info(nchans, sr, ch_types='eeg')\nepochs = mne.EpochsArray(data, info, events)\nevoked = epochs.average()\n\n# Using the native plot_topo function with the image plotted in the background\nf = evoked.plot_topo(layout=lt, fig_background=im)" ]
[ "code", "markdown", "code", "markdown", "code" ]
LorenzoBi/courses
UQ/assignment_3/Assignment 3.ipynb
mit
[ "Lorenzo Biasi and Michael Aichmüller", "import numpy as np\nfrom scipy.special import binom\nimport matplotlib.pylab as plt\nfrom scipy.misc import factorial as fact\n%matplotlib inline\n\ndef binomial(p, n, k):\n return binom(n, k) * p ** k * (1 - p) ** (n-k)", "Exercise 1.\na.\n$\Omega$ will be all the possible combinations we have for 150 objects to have two different values. For example (0, 0, ..., 0), (1, 0, ..., 0), (0, 1, ..., 0), ... (1, 1, ..., 0), ... (1, 1, ..., 1). This sample space has a size of $2^{150}$. The random variable $X(\omega)$ will be the number of defective objects there are in the sample $\omega$. We can also define $Y(\omega) = 150 - X(\omega)$, which counts the number of non-defective (checked) items.\nb.\nThe binomial distribution is the distribution that gives the probability of the number of \"successes\" in a sequence of random and independent boolean values. This is the case for counting the number of broken objects in a group of 150, each with a probability of 4% of being broken.\nc.\nFor computing the probability that at most 4 objects are broken we need to sum the probabilities that $k$ objects are broken with $k \in [0, 4]$.\n$P(X<5) = \sum_{k=0}^{4} P(X=k) = \sum_{k=0}^{4} {150\choose k}p^k(1-p)^{150-k}$\nThe probability is 28 %", "p = 0.04\nnp.sum(binomial(p, 150, np.arange(5)))", "b.\nThe same as before, just that this time $k \in [5, 9]$. The probability is 64%", "np.sum(binomial(p, 150, np.arange(5, 10)))\n\nplt.bar(np.arange(20), binomial(p, 150, np.arange(20)))\nplt.bar(np.arange(5), binomial(p, 150, np.arange(5)))\nplt.bar(np.arange(5, 10), binomial(p, 150, np.arange(5,10)))\nplt.xlabel('# defectives')\nplt.ylabel('P(X=k)')", "Exercise 2.\nFor computing how big $q$ needs to be we can compute the probability $p^*$ that nobody has the same birthday in a group of $q$ and compute $1 - p^*$. The first two people will not have the same birthday with a probability of $364/365$, the probability that the third will also have a different birthday will be $364/365 * 363 / 365$. This will go on until the last person. One can make the computation and find that the minimum for having over 50% of probability that at least two people have the same birthday is 23, with p = 50.73%.", "def not_same_birthday(q):\n return np.prod((365 - np.arange(q))/ 365)\n\nq = 45\np = np.empty(q - 1)\nfor i in range(1, q):\n p[i - 1] = 1 - not_same_birthday(i)\nplt.plot(np.arange(1, q), p)\nplt.plot(23, 1 - not_same_birthday(23), 'r+', label='23 people')\nplt.grid()\nplt.ylabel('Probability')\nplt.xlabel('q')\nplt.legend()\n1 - not_same_birthday(23)", "Exercise 3.\na.\nLet's define $\Omega$ as all the possible combinations we can have with 3 throws of a 6-faced die. $\Omega$ will be then:", "import itertools\nx = [1, 2, 3, 4, 5, 6]\nomega = set([p for p in itertools.product(x, repeat=3)])\nprint(r'Omega has', len(omega), 'elements and they are:')\nprint(omega)", "X would be -30 when the sample $\omega$ has no 6s, 50 when has one, 75 when it has two, and 100 when it has three. 
The probability distribution of such a variable would be the binomial with $p = 1 / 6$, $n=3$ and $k$ the number of 6s.\nSo:\n$P_X(X = -30) = {3\choose 0}(1 / 6)^0(1-1/6)^{3-0}$\n$P_X(X = 50) = {3\choose 1}(1 / 6)^1(1-1/6)^{3-1}$\n$P_X(X = 75) = {3\choose 2}(1 / 6)^2(1-1/6)^{3-2}$\n$P_X(X = 100) = {3\choose 3}(1 / 6)^3(1-1/6)^{3-3}$\nb.\nI would take part in this competition: if we calculate the expected value $E[X] = \sum_k v_k \, P_X(X = v_k)$ over the possible winnings $v_k$ as suggested, we obtain $\approx$ 5.67 €.", "g = binomial(1 / 6, 3, np.arange(4)) * np.array([-30, 50, 75, 100])\nnp.sum(g)\n\nplt.bar(np.arange(4), g)\nplt.plot([-.5, 3.5], np.ones(2) * np.sum(g), 'r')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
saezlab/kinact
doc/KSEA_example.ipynb
gpl-3.0
[ "Protocol for Kinase-Substrate Enrichment Analysis (KSEA)\nThis IPython notebook accompanies the chapter 'Phosphoproteomics-based profiling of kinase activities in cancer cell' in the book 'Methods of Molecular Biology: Cancer Systems Biology' from Springer, 2016.\nThe script aims to demonstrate the methodology of KSEA, to facilitate grasping the operations performed in the provided code, and to enable reproduction of the implementation in other programming languages where required.", "# Import useful libraries\nimport numpy as np\nimport pandas as pd\n\n# Import required libraries for data visualisation\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Import the package\nimport kinact\n\n# Magic\n%matplotlib inline", "Quick Start", "# import data\ndata_fc, data_p_value = kinact.get_example_data()\n\n# import prior knowledge\nadj_matrix = kinact.get_kinase_targets()\n\nprint data_fc.head()\nprint\nprint data_p_value.head()\n\n# Perform ksea using the Mean method\nscore, p_value = kinact.ksea.ksea_mean(data_fc=data_fc['5min'].dropna(),\n interactions=adj_matrix,\n mP=data_fc['5min'].values.mean(),\n delta=data_fc['5min'].values.std())\nprint pd.DataFrame({'score': score, 'p_value': p_value}).head()\n\n# Perform ksea using the Alternative Mean method\nscore, p_value = kinact.ksea.ksea_mean_alt(data_fc=data_fc['5min'].dropna(),\n p_values=data_p_value['5min'],\n interactions=adj_matrix,\n mP=data_fc['5min'].values.mean(),\n delta=data_fc['5min'].values.std())\nprint pd.DataFrame({'score': score, 'p_value': p_value}).head()\n\n# Perform ksea using the Delta method\nscore, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'].dropna(), \n p_values=data_p_value['5min'], \n interactions=adj_matrix)\nprint pd.DataFrame({'score': score, 'p_value': p_value}).head()", "1. Loading the data\nIn order to perform the described kinase enrichment analysis, we load the data into a Pandas DataFrame. Here, we use the data from <em>de Graaf et al., 2014</em> for demonstration of KSEA. The data is available as supplemental material to the article online under http://mcponline.org/content/13/9/2426/suppl/DC1. The dataset of interest can be found in the Supplemental Table 2.\nWhen downloading the dataset from the internet, it will be provided as Excel spreadsheet. For the use in this script, it will have to saved as csv-file, using the 'Save As' function in Excel.\nIn the accompanying github repository, we will provide an already processed csv-file together with the code for KSEA.", "# Read data\ndata_raw = pd.read_csv('../kinact/data/deGraaf_2014_jurkat.csv', sep=',', header=0)\n\n# Filter for those p-sites that were matched ambiguously\ndata_reduced = data_raw[~data_raw['Proteins'].str.contains(';')]\n\n# Create identifier for each phosphorylation site, e.g. 
P06239_S59 for the Serine 59 in the protein Lck\ndata_reduced.loc[:, 'ID'] = data_reduced['Proteins'] + '_' + data_reduced['Amino acid'] + \\\n data_reduced['Positions within proteins']\ndata_indexed = data_reduced.set_index('ID')\n\n# Extract only relevant columns\ndata_relevant = data_indexed[[x for x in data_indexed if x.startswith('Average')]]\n\n# Rename columns\ndata_relevant.columns = [x.split()[-1] for x in data_relevant]\n\n# Convert abundaces into fold changes compared to control (0 minutes after stimulation)\ndata_fc = data_relevant.sub(data_relevant['0min'], axis=0)\ndata_fc.drop('0min', axis=1, inplace=True)\n\n# Also extract the p-values for the fold changes\ndata_p_value = data_indexed[[x for x in data_indexed if x.startswith('p value') and x.endswith('vs0min')]]\ndata_p_value.columns = [x.split('_')[-1].split('vs')[0] + 'min' for x in data_p_value]\ndata_p_value = data_p_value.astype('float') # Excel saved the p-values as strings, not as floating point numbers\n\nprint data_fc.head()\nprint data_p_value.head()", "2. Import prior-knowledge kinase-substrate relationships from PhosphoSitePlus\nIn the following example, we use the data from the PhosphoSitePlus database, which can be downloaded here: http://www.phosphosite.org/staticDownloads.action. \nConsider, that the downloaded file contains a disclaimer at the top of the file, which has to be removed before the file can be used as described below.", "# Read data\nks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\\t') \n# The data from the PhosphoSitePlus database is not provided as comma-separated value file (csv), \n# but instead, a tab = \\t delimits the individual cells\n\n# Restrict the data on interactions in the organism of interest\nks_rel_human = ks_rel.loc[(ks_rel['KIN_ORGANISM'] == 'human') & (ks_rel['SUB_ORGANISM'] == 'human')]\n\n# Create p-site identifier of the same format as before\nks_rel_human.loc[:, 'psite'] = ks_rel_human['SUB_ACC_ID'] + '_' + ks_rel_human['SUB_MOD_RSD']\n\n# Create adjencency matrix (links between kinases (columns) and p-sites (rows) are indicated with a 1, NA otherwise)\nks_rel_human.loc[:, 'value'] = 1\nadj_matrix = pd.pivot_table(ks_rel_human, values='value', index='psite', columns='GENE', fill_value=0)\nprint adj_matrix.head()\nprint adj_matrix.sum(axis=0).sort_values(ascending=False).head()", "3. KSEA\n3.1 Quick start for KSEA\nTogether with this tutorial, we will provide an implementation of KSEA as custom Python functions. Examplary, the use of the function for the dataset by de Graaf et al. 
could look like this.", "score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'],\n p_values=data_p_value['5min'],\n interactions=adj_matrix,\n )\nprint pd.DataFrame({'score': score, 'p_value': p_value}).head()\n\n# Calculate the KSEA scores for all data with the ksea_mean method\nactivity_mean = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],\n interactions=adj_matrix,\n mP=data_fc.values.mean(),\n delta=data_fc.values.std())[0]\n for c in data_fc})\nactivity_mean = activity_mean[['5min', '10min', '20min', '30min', '60min']]\nprint activity_mean.head()\n\n# Calculate the KSEA scores for all data with the ksea_mean method, using the median\nactivity_median = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],\n interactions=adj_matrix,\n mP=data_fc.values.mean(),\n delta=data_fc.values.std(), median=True)[0]\n for c in data_fc})\nactivity_median = activity_median[['5min', '10min', '20min', '30min', '60min']]\nprint activity_median.head()\n\n# Calculate the KSEA scores for all data with the ksea_mean_alt method\nactivity_mean_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],\n p_values=data_p_value[c],\n interactions=adj_matrix,\n mP=data_fc.values.mean(),\n delta=data_fc.values.std())[0]\n for c in data_fc})\nactivity_mean_alt = activity_mean_alt[['5min', '10min', '20min', '30min', '60min']]\nprint activity_mean_alt.head()\n\n# Calculate the KSEA scores for all data with the ksea_mean method, using the median\nactivity_median_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],\n p_values=data_p_value[c],\n interactions=adj_matrix,\n mP=data_fc.values.mean(),\n delta=data_fc.values.std(),\n median=True)[0]\n for c in data_fc})\nactivity_median_alt = activity_median_alt[['5min', '10min', '20min', '30min', '60min']]\nprint activity_median_alt.head()\n\n# Calculate the KSEA scores for all data with the ksea_delta method\nactivity_delta = pd.DataFrame({c: kinact.ksea.ksea_delta(data_fc=data_fc[c],\n p_values=data_p_value[c],\n interactions=adj_matrix)[0]\n for c in data_fc})\nactivity_delta = activity_delta[['5min', '10min', '20min', '30min', '60min']]\nprint activity_delta.head()\n\nsns.set(context='poster', style='ticks')\nsns.heatmap(activity_mean_alt, cmap=sns.blend_palette([sns.xkcd_rgb['amber'], \n sns.xkcd_rgb['almost black'], \n sns.xkcd_rgb['bright blue']], \n as_cmap=True))\nplt.show()", "In de Graaf et al., they associated (amongst others) the Casein kinase II alpha (CSNK2A1) with higher activity after prolonged stimulation with prostaglandin E2. Here, we plot the activity scores of CSNK2A1 for all three methods of KSEA, which are in good agreement.", "kinase='CSNK2A1'\ndf_plot = pd.DataFrame({'mean': activity_mean.loc[kinase],\n 'delta': activity_delta.loc[kinase],\n 'mean_alt': activity_mean_alt.loc[kinase]})\ndf_plot['time [min]'] = [5, 10, 20, 30, 60]\ndf_plot = pd.melt(df_plot, id_vars='time [min]', var_name='method', value_name='activity score')\ng = sns.FacetGrid(df_plot, col='method', sharey=False, size=3, aspect=1)\ng = g.map(sns.pointplot, 'time [min]', 'activity score')\nplt.subplots_adjust(top=.82)\nplt.show()", "3.2. KSEA in detail\nIn the following, we show in detail the computations that are carried out inside the provided functions. 
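For the mean method, for example, the score reported for a kinase is the z-statistic z = (mS - mP) * sqrt(m) / delta, where mS is the mean fold change over the kinase's substrate set, mP and delta are the mean and standard deviation of all measured fold changes, and m is the number of substrates found in the data; the p-value is then taken from the normal survival function. The cells below reproduce exactly these steps for one kinase. 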
Let us concentrate on a single condition (60 minutes after stimulation with prostaglandin E2) and a single kinase (CDK1).", "data_condition = data_fc['60min'].copy()\n\np_values = data_p_value['60min']\n\nkinase = 'CDK1'\n\nsubstrates = adj_matrix[kinase].replace(0, np.nan).dropna().index\n\ndetected_p_sites = data_fc.index\n\nintersect = list(set(substrates).intersection(detected_p_sites))", "3.2.1. Mean method", "mS = data_condition.loc[intersect].mean()\nmP = data_fc.values.mean()\nm = len(intersect)\ndelta = data_fc.values.std()\nz_score = (mS - mP) * np.sqrt(m) * 1/delta\nfrom scipy.stats import norm\np_value_mean = norm.sf(abs(z_score))\nprint mS, p_value_mean", "3.2.2. Alternative Mean method", "cut_off = -np.log10(0.05)\nset_alt = data_condition.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()\nmS_alt = set_alt.mean()\nz_score_alt = (mS_alt - mP) * np.sqrt(len(set_alt)) * 1/delta\np_value_mean_alt = norm.sf(abs(z_score_alt))\nprint mS_alt, p_value_mean_alt", "3.2.3. Delta Method", "cut_off = -np.log10(0.05)\n\nscore_delta = len(data_condition.loc[intersect].where((data_condition.loc[intersect] > 0) & \n (p_values.loc[intersect] > cut_off)).dropna()) -\\\n len(data_condition.loc[intersect].where((data_condition.loc[intersect] < 0) & \n (p_values.loc[intersect] > cut_off)).dropna())\nM = len(data_condition) \nn = len(intersect) \nN = len(np.where(p_values.loc[adj_matrix.index.tolist()] > cut_off)[0])\nfrom scipy.stats import hypergeom\nhypergeom_dist = hypergeom(M, n, N)\n\np_value_delta = hypergeom_dist.pmf(len(p_values.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()))\nprint score_delta, p_value_delta" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WNoxchi/Kaukasos
FAI_old/lesson2/L2HW.ipynb
mit
[ "Lesson 2 Assignment JNB -- Kaggle Galaxy Zoo\n10 May 2017 - Wayne H Nixalo\nPlan:\n1. Build a Linear Model (no activations) from scratch\n2. Build a 1-Layer Neural Network using linear model layers + activations\n3. Build a finetuned DLNN atop VGG16\n\nexperiment w/ SGD vs RMSprop\nexperiment w/ sigmoid vs tanh vs ReLU\ncompare scores of ea. model\nuse utils.py & vgg16.py source code + Keras.io documentation for help\n\nNote: I'm pretty sure that by \"from scratch\" what J. Howard means is a fresh linear model atop Vgg16.. Creating a straight Linear Model for image classification... does not sound... very sound..\nWhatever, build break learn.", "import keras\nimport numpy as np\nfrom keras.layers import Dense\nfrom keras.optimizers import SGD, RMSprop\n\n# Constants\nNum_Classes = 37 # assumption: the Kaggle Galaxy Zoo targets have 37 output columns\nbatch_size = 4\nlr = 0.01\n\n# Helper Functions\n\n# get_batches(..) copied from utils.py\n# gen.flow_from_directory() is an iterator that yields batches of images\n# from a directory indefinitely.\nfrom keras.preprocessing import image\ndef get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4, class_mode='categorical',\n target_size=(224,224)):\n return gen.flow_from_directory(dirname, target_size=target_size,\n class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)\n\n# fast array saving/loading\nimport bcolz\ndef save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w'); c.flush()\ndef load_array(fname): return bcolz.open(fname)[:]\n\n# One-Hot Encoding for Keras\nfrom sklearn.preprocessing import OneHotEncoder\ndef onehot(x): return np.array(OneHotEncoder().fit_transform(x.reshape(-1, 1)).todense())\n# should I use that or from Keras?\n# def onehot(x): return keras.utils.np_utils.to_categorical(x)\n\n# from utils.py -- retrieving data saved by bcolz\ndef get_data(path, target_size=(224,224)):\n batches = get_batches(path, shuffle=False, batch_size=1, class_mode=None, target_size=target_size)\n return np.concatenate([batches.next() for i in range(batches.nb_sample)])\n", "1. Basic Linear Model", "LM = keras.models.Sequential([Dense(Num_Classes, input_shape=(784,))])\n\nLM.compile(optimizer=SGD(lr=0.01), loss='mse')\n# LM.compile(optimizer=RMSprop(lr=0.01), loss='mse')\n\n", "2. 1-Layer Neural Network\n3. Finetuned VGG16", "import os, sys\n\n# make the course utils (utils.py, vgg16.py) importable\nsys.path.insert(1, os.path.join('..', 'utils'))\nfrom vgg16 import Vgg16 # assumes the course-provided vgg16.py module" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/bigquery-notebooks
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
apache-2.0
[ "Low-latency item-to-item recommendation system - Orchestrating with TFX\nOverview\nThis notebook is a part of the series that describes the process of implementing a Low-latency item-to-item recommendation system.\nThis notebook demonstrates how to use TFX and AI Platform Pipelines (Unified) to operationalize the workflow that creates embeddings and builds and deploys an ANN Service index. \nIn the notebook you go through the following steps.\n\nCreating TFX custom components that encapsulate operations on BQ, BQML and ANN Service.\nCreating a TFX pipeline that automates the processes of creating embeddings and deploying an ANN Index \nTesting the pipeline locally using Beam runner.\nCompiling the pipeline to the TFX IR format for execution on AI Platform Pipelines (Unified).\nSubmitting pipeline runs.\n\nThis notebook was designed to run on AI Platform Notebooks. Before running the notebook make sure that you have completed the setup steps as described in the README file.\nTFX Pipeline Design\nThe below diagram depicts the TFX pipeline that you will implement in this notebook. Each step of the pipeline is implemented as a TFX Custom Python function component. The components track the relevant metadata in AI Platform (Unfied) ML Metadata using both standard and custom metadata types. \n\n\nThe first step of the pipeline is to compute item co-occurence. This is done by calling the sp_ComputePMI stored procedure created in the preceeding notebooks. \nNext, the BQML Matrix Factorization model is created. The model training code is encapsulated in the sp_TrainItemMatchingModel stored procedure.\nItem embeddings are extracted from the trained model weights and stored in a BQ table. The component calls the sp_ExtractEmbeddings stored procedure that implements the extraction logic.\nThe embeddings are exported in the JSONL format to the GCS location using the BigQuery extract job.\nThe embeddings in the JSONL format are used to create an ANN index by calling the ANN Service Control Plane REST API.\nFinally, the ANN index is deployed to an ANN endpoint.\n\nAll steps and their inputs and outputs are tracked in the AI Platform (Unified) ML Metadata service.", "%load_ext autoreload\n%autoreload 2", "Setting up the notebook's environment\nInstall AI Platform Pipelines client library\nFor AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.", "AIP_CLIENT_WHEEL = \"aiplatform_pipelines_client-0.1.0.caip20201123-py3-none-any.whl\"\nAIP_CLIENT_WHEEL_GCS_LOCATION = (\n f\"gs://cloud-aiplatform-pipelines/releases/20201123/{AIP_CLIENT_WHEEL}\"\n)\n\n!gsutil cp {AIP_CLIENT_WHEEL_GCS_LOCATION} {AIP_CLIENT_WHEEL}\n\n%pip install {AIP_CLIENT_WHEEL}", "Restart the kernel.", "import IPython\n\napp = IPython.Application.instance()\napp.kernel.do_shutdown(True)", "Import notebook dependencies", "import logging\n\nimport tensorflow as tf\nimport tfx\nfrom aiplatform.pipelines import client\nfrom tfx.orchestration.beam.beam_dag_runner import BeamDagRunner\n\nprint(\"TFX Version: \", tfx.__version__)", "Configure GCP environment\n\nIf you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section, by running\nsh\ngcloud auth login\nin the Terminal window (which you can open via File > New in the menu). 
You only need to do this once per notebook instance.\nSet the following constants to the values reflecting your environment:\n\nPROJECT_ID - your GCP project ID\nPROJECT_NUMBER - your GCP project number\nBUCKET_NAME - a name of the GCS bucket that will be used to host artifacts created by the pipeline\nPIPELINE_NAME_SUFFIX - a suffix appended to the standard pipeline name. You can change to differentiate between pipelines from different users in a classroom environment\nAPI_KEY - a GCP API key\nVPC_NAME - a name of the GCP VPC to use for the index deployments. \nREGION - a compute region. Don't change the default - us-central - while the ANN Service is in the experimental stage", "PROJECT_ID = \"jk-mlops-dev\" # <---CHANGE THIS\nPROJECT_NUMBER = \"895222332033\" # <---CHANGE THIS\nAPI_KEY = \"AIzaSyBS_RiaK3liaVthTUD91XuPDKIbiwDFlV8\" # <---CHANGE THIS\nUSER = \"user\" # <---CHANGE THIS\nBUCKET_NAME = \"jk-ann-staging\" # <---CHANGE THIS\nVPC_NAME = \"default\" # <---CHANGE THIS IF USING A DIFFERENT VPC\n\nREGION = \"us-central1\"\nPIPELINE_NAME = \"ann-pipeline-{}\".format(USER)\nPIPELINE_ROOT = \"gs://{}/pipeline_root/{}\".format(BUCKET_NAME, PIPELINE_NAME)\nPATH=%env PATH\n%env PATH={PATH}:/home/jupyter/.local/bin\n\nprint(\"PIPELINE_ROOT: {}\".format(PIPELINE_ROOT))", "Defining custom components\nIn this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML and ANN Service calls. The components are TFX Custom Python function components. \nEach component is created as a separate Python module. You also create a couple of helper modules that encapsulate Python functions and classess used across the custom components. \nRemove files created in the previous executions of the notebook", "component_folder = \"bq_components\"\n\nif tf.io.gfile.exists(component_folder):\n print(\"Removing older file\")\n tf.io.gfile.rmtree(component_folder)\nprint(\"Creating component folder\")\ntf.io.gfile.mkdir(component_folder)\n\n%cd {component_folder}", "Define custom types for ANN service artifacts\nThis module defines a couple of custom TFX artifacts to track ANN Service indexes and index deployments.", "%%writefile ann_types.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Custom types for managing ANN artifacts.\"\"\"\n\nfrom tfx.types import artifact\n\nclass ANNIndex(artifact.Artifact):\n TYPE_NAME = 'ANNIndex'\n \nclass DeployedANNIndex(artifact.Artifact):\n TYPE_NAME = 'DeployedANNIndex'\n", "Create a wrapper around ANN Service REST API\nThis module provides a convenience wrapper around ANN Service REST API. 
In the experimental stage, the ANN Service does not have an \"official\" Python client SDK nor it is supported by the Google Discovery API.", "%%writefile ann_service.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Helper classes encapsulating ANN Service REST API.\"\"\"\n\nimport datetime\nimport logging\nimport json\nimport time\n\nimport google.auth\n\nclass ANNClient(object):\n \"\"\"Base ANN Service client.\"\"\"\n \n def __init__(self, project_id, project_number, region):\n credentials, _ = google.auth.default()\n self.authed_session = google.auth.transport.requests.AuthorizedSession(credentials)\n self.ann_endpoint = f'{region}-aiplatform.googleapis.com'\n self.ann_parent = f'https://{self.ann_endpoint}/v1alpha1/projects/{project_id}/locations/{region}'\n self.project_id = project_id\n self.project_number = project_number\n self.region = region\n \n def wait_for_completion(self, operation_id, message, sleep_time):\n \"\"\"Waits for a completion of a long running operation.\"\"\"\n \n api_url = f'{self.ann_parent}/operations/{operation_id}'\n\n start_time = datetime.datetime.utcnow()\n while True:\n response = self.authed_session.get(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.json())\n if 'done' in response.json().keys():\n logging.info('Operation completed!')\n break\n elapsed_time = datetime.datetime.utcnow() - start_time\n logging.info('{}. 
Elapsed time since start: {}.'.format(\n message, str(elapsed_time)))\n time.sleep(sleep_time)\n \n return response.json()['response']\n\n\nclass IndexClient(ANNClient):\n \"\"\"Encapsulates a subset of control plane APIs \n that manage ANN indexes.\"\"\"\n\n def __init__(self, project_id, project_number, region):\n super().__init__(project_id, project_number, region)\n\n def create_index(self, display_name, description, metadata):\n \"\"\"Creates an ANN Index.\"\"\"\n \n api_url = f'{self.ann_parent}/indexes'\n \n request_body = {\n 'display_name': display_name,\n 'description': description,\n 'metadata': metadata\n }\n \n response = self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n operation_id = response.json()['name'].split('/')[-1]\n \n return operation_id\n\n def list_indexes(self, display_name=None):\n \"\"\"Lists all indexes with a given display name or\n all indexes if the display_name is not provided.\"\"\"\n \n if display_name:\n api_url = f'{self.ann_parent}/indexes?filter=display_name=\"{display_name}\"'\n else:\n api_url = f'{self.ann_parent}/indexes'\n\n response = self.authed_session.get(api_url).json()\n\n return response['indexes'] if response else []\n \n def delete_index(self, index_id):\n \"\"\"Deletes an ANN index.\"\"\"\n \n api_url = f'{self.ann_parent}/indexes/{index_id}'\n response = self.authed_session.delete(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.text)\n\n\nclass IndexDeploymentClient(ANNClient):\n \"\"\"Encapsulates a subset of control plane APIs \n that manage ANN endpoints and deployments.\"\"\"\n \n def __init__(self, project_id, project_number, region):\n super().__init__(project_id, project_number, region)\n\n def create_endpoint(self, display_name, vpc_name):\n \"\"\"Creates an ANN endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints'\n network_name = f'projects/{self.project_number}/global/networks/{vpc_name}'\n\n request_body = {\n 'display_name': display_name,\n 'network': network_name\n }\n\n response = self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n operation_id = response.json()['name'].split('/')[-1]\n \n return operation_id\n \n def list_endpoints(self, display_name=None):\n \"\"\"Lists all ANN endpoints with a given display name or\n all endpoints in the project if the display_name is not provided.\"\"\"\n \n if display_name:\n api_url = f'{self.ann_parent}/indexEndpoints?filter=display_name=\"{display_name}\"'\n else:\n api_url = f'{self.ann_parent}/indexEndpoints'\n\n response = self.authed_session.get(api_url).json()\n \n return response['indexEndpoints'] if response else []\n \n def delete_endpoint(self, endpoint_id):\n \"\"\"Deletes an ANN endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'\n \n response = self.authed_session.delete(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.text)\n \n return response.json()\n \n def create_deployment(self, display_name, deployment_id, endpoint_id, index_id):\n \"\"\"Deploys an ANN index to an endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:deployIndex'\n index_name = f'projects/{self.project_number}/locations/{self.region}/indexes/{index_id}'\n\n request_body = {\n 'deployed_index': {\n 'id': deployment_id,\n 'index': index_name,\n 'display_name': display_name\n }\n }\n\n response = 
self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n operation_id = response.json()['name'].split('/')[-1]\n \n return operation_id\n \n def get_deployment_grpc_ip(self, endpoint_id, deployment_id):\n \"\"\"Returns a private IP address for a gRPC interface to \n an Index deployment.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'\n\n response = self.authed_session.get(api_url)\n if response.status_code != 200:\n raise RuntimeError(response.text)\n \n endpoint_ip = None\n if 'deployedIndexes' in response.json().keys():\n for deployment in response.json()['deployedIndexes']:\n if deployment['id'] == deployment_id:\n endpoint_ip = deployment['privateEndpoints']['matchGrpcAddress']\n \n return endpoint_ip\n\n \n def delete_deployment(self, endpoint_id, deployment_id):\n \"\"\"Undeployes an index from an endpoint.\"\"\"\n \n api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:undeployIndex'\n \n request_body = {\n 'deployed_index_id': deployment_id\n }\n \n response = self.authed_session.post(api_url, data=json.dumps(request_body))\n if response.status_code != 200:\n raise RuntimeError(response.text)\n \n return response\n ", "Create Compute PMI component\nThis component encapsulates a call to the BigQuery stored procedure that calculates item cooccurence. Refer to the preceeding notebooks for more details about item coocurrent calculations.\nThe component tracks the output item_cooc table created by the stored procedure using the TFX (simple) Dataset artifact.", "%%writefile compute_pmi.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"BigQuery compute PMI component.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset as BQDataset\n\n\n@component\ndef compute_pmi(\n project_id: Parameter[str],\n bq_dataset: Parameter[str],\n min_item_frequency: Parameter[int],\n max_group_size: Parameter[int],\n item_cooc: OutputArtifact[BQDataset]):\n \n stored_proc = f'{bq_dataset}.sp_ComputePMI'\n query = f'''\n DECLARE min_item_frequency INT64;\n DECLARE max_group_size INT64;\n\n SET min_item_frequency = {min_item_frequency};\n SET max_group_size = {max_group_size};\n\n CALL {stored_proc}(min_item_frequency, max_group_size);\n '''\n result_table = 'item_cooc'\n\n logging.info(f'Starting computing PMI...')\n \n client = bigquery.Client(project=project_id)\n query_job = client.query(query)\n query_job.result() # Wait for the job to complete\n \n logging.info(f'Items PMI computation completed. Output in {bq_dataset}.{result_table}.')\n \n # Write the location of the output table to metadata. 
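Downstream components (e.g. train_item_matching_model) read this 'table_name' custom property to locate the co-occurrence table.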
\n item_cooc.set_string_custom_property('table_name',\n f'{project_id}:{bq_dataset}.{result_table}')\n", "Create Train Item Matching Model component\nThis component encapsulates a call to the BigQuery stored procedure that trains the BQML Matrix Factorization model. Refer to the preceeding notebooks for more details about model training.\nThe component tracks the output item_matching_model BQML model created by the stored procedure using the TFX (simple) Model artifact.", "%%writefile train_item_matching.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"BigQuery compute PMI component.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset as BQDataset\nfrom tfx.types.standard_artifacts import Model as BQModel\n\n\n@component\ndef train_item_matching_model(\n project_id: Parameter[str],\n bq_dataset: Parameter[str],\n dimensions: Parameter[int],\n item_cooc: InputArtifact[BQDataset],\n bq_model: OutputArtifact[BQModel]):\n \n item_cooc_table = item_cooc.get_string_custom_property('table_name')\n stored_proc = f'{bq_dataset}.sp_TrainItemMatchingModel'\n query = f'''\n DECLARE dimensions INT64 DEFAULT {dimensions};\n CALL {stored_proc}(dimensions);\n '''\n model_name = 'item_matching_model'\n \n logging.info(f'Using item co-occurrence table: item_cooc_table')\n logging.info(f'Starting training of the model...')\n \n client = bigquery.Client(project=project_id)\n query_job = client.query(query)\n query_job.result()\n \n logging.info(f'Model training completed. Output in {bq_dataset}.{model_name}.')\n \n # Write the location of the model to metadata. \n bq_model.set_string_custom_property('model_name',\n f'{project_id}:{bq_dataset}.{model_name}')\n \n ", "Create Extract Embeddings component\nThis component encapsulates a call to the BigQuery stored procedure that extracts embdeddings from the model to the staging table. 
Refer to the preceeding notebooks for more details about embeddings extraction.\nThe component tracks the output item_embeddings table created by the stored procedure using the TFX (simple) Dataset artifact.", "%%writefile extract_embeddings.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Extracts embeddings to a BQ table.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset as BQDataset \nfrom tfx.types.standard_artifacts import Model as BQModel\n\n\n@component\ndef extract_embeddings(\n project_id: Parameter[str],\n bq_dataset: Parameter[str],\n bq_model: InputArtifact[BQModel],\n item_embeddings: OutputArtifact[BQDataset]):\n \n embedding_model_name = bq_model.get_string_custom_property('model_name')\n stored_proc = f'{bq_dataset}.sp_ExractEmbeddings'\n query = f'''\n CALL {stored_proc}();\n '''\n embeddings_table = 'item_embeddings'\n\n logging.info(f'Extracting item embeddings from: {embedding_model_name}')\n \n client = bigquery.Client(project=project_id)\n query_job = client.query(query)\n query_job.result() # Wait for the job to complete\n \n logging.info(f'Embeddings extraction completed. Output in {bq_dataset}.{embeddings_table}')\n \n # Write the location of the output table to metadata.\n item_embeddings.set_string_custom_property('table_name', \n f'{project_id}:{bq_dataset}.{embeddings_table}')\n \n\n ", "Create Export Embeddings component\nThis component encapsulates a BigQuery table extraction job that extracts the item_embeddings table to a GCS location as files in the JSONL format. 
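Each line of the export holds one item as a JSON object, for example (hypothetical values) {\"id\": \"2114406\", \"embedding\": [0.12, -0.07, ...]}, where the field names come from the columns of the item_embeddings table. 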
The format of the extracted files is compatible with the ingestion schema for the ANN Service.\nThe component tracks the output files location in the TFX (simple) Dataset artifact.", "%%writefile export_embeddings.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Exports embeddings from a BQ table to a GCS location.\"\"\"\n\nimport logging\n\nfrom google.cloud import bigquery\n\nimport tfx\nimport tensorflow as tf\n\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\n\nfrom tfx.types.experimental.simple_artifacts import Dataset \n\nBQDataset = Dataset\n\n@component\ndef export_embeddings(\n project_id: Parameter[str],\n gcs_location: Parameter[str],\n item_embeddings_bq: InputArtifact[BQDataset],\n item_embeddings_gcs: OutputArtifact[Dataset]):\n \n filename_pattern = 'embedding-*.json'\n gcs_location = gcs_location.rstrip('/')\n destination_uri = f'{gcs_location}/{filename_pattern}'\n \n _, table_name = item_embeddings_bq.get_string_custom_property('table_name').split(':')\n \n logging.info(f'Exporting item embeddings from: {table_name}')\n \n bq_dataset, table_id = table_name.split('.')\n client = bigquery.Client(project=project_id)\n dataset_ref = bigquery.DatasetReference(project_id, bq_dataset)\n table_ref = dataset_ref.table(table_id)\n job_config = bigquery.job.ExtractJobConfig()\n job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON\n\n extract_job = client.extract_table(\n table_ref,\n destination_uris=destination_uri,\n job_config=job_config\n ) \n extract_job.result() # Wait for resuls\n \n logging.info(f'Embeddings export completed. Output in {gcs_location}')\n \n # Write the location of the embeddings to metadata.\n item_embeddings_gcs.uri = gcs_location\n\n ", "Create ANN index component\nThis component encapsulats the calls to the ANN Service to create an ANN Index. 
\nThe component tracks the created index int the TFX custom ANNIndex artifact.", "%%writefile create_index.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Creates an ANN index.\"\"\"\n\nimport logging\n\nimport google.auth\nimport numpy as np\nimport tfx\nimport tensorflow as tf\n\nfrom google.cloud import bigquery\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\nfrom tfx.types.experimental.simple_artifacts import Dataset \n\nfrom ann_service import IndexClient\nfrom ann_types import ANNIndex\n\nNUM_NEIGHBOURS = 10\nMAX_LEAVES_TO_SEARCH = 200\nMETRIC = 'DOT_PRODUCT_DISTANCE'\nFEATURE_NORM_TYPE = 'UNIT_L2_NORM'\nCHILD_NODE_COUNT = 1000\nAPPROXIMATE_NEIGHBORS_COUNT = 50\n\n@component\ndef create_index(\n project_id: Parameter[str],\n project_number: Parameter[str],\n region: Parameter[str],\n display_name: Parameter[str],\n dimensions: Parameter[int],\n item_embeddings: InputArtifact[Dataset],\n ann_index: OutputArtifact[ANNIndex]):\n \n index_client = IndexClient(project_id, project_number, region)\n \n logging.info('Creating index:')\n logging.info(f' Index display name: {display_name}')\n logging.info(f' Embeddings location: {item_embeddings.uri}')\n \n index_description = display_name\n index_metadata = {\n 'contents_delta_uri': item_embeddings.uri,\n 'config': {\n 'dimensions': dimensions,\n 'approximate_neighbors_count': APPROXIMATE_NEIGHBORS_COUNT,\n 'distance_measure_type': METRIC,\n 'feature_norm_type': FEATURE_NORM_TYPE,\n 'tree_ah_config': {\n 'child_node_count': CHILD_NODE_COUNT,\n 'max_leaves_to_search': MAX_LEAVES_TO_SEARCH\n }\n }\n }\n \n operation_id = index_client.create_index(display_name, \n index_description,\n index_metadata)\n response = index_client.wait_for_completion(operation_id, 'Waiting for ANN index', 45)\n index_name = response['name']\n \n logging.info('Index {} created.'.format(index_name))\n \n # Write the index name to metadata.\n ann_index.set_string_custom_property('index_name', \n index_name)\n ann_index.set_string_custom_property('index_display_name', \n display_name)\n", "Deploy ANN index component\nThis component deploys an ANN index to an ANN Endpoint. 
\nThe componet tracks the deployed index in the TFX custom DeployedANNIndex artifact.", "%%writefile deploy_index.py\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Deploys an ANN index.\"\"\"\n\nimport logging\n\nimport numpy as np\nimport uuid\nimport tfx\nimport tensorflow as tf\n\nfrom google.cloud import bigquery\nfrom tfx.dsl.component.experimental.decorators import component\nfrom tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter\nfrom tfx.types.experimental.simple_artifacts import Dataset \n\nfrom ann_service import IndexDeploymentClient\nfrom ann_types import ANNIndex\nfrom ann_types import DeployedANNIndex\n\n\n@component\ndef deploy_index(\n project_id: Parameter[str],\n project_number: Parameter[str],\n region: Parameter[str],\n vpc_name: Parameter[str],\n deployed_index_id_prefix: Parameter[str],\n ann_index: InputArtifact[ANNIndex],\n deployed_ann_index: OutputArtifact[DeployedANNIndex]\n ):\n \n deployment_client = IndexDeploymentClient(project_id, \n project_number,\n region)\n \n index_name = ann_index.get_string_custom_property('index_name')\n index_display_name = ann_index.get_string_custom_property('index_display_name')\n endpoint_display_name = f'Endpoint for {index_display_name}'\n \n logging.info(f'Creating endpoint: {endpoint_display_name}')\n operation_id = deployment_client.create_endpoint(endpoint_display_name, vpc_name)\n response = deployment_client.wait_for_completion(operation_id, 'Waiting for endpoint', 30)\n endpoint_name = response['name']\n logging.info(f'Endpoint created: {endpoint_name}')\n \n endpoint_id = endpoint_name.split('/')[-1]\n index_id = index_name.split('/')[-1]\n deployed_index_display_name = f'Deployed {index_display_name}'\n deployed_index_id = deployed_index_id_prefix + str(uuid.uuid4())\n \n logging.info(f'Creating deployed index: {deployed_index_id}')\n logging.info(f' from: {index_name}')\n operation_id = deployment_client.create_deployment(\n deployed_index_display_name, \n deployed_index_id,\n endpoint_id,\n index_id)\n response = deployment_client.wait_for_completion(operation_id, 'Waiting for deployment', 60)\n logging.info('Index deployed!')\n \n deployed_index_ip = deployment_client.get_deployment_grpc_ip(\n endpoint_id, deployed_index_id\n )\n # Write the deployed index properties to metadata.\n deployed_ann_index.set_string_custom_property('endpoint_name', \n endpoint_name)\n deployed_ann_index.set_string_custom_property('deployed_index_id', \n deployed_index_id)\n deployed_ann_index.set_string_custom_property('index_name', \n index_name)\n deployed_ann_index.set_string_custom_property('deployed_index_grpc_ip', \n deployed_index_ip)\n", "Creating a TFX pipeline\nThe pipeline automates the process of preparing item embeddings (in BigQuery), training a Matrix Factorization model (in BQML), and creating and deploying an ANN Service index.\nThe pipeline has a simple sequential flow. 
The pipeline accepts a set of runtime parameters that define GCP environment settings and embeddings and index assembly parameters.", "import os\n\nfrom compute_pmi import compute_pmi\nfrom create_index import create_index\nfrom deploy_index import deploy_index\nfrom export_embeddings import export_embeddings\nfrom extract_embeddings import extract_embeddings\nfrom tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner\n# Only required for local run.\nfrom tfx.orchestration.metadata import sqlite_metadata_connection_config\nfrom tfx.orchestration.pipeline import Pipeline\nfrom train_item_matching import train_item_matching_model\n\n\ndef ann_pipeline(\n pipeline_name,\n pipeline_root,\n metadata_connection_config,\n project_id,\n project_number,\n region,\n vpc_name,\n bq_dataset_name,\n min_item_frequency,\n max_group_size,\n dimensions,\n embeddings_gcs_location,\n index_display_name,\n deployed_index_id_prefix,\n) -> Pipeline:\n \"\"\"Implements the SCANN training pipeline.\"\"\"\n\n pmi_computer = compute_pmi(\n project_id=project_id,\n bq_dataset=bq_dataset_name,\n min_item_frequency=min_item_frequency,\n max_group_size=max_group_size,\n )\n\n bqml_trainer = train_item_matching_model(\n project_id=project_id,\n bq_dataset=bq_dataset_name,\n item_cooc=pmi_computer.outputs.item_cooc,\n dimensions=dimensions,\n )\n\n embeddings_extractor = extract_embeddings(\n project_id=project_id,\n bq_dataset=bq_dataset_name,\n bq_model=bqml_trainer.outputs.bq_model,\n )\n\n embeddings_exporter = export_embeddings(\n project_id=project_id,\n gcs_location=embeddings_gcs_location,\n item_embeddings_bq=embeddings_extractor.outputs.item_embeddings,\n )\n\n index_constructor = create_index(\n project_id=project_id,\n project_number=project_number,\n region=region,\n display_name=index_display_name,\n dimensions=dimensions,\n item_embeddings=embeddings_exporter.outputs.item_embeddings_gcs,\n )\n\n index_deployer = deploy_index(\n project_id=project_id,\n project_number=project_number,\n region=region,\n vpc_name=vpc_name,\n deployed_index_id_prefix=deployed_index_id_prefix,\n ann_index=index_constructor.outputs.ann_index,\n )\n\n components = [\n pmi_computer,\n bqml_trainer,\n embeddings_extractor,\n embeddings_exporter,\n index_constructor,\n index_deployer,\n ]\n\n return Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n # Only needed for local runs.\n metadata_connection_config=metadata_connection_config,\n components=components,\n )", "Testing the pipeline locally\nYou will first run the pipeline locally using the Beam runner.\nClean the metadata and artifacts from the previous runs", "pipeline_root = f\"/tmp/{PIPELINE_NAME}\"\nlocal_mlmd_folder = \"/tmp/mlmd\"\n\nif tf.io.gfile.exists(pipeline_root):\n print(\"Removing previous artifacts...\")\n tf.io.gfile.rmtree(pipeline_root)\nif tf.io.gfile.exists(local_mlmd_folder):\n print(\"Removing local mlmd SQLite...\")\n tf.io.gfile.rmtree(local_mlmd_folder)\nprint(\"Creating mlmd directory: \", local_mlmd_folder)\ntf.io.gfile.mkdir(local_mlmd_folder)\nprint(\"Creating pipeline root folder: \", pipeline_root)\ntf.io.gfile.mkdir(pipeline_root)", "Set pipeline parameters and create the pipeline", "bq_dataset_name = \"song_embeddings\"\nindex_display_name = \"Song embeddings\"\ndeployed_index_id_prefix = \"deployed_song_embeddings_\"\nmin_item_frequency = 15\nmax_group_size = 100\ndimensions = 50\nembeddings_gcs_location = f\"gs://{BUCKET_NAME}/embeddings\"\n\nmetadata_connection_config = sqlite_metadata_connection_config(\n 
os.path.join(local_mlmd_folder, \"metadata.sqlite\")\n)\n\npipeline = ann_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=pipeline_root,\n metadata_connection_config=metadata_connection_config,\n project_id=PROJECT_ID,\n project_number=PROJECT_NUMBER,\n region=REGION,\n vpc_name=VPC_NAME,\n bq_dataset_name=bq_dataset_name,\n index_display_name=index_display_name,\n deployed_index_id_prefix=deployed_index_id_prefix,\n min_item_frequency=min_item_frequency,\n max_group_size=max_group_size,\n dimensions=dimensions,\n embeddings_gcs_location=embeddings_gcs_location,\n)", "Start the run", "logging.getLogger().setLevel(logging.INFO)\n\nBeamDagRunner().run(pipeline)", "Inspect produced metadata\nDuring the execution of the pipeline, the inputs and outputs of each component have been tracked in ML Metadata.", "from ml_metadata import metadata_store\nfrom ml_metadata.proto import metadata_store_pb2\n\nconnection_config = metadata_store_pb2.ConnectionConfig()\nconnection_config.sqlite.filename_uri = os.path.join(\n local_mlmd_folder, \"metadata.sqlite\"\n)\nconnection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE\nstore = metadata_store.MetadataStore(connection_config)\nstore.get_artifacts()", "NOTICE. The following code does not work with ANN Service Experimental. It will be finalized when the service moves to the Preview stage.\nRunning the pipeline on AI Platform Pipelines\nYou will now run the pipeline on AI Platform Pipelines (Unified)\nPackage custom components into a container\nThe modules containing custom components must be first package as a docker container image, which is a derivative of the standard TFX image.\nCreate a Dockerfile", "%%writefile Dockerfile\nFROM gcr.io/tfx-oss-public/tfx:0.25.0\nWORKDIR /pipeline\nCOPY ./ ./\nENV PYTHONPATH=\"/pipeline:${PYTHONPATH}\"", "Build and push the docker image to Container Registry", "!gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom:{USER} .", "Create AI Platform Pipelines client", "from aiplatform.pipelines import client\n\naipp_client = client.Client(project_id=PROJECT_ID, region=REGION, api_key=API_KEY)", "Set the the parameters for AIPP execution and create the pipeline", "metadata_connection_config = None\npipeline_root = PIPELINE_ROOT\n\npipeline = ann_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=pipeline_root,\n metadata_connection_config=metadata_connection_config,\n project_id=PROJECT_ID,\n project_number=PROJECT_NUMBER,\n region=REGION,\n vpc_name=VPC_NAME,\n bq_dataset_name=bq_dataset_name,\n index_display_name=index_display_name,\n deployed_index_id_prefix=deployed_index_id_prefix,\n min_item_frequency=min_item_frequency,\n max_group_size=max_group_size,\n dimensions=dimensions,\n embeddings_gcs_location=embeddings_gcs_location,\n)", "Compile the pipeline", "config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(\n project_id=PROJECT_ID,\n display_name=PIPELINE_NAME,\n default_image=\"gcr.io/{}/caip-tfx-custom:{}\".format(PROJECT_ID, USER),\n)\nrunner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(\n config=config, output_filename=\"pipeline.json\"\n)\nrunner.compile(pipeline, write_out=True)", "Submit the pipeline run", "aipp_client.create_run_from_job_spec(\"pipeline.json\")", "License\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License. 
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \nSee the License for the specific language governing permissions and limitations under the License.\nThis is not an official Google product but sample code provided for an educational purpose" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
torgebo/deep_learning_workshop
4-gan/2-gan-mnist.ipynb
mit
[ "\"\"\"This area sets up the Jupyter environment.\nPlease do not modify anything in this cell.\n\"\"\"\nimport os\nimport sys\nimport time\n\n# Add project to PYTHONPATH for future use\nsys.path.insert(1, os.path.join(sys.path[0], '..'))\n\n# Import miscellaneous modules\nfrom IPython.core.display import display, HTML\n\n# Set CSS styling\nwith open('../admin/custom.css', 'r') as f:\n style = \"\"\"<style>\\n{}\\n</style>\"\"\".format(f.read())\n display(HTML(style))\n\n# Plots will be show inside the notebook\n%matplotlib notebook\nimport matplotlib.pyplot as plt\n\nimport problem_unittests as tests", "Generative Adversarial Networks 2\n<div class=\"alert alert-warning\">\nThis is a continuation of the previous notebook, where we learned the gist of what a generative adversarial network (GAN) is and how to learn a 1-d multimodal distribution. Please refer back to the last notebook if you are unsure about what a GAN is.\n</div>\n\nExample: MNIST Dataset\nIn this notebook we will use a GAN to generate samples coming from the familiar MNIST dataset.\nWe will start loading by our data.\n<div class=\"alert alert-info\">\n <strong>In the following snippet of code we will:</strong>\n <ul>\n <li>Load data from MNIST </li>\n <li>Merge the training and test set</li>\n </ul>\n</div>", "import numpy as np \nfrom keras.datasets import mnist\n\nimport admin.tools as tools\n\n\n# Load MNIST data\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_data = np.concatenate((X_train, X_test))", "Input Pre-Processing\nAs we have done previously with MNIST, the first thing we will be doing is normalisation. However, this time we will normalise the 8-bit images from [0, 255] to [-1, 1].\nPrevious research with GANs indicates that this normalisation yields better results (reference paper).\nTask I: Implement an Image Normalisation Function\n<div class=\"alert alert-success\">\n**Task**: Implement a function that normalises the images to the interval [-1,1].\n<ul>\n <li>Inputs are integers in the interval [0,255]</li>\n <li>Outputs should be floats in the interval [-1,1]</li>\n</ul>\n</div>", "def normalize_images(images):\n \"\"\"\n Create Matrix Y\n :param images: Np tensor with N x R x C x CH.\n Where R = Number of rows in a image\n Where C = Number of cols in a image\n Where CH = Number of channles in a image\n \n :return: images with its values normalized to [-1,1].\n \"\"\"\n images = None\n return images\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n# Test normalisation function and normalise the data if it passes\ntests.test_normalize_images(normalize_images)\nX_data = normalize_images(X_data)", "As we did in a previous notebook we will add an extra dimension to our greyscale images.\n<div class=\"alert alert-info\">\n<strong>In the following code snippet we will:</strong>\n<ul>\n <li>Transform `X_data` from $(28,28)$ to $(28,28,1)$</li>\n</ul>\n</div>", "X_data = np.expand_dims(X_data, axis=-1)\n\nprint('Shape of X_data {}'.format(X_data.shape))", "Task II: Implement a Generator Network\n<div class=\"alert alert-success\">\n<strong>Task:</strong> :\n<ul>\n <li>Make a network that accepts inputs where the shape is defined by `zdim` $\\rightarrow$ `shape=(z_dim,)`</li>\n <li>The number of outputs of your network need to be defined as `nb_outputs`</li>\n <li>Reshape the final layer to be in the shape of `output_shape`</li>\n</ul>\n</div>\n\n\nSince the data lies in the range [-1,1] try using the 'tanh' as the final activation function.\n\nKeras references: 
Reshape()", "# Import some useful keras libraries\nimport keras\nfrom keras.models import Model\nfrom keras.layers import *\n\n\ndef generator(z_dim, nb_outputs, ouput_shape):\n \n # Define the input_noise as a function of Input()\n latent_var = None\n\n # Insert the desired amount of layers for your network\n x = None\n \n # Map you latest layer to n_outputs\n x = None\n \n # Reshape you data\n x = Reshape(ouput_shape)(x)\n\n model = Model(inputs=latent_var, outputs=x)\n\n return model", "Now, let's build a generative network using the function you just made.\n<div class=\"alert alert-info\">\n <strong>In the following code snippet we will:</strong>\n<ul>\n <li>Define the number of dimensions of the latent vector $\\mathbf{z}$</li>\n <li>Find out the shape of a sample of data</li>\n <li>Compute numbers of dimensions in a sample of data</li>\n <li>Create the network using your function</li>\n <li>Display a summary of your generator network</li>\n</ul>\n</div>", "# Define the dimension of the latent vector\nz_dim = 100\n\n# Dimension of our sample\nsample_dimentions = (28, 28, 1)\n\n# Calculate the number of dimensions in a sample\nn_dimensions=1\nfor x in list(sample_dimentions):\n n_dimensions *= x\n\nprint('A sample of data has shape {} composed of {} dimension(s)'.format(sample_dimentions, n_dimensions))\n\n# Create the generative network\nG = generator(z_dim, n_dimensions, sample_dimentions)\n\n# We recommend the followin optimiser\ng_optim = keras.optimizers.adam(lr=0.002, beta_1=0.5, beta_2=0.999, epsilon=1e-08, decay=0.0)\n\n# Compile network\nG.compile (loss='binary_crossentropy', optimizer=g_optim)\n\n# Network Summary\nG.summary()", "Task III: Implement a Discriminative Network\nThe discriminator network is a simple binary classifier where the output indicates the probability of the input data being real or fake.\n<div class=\"alert alert-success\">\n<strong>Task:</strong>\n<ul>\n <li> Create a network where the input shape is `input_shape`\n <li> We recomend reshaping your network just after input. 
This way you can have a vector with shape `(None, nb_inputs)`</li>\n <li> Implement a simple network that can classify data</li>\n</ul>\n</div>\n\nKeras references: Reshape()", "def discriminator(input_shape, nb_inputs):\n # Define the network input to have input_shape shape\n input_x = None\n \n # Reshape your input\n x = None\n \n # Implement the rest of you classifier\n x = None\n \n probabilities = Dense(1, activation='sigmoid')(x)\n \n model = Model(inputs=input_x, outputs=probabilities)\n\n return model", "Now, let's build a discriminator network using the function you just made.\n<div class=\"alert alert-info\">\n<strong>In the following code snippet we will:</strong>\n<ul>\n <li>Create the network using your function</li>\n <li>Display a summary of your generator network</li>\n</ul>\n</div>", "# We already computed the shape and number of dimensions in a data sample\nprint('The data has shape {} composed of {} dimension(s)'.format(sample_dimentions, n_dimensions))\n\n# Discriminative Network\nD = discriminator(sample_dimentions,n_dimensions)\n\n# Recommended optimiser\nd_optim = keras.optimizers.adam(lr=0.002, beta_1=0.5, beta_2=0.999, epsilon=1e-08, decay=0.0)\n\n# Compile Network\nD.compile(loss='binary_crossentropy', optimizer=d_optim)\n\n# Network summary\nD.summary()", "Putting the GAN together\nIn the following code we will put the generator and discriminator together so we can train our adversarial model.\n<div class=\"alert alert-info\">\n<strong>In the following code snippet we will:</strong>\n<ul>\n <li>Use the generator and discriminator to construct a GAN</li>\n</ul>\n</div>", "from keras.models import Sequential\n\n\ndef build(generator, discriminator):\n \"\"\"Build a base model for a Generative Adversarial Networks.\n Parameters\n ----------\n generator : keras.engine.training.Model\n A keras model built either with keras.models ( Model, or Sequential ).\n This is the model that generates the data for the Generative Adversarial networks.\n Discriminator : keras.engine.training.Model\n A keras model built either with keras.models ( Model, or Sequential ).\n This is the model that is a binary classifier for REAL/GENERATED data.\n Returns\n -------\n (keras.engine.training.Model)\n It returns a Sequential Keras Model by connecting a Generator model to a\n Discriminator model. 
[ generator-->discriminator]\n \"\"\"\n model = Sequential()\n model.add(generator)\n discriminator.trainable = False\n model.add(discriminator)\n return model\n\n\n# Create GAN\nG_plus_D = build(G, D)\nG_plus_D.compile(loss='binary_crossentropy', optimizer=g_optim)\nD.trainable = True", "Task IV: Define Hyperparameters\nPlease define the following hyper-parameters to train your GAN.\n<br>\n<div class=\"alert alert-success\">\n <strong>Task:</strong> Please define the following hyperparameters to train your GAN:\n <ul>\n <li> Batch size</li>\n <li>Number of training epochs</li>\n </ul>\n</div>", "BATCH_SIZE = 32\nNB_EPOCHS = 50", "<div class=\"alert alert-info\">\n <strong>In the following code snippet we will:</strong>\n<ul>\n <li>Train the constructed GAN</li>\n <li>Live plot the generated data</li>\n</ul>\n</div>", "# Figure for live plot\nfig, ax = plt.subplots(1,1)\n\n# Allocate space for noise variable\nz = np.zeros((BATCH_SIZE, z_dim))\n\n# n_bathces\nnumber_of_batches = int(X_data.shape[0] / BATCH_SIZE)\n\nfor epoch in range(NB_EPOCHS):\n for index in range(number_of_batches):\n \n # Sample minimibath m=BATCH_SIZE from data generating distribution\n # in other words :\n # Grab a batch of the real data\n data_batch = X_data[index*BATCH_SIZE:(index+1)*BATCH_SIZE]\n \n # Sample minibatch of m= BATCH_SIZE noise samples\n # in other words, we sample from a uniform distribution\n z = np.random.uniform(-1, 1, (BATCH_SIZE,z_dim))\n\n # Sample minibatch m=BATCH_SIZE from data generating distribution Pdata\n # in ohter words\n # Use genrator to create new fake samples\n generated_batch = G.predict(z, verbose=0)\n\n # Update/Train discriminator D\n X = np.concatenate((data_batch, generated_batch))\n y = [1] * BATCH_SIZE + [0.0] * BATCH_SIZE\n\n d_loss = D.train_on_batch(X, y)\n\n # Sample minibatch of m= BATCH_SIZE noise samples\n # in other words, we sample from a uniform distribution\n z = np.random.uniform(-1, 1, (BATCH_SIZE,z_dim))\n\n #Update Generator while not updating discriminator\n D.trainable = False\n # to do gradient ascent we just flip the labels ...\n g_loss = G_plus_D.train_on_batch(z, [1] * BATCH_SIZE)\n D.trainable = True\n \n # Plot data every 10 mini batches\n if index % 10 == 0:\n ax.clear() \n\n # Histogram of generated data\n image =tools.combine_images(X)\n\n image = image*127.5+127.5\n ax.imshow(image.astype(np.uint8))\n fig.canvas.draw()\n time.sleep(0.01)\n\n\n # End of epoch ....\n print(\"epoch %d : g_loss : %f | d_loss : %f\" % (epoch, g_loss, d_loss))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dm-wyncode/zipped-code
content/posts/meditations/Python_objects.ipynb
mit
[ "Classes are objects in Python.\nI like to take examples from one programming language and attempt to use them in another.\nI asked myself if Python code could be written to imitate JavaScript's prototypical inheritence where prototypical inheritence is defined as:\n\n…when an object inherits from another object. This differs from classical inheritance, in which a class inherits from another class.\n\n—https://www.quora.com/What-is-prototypal-inheritance\nClasses are in indeed objects in Python so in a way Python classes are objects that ineed do, too, inherit from other objects. It just isn't implemented exactly the same in JavaScript.\nI did find a blog post about writing a Python library that has objects behaving like JavaScript objects.\nThe material below is adapted from from What is a metaclass in Python?\n\nPython has a very peculiar idea of what classes are, borrowed from the Smalltalk language.", "from pprint import pprint\n\n%%HTML\n<p style=\"color:red;font-size: 150%;\">Classes are more than that in Python. Classes are objects too.</p>\n\n%%HTML\n<p style=\"color:red;font-size: 150%;\">Yes, objects.</p>\n\n%%HTML\n<p style=\"color:red;font-size: 150%;\">As soon as you use the keyword class, Python executes it and creates an OBJECT. The instruction</p>\n\nclass ObjectCreator(object):\n pass", "creates in memory an object with the name \"ObjectCreator\".", "%%HTML\n\n<p style=\"color:red;font-size: 150%;\">This object (the class) is itself capable of creating objects (the instances), and this is why it's a class.</p>", "But still, it's an object, and therefore:\n\nyou can assign it to a variable", "object_creator_class = ObjectCreator\nprint(object_creator_class)", "you can copy it", "from copy import copy\nObjectCreatorCopy = copy(ObjectCreator)\nprint(ObjectCreatorCopy)\nprint(\"copy ObjectCreatorCopy is not ObjectCreator: \", ObjectCreatorCopy is not ObjectCreator)\nprint(\"variable object_creator_class is ObjectCreator: \", object_creator_class is ObjectCreator)", "you can add attributes to it", "print(\"ObjectCreator has an attribute 'new_attribute': \", hasattr(ObjectCreator, 'new_attribute'))\n\nObjectCreator.new_attribute = 'foo' # you can add attributes to a class\nprint(\"ObjectCreator has an attribute 'new_attribute': \", hasattr(ObjectCreator, 'new_attribute'))\n\nprint(\"attribute 'new_attribute': \", ObjectCreator.new_attribute)", "you can pass it as a function parameter", "def echo(o):\n print(o)\n\n# you can pass a class as a parameter\nprint(\"return value of passing Object Creator to {}: \".format(echo), echo(ObjectCreator)) \n\n%%HTML\n\n<p style=\"color:red;font-size: 150%;\">Since classes are objects, you can create them on the fly, like any object.</p>\n\ndef get_class_by(name):\n class Foo:\n pass\n class Bar:\n pass\n classes = {\n 'foo': Foo,\n 'bar': Bar\n }\n return classes.get(name, None)\n\nfor class_ in (get_class_by(name) for name in ('foo', 'bar', )):\n pprint(class_)", "But it's not so dynamic, since you still have to write the whole class yourself.\nSince classes are objects, they must be generated by something.\nWhen you use the class keyword, Python creates this object automatically. But as with most things in Python, it gives you a way to do it manually.\nRemember the function type? 
The good old function that lets you know what type an object is:", "print(type(1))\n\nprint(type(\"1\"))\n\nprint(type(int))\n\nprint(type(ObjectCreator))\n\nprint(type(type))", "Well, type has a completely different ability, it can also create classes on the fly. type can take the description of a class as parameters, and return a class.", "classes = Foo, Bar = [type(name, (), {}) for name in ('Foo', 'Bar')]\n\nfor class_ in classes:\n pprint(class_)", "type accepts a dictionary to define the attributes of the class. So:", "classes_with_attributes = Foo, Bar = [type(name, (), namespace) \n for name, namespace \n in zip(\n ('Foo', 'Bar'), \n (\n {'assigned_attr': 'foo_attr'}, \n {'assigned_attr': 'bar_attr'}\n )\n )\n ]\n\nfor class_ in classes_with_attributes:\n pprint([item for item in vars(class_).items()])", "Eventually you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.", "def an_added_function(self):\n return \"I am an added function.\"\n\nFoo.added = an_added_function\nfoo = Foo()\nprint(foo.added())", "You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.", "%%HTML\n<p style=\"color:red;font-size: 150%;\">[Creating a class on the fly, dynamically] is what Python does when you use the keyword class, and it does so by using a metaclass.</p>\n\n%%HTML\n<p style=\"color:red;font-size: 150%;\">Metaclasses are the 'stuff' that creates classes.</p>", "You define classes in order to create objects, right?\nBut we learned that Python classes are objects.", "%%HTML\n\n<p style=\"color:red;font-size: 150%;\">Well, metaclasses are what create these objects. They are the classes' classes.</p>\n\n%%HTML\n\n<p style=\"color:red;font-size: 150%;\">Everything, and I mean everything, is an object in Python. That includes ints, strings, functions and classes. All of them are objects. And all of them have been created from a class (which is also an object).</p>", "Changing to blog post entitled Python 3 OOP Part 5—Metaclasses\nobject, which inherits from nothing.\nreminds me of Eastern teachings of 'sunyata': \nemptiness, voidness, openness, nonexistence, thusness, etc.\n```python\n\n\n\na = 5\ntype(a)\n<class 'int'>\na.class\n<class 'int'>\na.class.bases\n(<class 'object'>,)\nobject.bases\n() # object, which inherits from nothing.\ntype(a)\n<class 'int'>\ntype(int)\n<class 'type'>\ntype(float)\n<class 'type'>\ntype(dict)\n<class 'type'>\ntype(object)\n<class 'type'>\ntype.bases\n(<class 'object'>,)\n```\n\n\nWhen you think you grasped the type/object matter read this and start thinking again\n\n```python\n\n\n\ntype(type)\n<class 'type'>\n```", "class MyType(type):\n pass\n\nclass MySpecialClass(metaclass=MyType):\n pass\n\n\nmsp = MySpecialClass()\n\ntype(msp)\n\ntype(MySpecialClass)\n\ntype(MyType)", "Metaclasses are a very advanced topic in Python, but they have many practical uses. For example, by means of a custom metaclass you may log any time a class is instanced, which can be important for applications that shall keep a low memory usage or have to monitor it.", "%%HTML\n\n<p style=\"color:red;font-size: 150%;\">\"Build a class\"? This is a task for metaclasses. 
The following implementation comes from Python 3 Patterns, Recipes and Idioms.</p>\n\nclass Singleton(type):\n instance = None\n def __call__(cls, *args, **kwargs):\n if not cls.instance:\n cls.instance = super(Singleton, cls).__call__(*args, **kwargs)\n return cls.instance\n\nclass ASingleton(metaclass=Singleton):\n pass\n\na = ASingleton()\nb = ASingleton()\nprint(a is b)\n\nprint(hex(id(a)))\nprint(hex(id(b)))", "The constructor mechanism in Python is on the contrary very important, and it is implemented by two methods, instead of just one: new() and init().", "%%HTML\n\n<p style=\"color:red;font-size: 150%;\">The tasks of the two methods are very clear and distinct: __new__() shall perform actions needed when creating a new instance while __init__ deals with object initialization.</p>\n\nclass MyClass:\n def __new__(cls, *args, **kwargs):\n obj = super().__new__(cls, *args, **kwargs)\n # do something here\n obj.one = 1\n return obj # instance of the container class, so __init__ is called\n\n%%HTML\n<p style=\"color:red;font-size: 150%;\"> Anyway, __init__() will be called only if you return an instance of the container class. </p>\n\nmy_class = MyClass()\n\nmy_class.one\n\nclass MyInt:\n def __new__(cls, *args, **kwargs):\n obj = super().__new__(cls, *args, **kwargs)\n obj.join = ':'.join\n return obj\n\nmi = MyInt()\nprint(mi.join(str(n) for n in range(10)))", "Subclassing int\n\nObject creation is behaviour. For most classes it is enough to provide a different __init__ method, but for immutable classes one often have to provide a different __new__ method.\nIn this subsection, as preparation for enumerated integers, we will start to code a subclass of int that behave like bool. We will start with string representation, which is fairly easy.", "class MyBool(int):\n def __repr__(self):\n return 'MyBool.' + ['False', 'True'][self]\n\nt = MyBool(1)\n\nt\n\nbool(2) == 1\n\nMyBool(2) == 1\n\n%%HTML\n<p style=\"color:red;font-size: 150%;\">In many classes we use __init__ to mutate the newly constructed object, typically by storing or otherwise using the arguments to __init__. But we can’t do this with a subclass of int (or any other immuatable) because they are immutable.</p>", "The solution to the problem is to use new. Here we will show that it works, and later we will explain elsewhere exactly what happens.", "bool.__doc__\n\nclass NewBool(int):\n def __new__(cls, value):\n # bool \n return int.__new__(cls, bool(value))\n\ny = NewBool(56)\ny == 1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jpzhangvincent/MobileAppMarketAnalysis
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
mit
[ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\napp = pd.read_pickle('/Users/krystal/Desktop/app_clean.p')\napp.head()\n\napp = app.drop_duplicates()\n\n\napp['overall reviews'] = map(lambda x: int(x) if x!='' else np.nan, app['overall reviews'])\napp['overall rating'] = map(lambda x: float(x) if x!='' else np.nan, app['overall rating'])\napp['current rating'] = map(lambda x: float(x) if x!='' else np.nan, app['current rating'])", "<b>Question 4 Do multiple languages influent the reviews of apps?</b>", "multi_language = app.loc[app['multiple languages'] == 'Y']\nsin_language = app.loc[app['multiple languages'] == 'N']\nmulti_language['overall rating'].plot(kind = \"density\")\nsin_language['overall rating'].plot(kind = \"density\")\nplt.xlabel('Overall Rating')\nplt.legend(labels = ['multiple languages','single language'], loc='upper right')\nplt.title('Distribution of overall rating among apps with multiple/single languages')\nplt.show()", "<p>First, the data set is splitted into two parts, one is app with multiple languages and another is app with single language. Then the density plots for the two subsets are made and from the plots we can see that the overall rating of apps with multiple languages is generally higher than the overall rating of apps with single language. Some specific tests are still needed to perform.</p>", "import scipy.stats\n\nmulti_language = list(multi_language['overall rating'])\nsin_language = list(sin_language['overall rating'])\n\nmultiple = []\nsingle = []\nfor each in multi_language:\n if each > 0:\n multiple.append(each)\nfor each in sin_language:\n if each > 0:\n single.append(each)\n\nprint(np.mean(multiple))\nprint(np.mean(single))\n\nscipy.stats.ttest_ind(multiple, single, equal_var = False)", "<p>I perform t test here. We have two samples here, one is apps with multiple languages and another is apps with single language. So I want to test whether the mean overall rating for these two samples are different.</p>\n\n<p>The null hypothesis is mean overall rating for apps with multiple languages and mean overall rating for apps with single language are the same and the alternative hypothesis is that the mean overall rating for these two samples are not the same.</p>\n\n<p>From the result we can see that the p value is 1.7812330368645647e-26, which is smaller than 0.05, so we should reject null hypothesis at significance level 0.05, that is, we should conclude that the mean of overall rating for these two samples are not the same and multiple languages do influent the rating of an app.</p>", "scipy.stats.f_oneway(multiple, single)", "<p>I also perform one-way ANOVA test here.</p>\n\n<p>The null hypothesis is mean overall rating for apps with multiple languages and mean overall rating for apps with single language are the same and the alternative hypothesis is that the mean overall rating for these two samples are not the same.</p>\n\n<p>From the result we can see that the p value is 3.0259308024434954e-26, which is smaller than 0.05, so we should reject null hypothesis at significance level 0.05, that is, we should conclude that the mean of overall rating for these two samples are not the same and multiple languages do influent the rating of an app.</p>", "scipy.stats.kruskal(multiple, single)", "<p>I perform Kruskal-Wallis H-test here, which is a non-parametric version of ANOVA. 
Since the t test and the one-way ANOVA test both assume that the samples come from a normally distributed population, here we use this test, which does not need these assumptions but loses some power.</p>\n\n<p>The null hypothesis is that the mean overall rating for apps with multiple languages and the mean overall rating for apps with a single language are the same, and the alternative hypothesis is that the two means are not the same.</p>\n\n<p>From the result we can see that the p value is 3.9085109588433391e-25, which is smaller than 0.05, so we should reject the null hypothesis at significance level 0.05; that is, we should conclude that the mean overall ratings of these two samples are not the same and that multiple languages do influence the rating of an app.</p>\n\n<b>In general, from the results of these three tests, we can conclude that providing multiple languages does influence the rating of an app, and the association needs further exploration.</b>" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Centre-Alt-Rendiment-Esportiu/att
notebooks/Hit Processor.ipynb
gpl-3.0
[ "<h1>Hit Processor</h1>\n<hr style=\"border: 1px solid #000;\">\n<span>\n<h2>ATT raw Hit processor.</h2>\n</span>\n<br>\n<span>\nThis notebook shows how the hit processor works.<br>\nThe Hit processors aim is to parse the raw hit readings from the serial port.\n</span>\n<span>\nSet modules path first:\n</span>", "import sys\n#sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/')\nsys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')", "<span>\nLet's parse\n</span>", "from hit.process.processor import ATTMatrixHitProcessor\nfrom hit.process.processor import ATTPlainHitProcessor\n\nplainProcessor = ATTPlainHitProcessor()\nmatProcessor = ATTMatrixHitProcessor()", "<span>\nParse a Hit with Plain Processor\n</span>", "plainHit = plainProcessor.parse_hit(\"hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}\")\nprint plainHit", "<span>\nCompute diffs:\n</span>", "plainDiffs = plainProcessor.hit_diffs(plainHit[\"sensor_timings\"])\nprint plainDiffs", "<span>\nParse a Hit with Matrix Processor\n</span>", "matHit = matProcessor.parse_hit(\"hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}\")\nprint matHit", "<span>\nCompute diffs:\n</span>", "matDiffs = matProcessor.hit_diffs((matHit[\"sensor_timings\"]))\nprint matDiffs\n\nmatDiffs" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
roatienza/Deep-Learning-Experiments
versions/2022/tools/python/einsum_demo.ipynb
mit
[ "Illustrates numpy vs einsum\nIn deep learning, we perform a lot of tensor operations. einsum simplifies and unifies the APIs for these operations.\neinsum can be found in numerical computation libraries and deep learning frameworks.\nLet us demonstrate how to import and use einsum in numpy, TensorFlow and PyTorch.", "import numpy as np\n\nfrom numpy import einsum\n\nw = np.arange(6).reshape(2,3).astype(np.float32)\nx = np.ones((3,1), dtype=np.float32)\n\nprint(\"w:\\n\", w)\nprint(\"x:\\n\", x)\n\ny = np.matmul(w, x)\nprint(\"y:\\n\", y)\n\ny = einsum('ij,jk->ik', torch.from_numpy(w), torch.from_numpy(x))\nprint(\"y:\\n\", y)\n", "Tensor multiplication with transpose in numpy and einsum", "w = np.arange(6).reshape(2,3).astype(np.float32)\nx = np.ones((1,3), dtype=np.float32)\n\nprint(\"w:\\n\", w)\nprint(\"x:\\n\", x)\n\ny = np.matmul(w, np.transpose(x))\nprint(\"y:\\n\", y)\n\ny = einsum('ij,kj->ik', w, x)\nprint(\"y:\\n\", y)", "Properties of square matrices in numpy and einsum\nWe demonstrate diagonal.", "w = np.arange(9).reshape(3,3).astype(np.float32)\nd = np.diag(w)\nprint(\"w:\\n\", w)\nprint(\"d:\\n\", d)\nd = einsum('ii->i', w)\nprint(\"d:\\n\", d)", "Trace.", "t = np.trace(w)\nprint(\"t:\\n\", t)\n\nt = einsum('ii->', w)\nprint(\"t:\\n\", t)", "Sum along an axis.", "s = np.sum(w, axis=0)\nprint(\"s:\\n\", s)\n\ns = einsum('ij->j', w)\nprint(\"s:\\n\", s)", "Let us demonstrate tensor transpose. We can also use w.T to transpose w in numpy.", "t = np.transpose(w)\nprint(\"t:\\n\", t)\n\nt = einsum(\"ij->ji\", w)\nprint(\"t:\\n\", t)", "Dot, inner and outer products in numpy and einsum.", "a = np.ones((3,), dtype=np.float32)\nb = np.ones((3,), dtype=np.float32) * 2\n\nprint(\"a:\\n\", a)\nprint(\"b:\\n\", b)\n\nd = np.dot(a,b)\nprint(\"d:\\n\", d)\nd = einsum(\"i,i->\", a, b)\nprint(\"d:\\n\", d)\n\ni = np.inner(a, b)\nprint(\"i:\\n\", i)\ni = einsum(\"i,i->\", a, b)\nprint(\"i:\\n\", i)\n\no = np.outer(a,b)\nprint(\"o:\\n\", o)\no = einsum(\"i,j->ij\", a, b)\nprint(\"o:\\n\", o)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bpgc-cte/python2017
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
mit
[ "Object Oriented Programming - Inheritance, Overloading and Overidding\nConstructor Overloading", "class Student():\n def __init__(self, name, id_no=None):\n self.name = name\n self.id_no = id_no if id_no is not None else \"Not Allocated\"\n \n def __str__(self):\n s = self.name\n return s + \"\\n\" + \"Name : \" + self.name + \" , ID : \" + self.id_no\n\n def __add__(self, a):\n return self.name + a.name\n \n def __eq__(self, a):\n return self.id_no == a.id_no\n \nA = Student(\"Sebastin\", \"2015B4A70370G\")\nB = Student(\"Mayank\", \"2015B4A70370G\")\nprint(A)\nprint(B)\nprint(A + B)\nprint(A.__add__(B))\nprint(A == B)", "Inheritance\nInheritance is an OOP practice where a certain class(called subclass/child class) inherits the properties namely data and behaviour of another class(called superclass/parent class). Let us see through an example.", "# BITSian class\nclass BITSian():\n def __init__(self, name, id_no, hostel):\n self.name = name\n self.id_no = id_no\n self.hostel = hostel\n \n def get_name(self):\n return self.name\n \n def get_id(self):\n return self.id_no\n \n def get_hostel(self):\n return self.hostel\n\n\n# IITian class\nclass IITian():\n def __init__(self, name, id_no, hall):\n self.name = name\n self.id_no = id_no\n self.hall = hall\n \n def get_name(self):\n return self.name\n \n def get_id(self):\n return self.id_no\n \n def get_hall(self):\n return self.hall", "While writing code you must always make sure that you keep it as concise as possible and avoid any sort of repitition. Now, we can clearly see the commonalitites between BITSian and IITian classes.\nIt would be natural to assume that every college student whether from BITS or IIT or pretty much any other institution in the world will have a name and a unique ID number.\nSuch a degree of commonality means that there could be a higher level of abstraction to describe both BITSian and IITian to a decent extent.", "class CollegeStudent():\n def __init__(self, name, id_no):\n self.name = name\n self.id_no = id_no\n \n def get_name(self):\n return self.name\n \n def get_id(self):\n return self.id_no\n\n# BITSian class\nclass BITSian(CollegeStudent):\n def __init__(self, name, id_no, hostel):\n self.name = name\n self.id_no = id_no\n self.hostel = hostel\n \n def get_hostel(self):\n return self.hostel\n\n\n# IITian class\nclass IITian(CollegeStudent):\n def __init__(self, name, id_no, hall):\n self.name = name\n self.id_no = id_no\n self.hall = hall\n \n def get_hall(self):\n return self.hall\n\na = BITSian(\"Arif\", \"2015B4A70370G\", \"AH-5\")\nb = IITian(\"Abhishek\", \"2213civil32K\", \"Hall-10\")\nprint(a.get_name())\nprint(b.get_name())\nprint(a.get_hostel())\nprint(b.get_hall())", "So, the class definition is as such : class SubClassName(SuperClassName):\nUsing super()\nThe main usage of super() in Python is to refer to parent classes without naming them expicitly. 
This becomes really useful in multiple inheritance where you won't have to worry about parent class name.", "class Student():\n def __init__(self, name):\n self.name = name\n \n def get_name(self):\n return self.name\n\nclass CollegeStudent(Student):\n def __init__(self, name, id_no):\n super().__init__(name)\n self.id_no = id_no\n \n def get_id(self):\n return self.id_no\n \n# BITSian class\nclass BITSian(CollegeStudent):\n def __init__(self, name, id_no, hostel):\n super().__init__(name, id_no)\n self.hostel = hostel\n \n def get_hostel(self):\n return self.hostel\n\n\n# IITian class\nclass IITian(CollegeStudent):\n def __init__(self, name, id_no, hall):\n super().__init__(name, id_no)\n self.hall = hall\n \n def get_hall(self):\n return self.hall\n \na = BITSian(\"Arif\", \"2015B4A70370G\", \"AH-5\")\nb = IITian(\"Abhishek\", \"2213civil32K\", \"Hall-10\")\nprint(a.get_name())\nprint(b.get_name())\nprint(a.get_hostel())\nprint(b.get_hall())", "You may come across the following constructor call for a superclass on the net : super(self.__class__, self).__init__(). Please do not do this. It can lead to infinite recursion.\nGo through this link for more clarification : Understanding Python Super with init methods\nMethod Overidding\nThis is a phenomenon where a subclass method with the same name is executed in preference to it's superclass method with a similar name.", "class Student():\n def __init__(self, name):\n self.name = name\n \n def get_name(self):\n return \"Student : \" + self.name\n\nclass CollegeStudent(Student):\n def __init__(self, name, id_no):\n super().__init__(name)\n self.id_no = id_no\n \n def get_id(self):\n return self.id_no\n \n def get_name(self):\n return \"College Student : \" + self.name\n\nclass BITSian(CollegeStudent):\n def __init__(self, name, id_no, hostel):\n super().__init__(name, id_no)\n self.hostel = hostel\n \n def get_hostel(self):\n return self.hostel\n \n def get_name(self):\n return \"Gen BITSian --> \" + self.name\n\nclass IITian(CollegeStudent):\n def __init__(self, name, id_no, hall):\n super().__init__(name, id_no)\n self.hall = hall\n \n def get_hall(self):\n return self.hall\n \n def get_name(self):\n return \"IITian --> \" + self.name\n\na = BITSian(\"Arif\", \"2015B4A70370G\", \"AH-5\")\nb = IITian(\"Abhishek\", \"2213civil32K\", \"Hall-10\")\n\nprint(a.get_name())\nprint(b.get_name())\nprint()\nprint(super(BITSian, a).get_name())\nprint(super(IITian, b).get_name())\nprint(super(CollegeStudent, a).get_name())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dataewan/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. 
The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. 
\n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n return None \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. 
Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n \n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n \n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n \n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n \n \n # TODO: return output\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n pass\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. 
Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n pass", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = None\nbatch_size = None\nkeep_probability = None", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Diyago/Machine-Learning-scripts
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
apache-2.0
[ "Training Neural Networks\nThe network we built in the previous part isn't so smart, it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function given enough data and compute time.\n<img src=\"assets/function_approx.png\" width=500px>\nAt first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.\nTo find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a loss function (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems\n$$\n\\large \\ell = \\frac{1}{2n}\\sum_i^n{\\left(y_i - \\hat{y}_i\\right)^2}\n$$\nwhere $n$ is the number of training examples, $y_i$ are the true labels, and $\\hat{y}_i$ are the predicted labels.\nBy minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called gradient descent. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.\n<img src='assets/gradient_descent.png' width=350px>\nBackpropagation\nFor single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.\nTraining multilayer networks is done through backpropagation which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.\n<img src='assets/backprop_diagram.png' width=550px>\nIn the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.\nTo train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. 
Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.\n$$\n\\large \\frac{\\partial \\ell}{\\partial W_1} = \\frac{\\partial L_1}{\\partial W_1} \\frac{\\partial S}{\\partial L_1} \\frac{\\partial L_2}{\\partial S} \\frac{\\partial \\ell}{\\partial L_2}\n$$\nNote: I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.\nWe update our weights using this gradient with some learning rate $\\alpha$. \n$$\n\\large W^\\prime_1 = W_1 - \\alpha \\frac{\\partial \\ell}{\\partial W_1}\n$$\nThe learning rate $\\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.\nLosses in PyTorch\nLet's start by seeing how we calculate the loss with PyTorch. Through the nn module, PyTorch provides losses such as the cross-entropy loss (nn.CrossEntropyLoss). You'll usually see the loss assigned to criterion. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.\nSomething really important to note here. Looking at the documentation for nn.CrossEntropyLoss,\n\nThis criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.\nThe input is expected to contain scores for each class.\n\nThis means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the logits or scores. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one (read more here). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.", "import torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torchvision import datasets, transforms\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\n# Download and load the training data\ntrainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128, 64),\n nn.ReLU(),\n nn.Linear(64, 10))\n\n# Define the loss\ncriterion = nn.CrossEntropyLoss()\n\n# Get our data\nimages, labels = next(iter(trainloader))\n# Flatten images\nimages = images.view(images.shape[0], -1)\n\n# Forward pass, get our logits\nlogits = model(images)\n# Calculate the loss with the logits and the labels\nloss = criterion(logits, labels)\n\nprint(loss)", "In my experience it's more convenient to build the model with a log-softmax output using nn.LogSoftmax or F.log_softmax (documentation). Then you can get the actual probabilites by taking the exponential torch.exp(output). 
With a log-softmax output, you want to use the negative log likelihood loss, nn.NLLLoss (documentation).\n\nExercise: Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss.", "## Solution\n\n# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128, 64),\n nn.ReLU(),\n nn.Linear(64, 10),\n nn.LogSoftmax(dim=1))\n\n# Define the loss\ncriterion = nn.NLLLoss()\n\n# Get our data\nimages, labels = next(iter(trainloader))\n# Flatten images\nimages = images.view(images.shape[0], -1)\n\n# Forward pass, get our log-probabilities\nlogps = model(images)\n# Calculate the loss with the logps and the labels\nloss = criterion(logps, labels)\n\nprint(loss)", "Autograd\nNow that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, autograd, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set requires_grad = True on a tensor. You can do this at creation with the requires_grad keyword, or at any time with x.requires_grad_(True).\nYou can turn off gradients for a block of code with the torch.no_grad() content:\n```python\nx = torch.zeros(1, requires_grad=True)\n\n\n\nwith torch.no_grad():\n... y = x * 2\ny.requires_grad\nFalse\n```\n\n\n\nAlso, you can turn on or off gradients altogether with torch.set_grad_enabled(True|False).\nThe gradients are computed with respect to some variable z with z.backward(). This does a backward pass through the operations that created z.", "x = torch.randn(2,2, requires_grad=True)\nprint(x)\n\ny = x**2\nprint(y)", "Below we can see the operation that created y, a power operation PowBackward0.", "## grad_fn shows the function that generated this variable\nprint(y.grad_fn)", "The autgrad module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor y to a scalar value, the mean.", "z = y.mean()\nprint(z)", "You can check the gradients for x and y but they are empty currently.", "print(x.grad)", "To calculate the gradients, you need to run the .backward method on a Variable, z for example. This will calculate the gradient for z with respect to x\n$$\n\\frac{\\partial z}{\\partial x} = \\frac{\\partial}{\\partial x}\\left[\\frac{1}{n}\\sum_i^n x_i^2\\right] = \\frac{x}{2}\n$$", "z.backward()\nprint(x.grad)\nprint(x/2)", "These gradients calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then, go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step. \nLoss and Autograd together\nWhen we create a network with PyTorch, all of the parameters are initialized with requires_grad = True. This means that when we calculate the loss and call loss.backward(), the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. 
Below you can see an example of calculating the gradients using a backwards pass.", "# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128, 64),\n nn.ReLU(),\n nn.Linear(64, 10),\n nn.LogSoftmax(dim=1))\n\ncriterion = nn.NLLLoss()\nimages, labels = next(iter(trainloader))\nimages = images.view(images.shape[0], -1)\n\nlogps = model(images)\nloss = criterion(logps, labels)\n\nprint('Before backward pass: \\n', model[0].weight.grad)\n\nloss.backward()\n\nprint('After backward pass: \\n', model[0].weight.grad)", "Training the network!\nThere's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's optim package. For example we can use stochastic gradient descent with optim.SGD. You can see how to define an optimizer below.", "from torch import optim\n\n# Optimizers require the parameters to optimize and a learning rate\noptimizer = optim.SGD(model.parameters(), lr=0.01)", "Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:\n\nMake a forward pass through the network \nUse the network output to calculate the loss\nPerform a backward pass through the network with loss.backward() to calculate the gradients\nTake a step with the optimizer to update the weights\n\nBelow I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code optimizer.zero_grad(). When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.", "print('Initial weights - ', model[0].weight)\n\nimages, labels = next(iter(trainloader))\nimages.resize_(64, 784)\n\n# Clear the gradients, do this because gradients are accumulated\noptimizer.zero_grad()\n\n# Forward pass, then backward pass, then update weights\noutput = model(images)\nloss = criterion(output, labels)\nloss.backward()\nprint('Gradient -', model[0].weight.grad)\n\n# Take an update step and few the new weights\noptimizer.step()\nprint('Updated weights - ', model[0].weight)", "Training for real\nNow we'll put this algorithm into a loop so we can go through all the images. Some nomenclature, one pass through the entire dataset is called an epoch. So here we're going to loop through trainloader to get our training batches. For each batch, we'll doing a training pass where we calculate the loss, do a backwards pass, and update the weights.\n\nExercise: Implement the training pass for our network. 
If you implemented it correctly, you should see the training loss drop with each epoch.", "model = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128, 64),\n nn.ReLU(),\n nn.Linear(64, 10),\n nn.LogSoftmax(dim=1))\n\ncriterion = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.003)\n\nepochs = 5\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n # Flatten MNIST images into a 784 long vector\n images = images.view(images.shape[0], -1)\n \n # TODO: Training pass\n optimizer.zero_grad()\n \n output = model(images)\n loss = criterion(output, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n else:\n print(f\"Training loss: {running_loss/len(trainloader)}\")", "With the network trained, we can check out it's predictions.", "%matplotlib inline\nimport helper\n\nimages, labels = next(iter(trainloader))\n\nimg = images[0].view(1, 784)\n# Turn off gradients to speed up this part\nwith torch.no_grad():\n logps = model(img)\n\n# Output of the network are log-probabilities, need to take exponential for probabilities\nps = torch.exp(logps)\nhelper.view_classify(img.view(1, 28, 28), ps)", "Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
banyh/ShareIPythonNotebook
Gensim - Word2Vec.ipynb
gpl-3.0
[ "Prepare Dictionary and Corpus", "import re, json, os, nltk, string, gensim, bz2\nfrom gensim import corpora, models, similarities, utils\nfrom nltk.corpus import stopwords\nfrom os import listdir\nfrom datetime import datetime as dt\nimport numpy as np\nimport codecs\nimport sys\nstdin, stdout, stderr = sys.stdin, sys.stdout, sys.stderr\nreload(sys)\nsys.stdin, sys.stdout, sys.stderr = stdin, stdout, stderr\nsys.setdefaultencoding('utf-8')\n\nimport logging\nfmtstr = '%(asctime)s [%(levelname)s][%(name)s] %(message)s'\ndatefmtstr = '%Y/%m/%d %H:%M:%S'\nlog_fn = str(dt.now().date()) + '.txt'\nlogger = logging.getLogger()\nif len(logger.handlers) >= 1:\n logger.removeHandler(a.handlers[0])\n logger.addHandler(logging.FileHandler(log_fn))\n logger.handlers[0].setFormatter(logging.Formatter(fmtstr, datefmtstr))\nelse:\n logging.basicConfig(filename=log_fn, format=fmtstr,\n datefmt=datefmtstr, level=logging.NOTSET)\n\nstop_words = set(stopwords.words('english'))\n\ndef docs_out(line):\n j = json.loads(line)\n tmp = j.get('brief') + j.get('claim') + j.get('description')\n tmp = re.sub('([,?!:;%$&*#~\\<\\>=+/\"(){}\\[\\]\\'])',' ',tmp)\n tmp = tmp.replace(u\"\\u2018\", \" \").replace(u\"\\u2019\", \" \").replace(u\"\\u201c\",\" \").replace(u\"\\u201d\", \" \")\n tmp = tmp.replace(u\"\\u2022\", \" \").replace(u\"\\u2013\", \" \").replace(u\"\\u2014\", \" \").replace(u\"\\u2026\", \" \")\n tmp = tmp.replace(u\"\\u20ac\", \" \").replace(u\"\\u201a\", \" \").replace(u\"\\u201e\", \" \").replace(u\"\\u2020\", \" \")\n tmp = tmp.replace(u\"\\u2021\", \" \").replace(u\"\\u02C6\", \" \").replace(u\"\\u2030\", \" \").replace(u\"\\u2039\", \" \")\n tmp = tmp.replace(u\"\\u02dc\", \" \").replace(u\"\\u203a\", \" \").replace(u\"\\ufffe\", \" \").replace(u\"\\u00b0\", \" \")\n tmp = tmp.replace(u\"\\u00b1\", \" \").replace(u\"\\u0020\", \" \").replace(u\"\\u00a0\", \" \").replace(u\"\\u1680\", \" \")\n tmp = tmp.replace(u\"\\u2000\", \" \").replace(u\"\\u2001\", \" \").replace(u\"\\u2002\", \" \").replace(u\"\\u2003\", \" \")\n tmp = tmp.replace(u\"\\u2004\", \" \").replace(u\"\\u2005\", \" \").replace(u\"\\u2006\", \" \").replace(u\"\\u2007\", \" \")\n tmp = tmp.replace(u\"\\u2008\", \" \").replace(u\"\\u2009\", \" \").replace(u\"\\u200a\", \" \").replace(u\"\\u202f\", \" \")\n tmp = tmp.replace(u\"\\u205f\", \" \").replace(u\"\\u3000\", \" \").replace(u\"\\u20ab\", \" \").replace(u\"\\u201b\", \" \")\n tmp = tmp.replace(u\"\\u201f\", \" \").replace(u\"\\u2e02\", \" \").replace(u\"\\u2e04\", \" \").replace(u\"\\u2e09\", \" \")\n tmp = tmp.replace(u\"\\u2e0c\", \" \").replace(u\"\\u2e1c\", \" \").replace(u\"\\u2e20\", \" \").replace(u\"\\u00bb\", \" \")\n tmp = tmp.replace(u\"\\u2e03\", \" \").replace(u\"\\u2e05\", \" \").replace(u\"\\u2e0a\", \" \").replace(u\"\\u2e0d\", \" \")\n tmp = tmp.replace(u\"\\u2e1d\", \" \").replace(u\"\\u2e21\", \" \").replace(u\"\\u2032\", \" \").replace(u\"\\u2031\", \" \")\n tmp = tmp.replace(u\"\\u2033\", \" \").replace(u\"\\u2034\", \" \").replace(u\"\\u2035\", \" \").replace(u\"\\u2036\", \" \")\n tmp = tmp.replace(u\"\\u2037\", \" \").replace(u\"\\u2038\", \" \")\n tmp = re.sub('[.] 
',' ',tmp)\n return tmp, j.get('patentNumber')\n\ndocuments = []\nf = codecs.open('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized','r', 'UTF-8')\nfor line in f:\n documents.append(''.join(docs_out(line)[0]) + '\\n')\n\ndictionary = corpora.Dictionary([doc.split() for doc in documents])\n\nstop_ids = [dictionary.token2id[stopword] for stopword in stop_words\n if stopword in dictionary.token2id]\nonce_ids = [tokenid for tokenid, docfreq in dictionary.dfs.iteritems() if docfreq <= 1]\ndictionary.filter_tokens(stop_ids + once_ids)\ndictionary.compactify()\n#dictionary.save('USPTO_2013.dict')\n\ncorpus = [dictionary.doc2bow(doc.split()) for doc in documents]", "Build LSI Model", "model_tfidf = models.TfidfModel(corpus)\ncorpus_tfidf = model_tfidf[corpus]", "LsiModel的參數\n\nnum_topics=200: 設定SVD分解後要保留的維度\nid2word: 提供corpus的字典,方便將id轉換為word\nchunksize=20000: 在記憶體中一次處理的量,值越大則占用記憶體越多,處理速度也越快\ndecay=1.0: 因為資料會切成chunk來計算,所以會分成新舊資料,當新的chunk進來時,decay是舊chunk的加權,如果設小於1.0的值,則舊的資料會慢慢「遺忘」\ndistributed=False: 是否開啟分散式計算,每個core會分到一塊chunk\nonepass=True: 設為False強制使用multi-pass stochastic algoritm\npower_iters=2: 在multi-pass時設定power iteration,越大則accuracy越高,但時間越久\n\n令$X$代表corpus的TF-IDF矩陣,作完SVD分解後,會得到左矩陣lsi.projection.u及singular value lsi.projection.s。\n$X = USV^T$, where $U \\in \\mathbb{R}^{|V|\\times m}$, $S \\in \\mathbb{R}^{m\\times m}$, $V \\in \\mathbb{R}^{m\\times |D|}$\nlsi[X]等同於$U^{-1}X=VS$。所以要求$V$的值,可以用$S^{-1}U^{-1}X$,也就是lsi[X]除以$S$。\n因為lsi[X]本身沒有值,只是一個generator,要先透過gensim.matutils.corpus2dense轉換成numpy array,再除以lsi.projection.s。", "model_lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=200)\ncorpus_lsi = model_lsi[corpus_tfidf]\n\n# 計算V的方法,可以作為document vector\ndocvec_lsi = gensim.matutils.corpus2dense(corpus_lsi, len(model_lsi.projection.s)).T / model_lsi.projection.s\n\n# word vector直接用U的column vector\nwordsim_lsi = similarities.MatrixSimilarity(model_lsi.projection.u, num_features=model_lsi.projection.u.shape[1])\n\n# 第二個版本,word vector用U*S\nwordsim_lsi2 = similarities.MatrixSimilarity(model_lsi.projection.u * model_lsi.projection.s,\n num_features=model_lsi.projection.u.shape[1])\n\ndef lsi_query(query, use_ver2=False):\n qvec = model_lsi[model_tfidf[dictionary.doc2bow(query.split())]]\n if use_ver2:\n s = wordsim_lsi2[qvec]\n else:\n s = wordsim_lsi[qvec]\n return [dictionary[i] for i in s.argsort()[-10:]]\n\nprint lsi_query('energy')\n\nprint lsi_query('energy', True)", "Build Word2Vec Model\nWord2Vec的參數\n\nsentences: 用來訓練的list of list of words,但不是必要的,因為可以先建好model,再慢慢丟資料訓練\nsize=100: vector的維度\nalpha=0.025: 初始的學習速度\nwindow=5: context window的大小\nmin_count=5: 出現次數小於min_count的單字直接忽略\nmax_vocab_size: 限制vocabulary的大小,如果單字太多,就忽略最少見的單字,預設為無限制\nsample=0.001: subsampling,隨機刪除機率小於0.001的單字,兼具擴大context windows與減少stopword的功能\nseed=1: 隨機產生器的random seed\nworkers=3: 在多核心的系統上,要用幾個核心來train\nmin_alpha=0.0001: 學習速度最後收斂的最小值\nsg=0: 0表示用CBOW,1表示用skip-gram\nhs=0: 1表示用hierarchical soft-max,0表示用negative sampling\nnegative=5: 表示使用幾組negative sample來訓練\ncbow_mean=1: 在使用CBOW的前提下,0表示使用sum作為hidden layer,1表示使用mean作為hidden layer\nhashfxn=&lt;build-in hash function&gt;: 隨機初始化weights使用的hash function\niter=5: 整個corpus要訓練幾次\ntrim_rule: None表示小於min_count的單字會被忽略,也可以指定一個function(word, count, min_count),這個function的傳回值有三種,util.RULE_DISCARD、util.RULE_KEEP、util.RULE_DEFAULT。這個參數會影響dictionary的生成\nsorted_vocab=1: 1表示在指定word index前,先按照頻率將單字排序\nbatch_words=10000: 要傳給worker的單字長度\n\n訓練方法\n先產生一個空的model\nmodel_w2v = models.Word2Vec(size=200, sg=1)\n傳入一個list of words更新vocabulary\nsent = 
[['first','sent'], ['second','sent']]\nmodel_w2v.build_vocab(sent)\n傳入一個list of words更新model\nmodel_w2v.train(sent)", "all_text = [doc.split() for doc in documents]\n\nmodel_w2v = models.Word2Vec(size=200, sg=1)\n\n%timeit model_w2v.build_vocab(all_text)\n\n%timeit model_w2v.train(all_text)\n\nmodel_w2v.most_similar_cosmul(['deep','learning'])", "Build Doc2Vec Model\nDoc2Vec的參數\n\ndocuments=None: 用來訓練的document,可以是list of TaggedDocument,或TaggedDocument generator\nsize=300: vector的維度\nalpha=0.025: 初始的學習速度\nwindow=8: context window的大小\nmin_count=5: 出現次數小於min_count的單字直接忽略\nmax_vocab_size=None: 限制vocabulary的大小,如果單字太多,就忽略最少見的單字,預設為無限制\nsample=0: subsampling,隨機刪除機率小於sample的單字,兼具擴大context windows與減少stopword的功能\nseed=1: 隨機產生器的random seed\nworkers=1: 在多核心的系統上,要用幾個核心來train\nmin_alpha=0.0001: 學習速度最後收斂的最小值\nhs=1: 1表示用hierarchical soft-max,0表示用negative sampling\nnegative=0: 表示使用幾組negative sample來訓練\ndbow_words=0: 1表示同時訓練出word-vector(用skip-gram)及doc-vector(用DBOW),0表示只訓練doc-vector\ndm=1: 1表示用distributed memory(PV-DM)來訓練,0表示用distributed bag-of-word(PV-DBOW)來訓練\ndm_concat=0: 1表示不要sum/average而用concatenation of context vectors,0表示用sum/average。使用concatenation會產生較大的model,而且輸入的vector長度會變長\ndm_mean=0: 在使用DBOW而且dm_concat=0的前提下,0表示使用sum作為hidden layer,1表示使用mean作為hidden layer\ndm_tag_count=1: 當dm_concat=1時,預期每個document有幾個document tags\ntrim_rule=None: None表示小於min_count的單字會被忽略,也可以指定一個function(word, count, min_count),這個function的傳回值有三種,util.RULE_DISCARD、util.RULE_KEEP、util.RULE_DEFAULT。這個參數會影響dictionary的生成", "from gensim.models.doc2vec import Doc2Vec, TaggedDocument\n\nclass PatentDocGenerator(object):\n def __init__(self, filename):\n self.filename = filename\n \n def __iter__(self):\n f = codecs.open(self.filename, 'r', 'UTF-8')\n for line in f:\n text, appnum = docs_out(line)\n yield TaggedDocument(text.split(), appnum.split())\n\ndoc = PatentDocGenerator('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized')\n%timeit model_d2v = Doc2Vec(doc, size=200, window=8, sample=1e-5, hs=0, negative=5)\n\ndoc = PatentDocGenerator('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized')\nmodel_d2v = Doc2Vec(doc, size=200, window=8, sample=1e-5, hs=0, negative=5)\n\nmodel_d2v.docvecs.most_similar(['20140187118'])\n\nm = Doc2Vec(size=200, window=8, sample=1e-5, hs=0, negative=5)\n\nm.build_vocab(doc)\n\nm.train(doc)\n\nm.docvecs.most_similar(['20140187118'])", "Build Doc2Vec Model from 2013 USPTO Patents", "from gensim.models.doc2vec import Doc2Vec, TaggedDocument\n\nclass PatentDocGenerator(object):\n def __init__(self, filename):\n self.filename = filename\n \n def __iter__(self):\n f = codecs.open(self.filename, 'r', 'UTF-8')\n for line in f:\n text, appnum = docs_out(line)\n yield TaggedDocument(text.split(), appnum.split())\n\nmodel_d2v = Doc2Vec(size=200, window=8, sample=1e-5, hs=0, negative=5)\nroot = '/share/USPatentData/tokenized_appDate_2013/'\n\nfor fn in sorted(listdir(root)):\n doc = PatentDocGenerator(os.path.join(root, fn))\n start = dt.now()\n model_d2v.build_vocab(doc)\n model_d2v.train(doc)\n logging.info('{} training time: {}'.format(fn, str(dt.now() - start)))\n\nmodel_d2v.save(\"doc2vec_uspto_2013.model\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lmcinnes/umap
notebooks/AnimatingUMAP.ipynb
bsd-3-clause
[ "Making Animations of UMAP Hyper-parameters\nSometimes one of the best ways to see the effects of hyperparameters is simply to visualise what happens as they change. We can do that in practice with UMAP by simply creating an animation that transitions between embeddings generated with variations of hyperparameters. To do this we'll make use of matplotlib and its animation capabilities. Jake Vanderplas has a great tutorial if you want to know more about creating animations with matplotlib.\nNote:\nThis is a self contained example of how to use UMAP and the impact of individual hyper-parameters. To make sure everything works correctly please use conda.\nFor install and usage details see here\nTo create animations we need ffmpeg. It can be installed with conda.\nIf you already have ffmpeg installed on your machine and you know what you are doing you do not need conda. It is only used to install ffmpeg.\n=> Remove the next two cells if you are not using conda.", "!conda --version\n\n!conda install -c conda-forge ffmpeg -y\n\n!python --version", "To start we'll need some basic libraries. First numpy will be needed for basic array manipulation. Since we will be visualising the results we will need matplotlib and seaborn. Finally we will need umap for doing the dimension reduction itself.", "!pip install numpy matplotlib seaborn umap-learn", "To start let's load everything we'll need", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom matplotlib import animation\nfrom IPython.display import HTML\nimport seaborn as sns\nimport itertools\nsns.set(style='white', rc={'figure.figsize':(14, 12), 'animation.html': 'html5'})\n\n# Ignore UserWarnings\nimport warnings\nwarnings.simplefilter('ignore', UserWarning)\n\nfrom sklearn.datasets import load_digits\n\nfrom umap import UMAP", "To try this out we'll needs a reasonably small dataset (so embedding runs don't take too long since we'll be doing a lot of them). For ease of reproducibility for everyone else I'll use the digits dataset from sklearn. If you want to try other datasets just drop them in here -- COIL20 might be interesting, or you might have your own data.", "digits = load_digits()\ndata = digits.data\ndata", "We need to move the points in between the embeddings given by different parameter values. There are potentially fancy ways to do this (Something using rotation and reflection to get an initial alignment might be interesting), but we'll use straighforward linear interpolation between the two embeddings. To do this we'll need a simple function that can turn out intermediate embeddings for the in-between frames of the animation.", "def tween(e1, e2, n_frames=20):\n for i in range(5):\n yield e1\n for i in range(n_frames):\n alpha = i / float(n_frames - 1)\n yield (1 - alpha) * e1 + alpha * e2\n for i in range(5):\n yield(e2)\n return", "Now that we can fill in intermediate frame we just need to generate all the embeddings. 
We'll create a function that can take an argument and set of parameter values and then generate all the embeddings including the in-between frames.", "def generate_frame_data(data, arg_name='n_neighbors', arg_list=[]):\n result = []\n es = []\n for arg in arg_list:\n kwargs = {arg_name:arg}\n if len(es) > 0:\n es.append(UMAP(init=es[-1], negative_sample_rate=3, **kwargs).fit_transform(data))\n else:\n es.append(UMAP(negative_sample_rate=3, **kwargs).fit_transform(data))\n \n for e1, e2 in zip(es[:-1], es[1:]):\n result.extend(list(tween(e1, e2)))\n \n return result", "Next we just need to create a function to actually generate the animation given a list of embeddings (one for each frame). This is really just a matter of workign through the details of how matplotlib generates animations -- I would refer you again to Jake's tutorial if you are interested in the detailed mechanics of this.", "def create_animation(frame_data, arg_name='n_neighbors', arg_list=[]):\n fig, ax = plt.subplots()\n all_data = np.vstack(frame_data)\n frame_bounds = (all_data[:, 0].min() * 1.1, \n all_data[:, 0].max() * 1.1,\n all_data[:, 1].min() * 1.1, \n all_data[:, 1].max() * 1.1)\n ax.set_xlim(frame_bounds[0], frame_bounds[1])\n ax.set_ylim(frame_bounds[2], frame_bounds[3])\n points = ax.scatter(frame_data[0][:, 0], frame_data[0][:, 1], \n s=5, c=digits.target, cmap='Spectral', animated=True)\n title = ax.set_title('', fontsize=24)\n ax.set_xticks([])\n ax.set_yticks([])\n\n cbar = fig.colorbar(\n points,\n cax=make_axes_locatable(ax).append_axes(\"right\", size=\"5%\", pad=0.05),\n orientation=\"vertical\",\n values=np.arange(10),\n boundaries=np.arange(11)-0.5,\n ticks=np.arange(10),\n drawedges=True,\n )\n cbar.ax.yaxis.set_ticklabels(np.arange(10), fontsize=18)\n\n def init():\n points.set_offsets(frame_data[0])\n arg = arg_list[0]\n arg_str = f'{arg:.3f}' if isinstance(arg, float) else f'{arg}'\n title.set_text(f'UMAP with {arg_name}={arg_str}')\n return (points,)\n\n def animate(i):\n points.set_offsets(frame_data[i])\n if (i + 15) % 30 == 0:\n arg = arg_list[(i + 15) // 30]\n arg_str = f'{arg:.3f}' if isinstance(arg, float) else f'{arg}'\n title.set_text(f'UMAP with {arg_name}={arg_str}')\n return (points,)\n\n anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(frame_data), interval=20, blit=True)\n plt.close()\n return anim", "Finally a little bit of glue to make it all go together.", "def animate_param(data, arg_name='n_neighbors', arg_list=[]):\n frame_data = generate_frame_data(data, arg_name, arg_list)\n return create_animation(frame_data, arg_name, arg_list)", "Now we can create an animation. It will be embedded as an HTML5 video into this notebook.", "animate_param(data, 'n_neighbors', [3, 4, 5, 7, 10, 15, 25, 50, 100, 200])\n\nanimate_param(data, 'min_dist', [0.0, 0.01, 0.1, 0.2, 0.4, 0.6, 0.9])\n\nanimate_param(data, 'local_connectivity', [0.1, 0.2, 0.5, 1, 2, 5, 10])\n\nanimate_param(data, 'set_op_mix_ratio', np.linspace(0.0, 1.0, 10))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mrcinv/matpy
01a_enacbe.ipynb
gpl-2.0
[ "<< nazaj: Uvod\nEnačbe in neenačbe\nV tem delu si bomo ogledali različne pristope, kako se spopademo z enačbami. Spoznali bomo nekaj dodatnih knjižnic za python: SymPy, matplotlib in SciPy.\nSimbolično reševanje s SymPy\nSimbolično reševanje je reševanje enačb s simboli. Ločimo ga od numeričnega reševanja enačb, pri katerem računamo z decimalnimi približki števil. Na vajah navadno uporabljamo simbolično reševanje. Enačbo, ki jo rešujemo, mrcvarimo, dokler ni zapisana v obliki, iz katere lahko preprosto razberemo njeno rešitev. V Pythonu lahko nekaj podobnega počnemo s SymPy.\nPrimer\nPoišči vse rešitve enačbe\n$$x+\\frac{2}{x}=3.$$\nRešitev\nEnačbo najprej pomnožimo z $x$ in preoblikujemo v polinomsko enačbo\n$$ x^2+2-3x=0,$$\nv kateri faktoriziramo levo stran\n$$(x-2)(x-1)=0.$$\nSklepamo, da je leva stran enaka $0$, če je en od faktorjev enak $0$. \nTako dobimo dve možnosti\n\\begin{eqnarray}\nx-2=0 & \\implies & x=2\\\nx-1=0 & \\implies & x=1.\n\\end{eqnarray}\nSympy\nPoskusimo priti do rešitve še s Pythonom. Najprej naložimo knjižnico za simbolično računanje SymPy, nato pa deklariramo, naj se spremenljivka x obravnava kot matematični simbol.", "import sympy as sym\nx = sym.symbols(\"x\") # spremenljivka x je matematični simbol", "Za začetek povsem sledimo korakom, ki smo jih naredili „na roke“. Povzamimo „algoritem“\n\nvse člene damo na levo stran\nenačbo pomnožimo z $x$\nlevo stran faktoriziramo\niz faktorjev preberemo rešitev", "enacba = sym.Eq(x+2/x,3)\nenacba", "Vključimo izpis formul v lepši obliki, ki ga omogoča SymPy.", "sym.init_printing() # lepši izpis formul\nenacba\n\n# vse člene damo na levo stran in pomnožimo z x\nleva = (enacba.lhs - enacba.rhs)*x\nleva\n\n# levo stran razpišemo/zmnožimo\nleva = sym.expand(leva)\nleva\n\n# levo stran faktoriziramo\nleva = sym.factor(leva)\nleva", "Od tu naprej postane precej komplicirano, kako rešitve programsko izluščiti iz zadnjega rezultata. Če nas zanimajo le rešitve, lahko zgornji postopek izpustimo in preprosto uporabimo funkcijo solve.", "# rešitve enačbe najlažje dobimo s funkcijo solve\nresitve = sym.solve(enacba)\nresitve", "Grafična rešitev\nRešitve enačbe si lahko predstavljamo grafično. Iščemo vrednosti $x$, pri katerih je leva stran enaka desni. Če narišemo graf leve in desne strani na isto sliko, so rešitve enačbe ravno x-koordinate presečišča obeh grafov. Za risanje grafov uporabimo knjižnico matplotlib. Graf funkcije narišemo tako, da funkcijo tabeliramo v veliko točkah. Da lažje računamo s tabelami, uporabimo tudi knjižnico numpy, ki je namenjena delu z vektorji in matrikami.", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nt = np.arange(-1,3,0.01) # zaporedje x-ov, v katerih bomo tabelirali funkcijo\nleva_f = sym.lambdify(x,enacba.lhs) # lambdify iz leve strani enačbe naredi python funkcijo, ki jo uporabimo na t\ndesna_f = sym.lambdify(x,enacba.rhs) # podobno za desno stran (rhs - right hand side, lhs - left hand side)\nplt.plot(t,leva_f(t)) # leva stran /funkcija leva_f deluje po komponentah seznama t \nplt.plot(t,[desna_f(ti) for ti in t]) # funkcija desna_t je konstanta (število 3) in zato ne vrne seznama iste dolžine kot t \nplt.ylim(0,5)\nplt.plot(resitve,[leva_f(r) for r in resitve],'or')\nplt.show()", "Naloga\nPoišči vse rešitve enačbe \n$$x^2-2=1/x.$$\nUporabi sympy.solve in grafično predstavi rešitve.\nnaprej: neenačbe >>", "import disqus\n%reload_ext disqus\n%disqus matpy" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
csiu/datasci
text/2015-07-23_nltk-and-POS.ipynb
mit
[ "Purpose: To experiment with Python's Natural Language Toolkit.\n\nNLTK is a leading platform for building Python programs to work with human language data", "import pandas as pd\nimport nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem import SnowballStemmer\nfrom collections import Counter", "Input", "bloboftext = \"\"\"\nThis little piggy went to market,\nThis little piggy stayed home,\nThis little piggy had roast beef,\nThis little piggy had none,\nAnd this little piggy went wee wee wee all the way home.\n\"\"\"", "Workflow\n\nTokenization to break text into units e.g. words, phrases, or symbols\nStop word removal to get rid of common words \ne.g. this, a, is", "## Tokenization \nbagofwords = nltk.word_tokenize(bloboftext.lower())\nprint len(bagofwords)\n\n## Stop word removal\nstop = stopwords.words('english')\nbagofwords = [i for i in bagofwords if i not in stop]\nprint len(bagofwords)", "About stemmers and lemmatisation\n\nStemming to reduce a word to its roots \n\ne.g. having => hav\n\n\nLemmatisation to determine a word's lemma/canonical form \n\ne.g. having => have\n\n\nEnglish Stemmers and Lemmatizers\nFor stemming English words with NLTK, you can choose between the PorterStemmer or the LancasterStemmer. The Porter Stemming Algorithm is the oldest stemming algorithm supported in NLTK, originally published in 1979. The Lancaster Stemming Algorithm is much newer, published in 1990, and can be more aggressive than the Porter stemming algorithm.\nThe WordNet Lemmatizer uses the WordNet Database to lookup lemmas. Lemmas differ from stems in that a lemma is a canonical form of the word, while a stem may not be a real word.\n\n\nResources:\nPorterStemmer or the SnowballStemmer (Snowball == Porter2)\nStemming and Lemmatization\nWhat are the major differences and benefits of Porter and Lancaster Stemming algorithms?", "snowball_stemmer = SnowballStemmer(\"english\")\n\n## What words was stemmed?\n_original = set(bagofwords) \n_stemmed = set([snowball_stemmer.stem(i) for i in bagofwords])\n\nprint 'BEFORE:\\t%s' % ', '.join(map(lambda x:'\"%s\"'%x, _original-_stemmed))\nprint ' AFTER:\\t%s' % ', '.join(map(lambda x:'\"%s\"'%x, _stemmed-_original))\n\ndel _original, _stemmed\n\n## Proceed with stemming\nbagofwords = [snowball_stemmer.stem(i) for i in bagofwords]", "Count & POS tag of each stemmed/non-stop word\n\nmeaning of POS tags: Penn Part of Speech Tags\nNN Noun, singular or mass\nVBD Verb, past tense", "for token, count in Counter(bagofwords).most_common():\n print '%d\\t%s\\t%s' % (count, nltk.pos_tag([token])[0][1], token)", "Proportion of POS tags", "record = {}\nfor token, count in Counter(bagofwords).most_common():\n postag = nltk.pos_tag([token])[0][1]\n\n if record.has_key(postag):\n record[postag] += count\n else:\n record[postag] = count\n\nrecordpd = pd.DataFrame.from_dict([record]).T\nrecordpd.columns = ['count']\nN = sum(recordpd['count'])\nrecordpd['percent'] = recordpd['count']/N*100\nrecordpd" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
survey-methods/samplics
docs/source/tutorial/sample_size_calculation.ipynb
mit
[ "Sample size calculation for stage sampling\nIn the cells below, we illustrate a simple example of sample size calculation in the context of household surveys using stage sampling designs. Let's assume that we want to calculate sample size for a vaccination survey in Senegal. We want to stratify the sample by administrative region. We will use the 2017 Senegal Demographic and Health Survey (DHS) (see https://www.dhsprogram.com/publications/publication-FR345-DHS-Final-Reports.cfm) to get an idea of the vaccination coverage rates for some main vaccine-doses. Below, we show coverage rates of hepatitis B birth dose (hepB0) vaccine, first and third dose of diphtheria, tetanus and pertussis (DTP), first dose of measles containing vaccine (MCV1) and coverage of basic vaccination. Basic vaccination refers to the 12-23 months old children that received BCG vaccine, three doses of DTP containing vaccine, three doses of polio vaccine, and the first dose of measles containing vaccine.The table below shows the 2017 Senegal DHS vaccination coverage of a few vaccine-doses for children aged 12 to 23 months old.\n| Region | HepB0 | DTP1 | DTP3 | MCV1 | Basic vaccination |\n| :------------ | :-----: | :-----: | :-----: | :-----: | :----------------: |\n| Dakar | 53.6 | 99.1 | 98.5 | 97.0 | 84.9 |\n| Ziguinchor | 47.1 | 98.6 | 94.1 | 93.6 | 80.9 |\n| Diourbel | 62.8 | 94.6 | 88.2 | 86.1 | 68.2 |\n| Saint-Louis | 40.1 | 99.1 | 97.2 | 94.7 | 80.6 |\n| Tambacounda | 45.0 | 83.3 | 72.7 | 65.3 | 47.0 |\n| Kaolack | 63.9 | 99.6 | 92.2 | 89.3 | 79.7 |\n| Thies | 62.3 | 100.0 | 98.8 | 91.6 | 83.4 |\n| Louga | 49.8 | 96.2 | 87.8 | 81.5 | 67.8 |\n| Fatick | 62.7 | 98.5 | 93.8 | 90.3 | 76.6 |\n| Kolda | 32.8 | 94.4 | 87.3 | 85.6 | 63.7 |\n| Matam | 43.1 | 94.3 | 88.1 | 79.4 | 68.7 |\n| Kaffrine | 56.9 | 98.0 | 93.6 | 88.7 | 76.6 |\n| Kedougou | 44.4 | 70.7 | 60.2 | 46.5 | 33.6 |\n| Sedhiou | 46.6 | 96.8 | 90.4 | 89.9 | 74.2 |\nThe 2017 Senegal DHS data collection happened from April to December 2018. Therefore, the data shown in the table represent children born from October 2016 to December 2017. For the purpose of this tutorial, we will assume that these vaccine coverage rates still hold. Furthermore, we will use the basic vaccination coverage rates to calculate sample size.", "import numpy as np\nimport pandas as pd\n\nimport samplics\nfrom samplics.sampling import SampleSize", "The first step is to create and object using the SampleSize class with the parameter of interest, the sample size calculation method, and the stratification status. In this example, we want to calculate sample size for proportions, using wald method for a stratified design. This is achived with the following snippet of code.\npython\nSampleSize(\n parameter=\"proportion\", method=\"wald\", stratification=True\n)\nBecause, we are using a stratified sample design, it is best to specify the expected coverage levels by stratum. If the information is not available then aggregated values can be used across the strata. The 2017 Senegal DHS published the coverage rates by region hence we have the information available by stratum. 
To provide the informmation to Samplics we use the python dictionaries as follows\npython\nexpected_coverage = {\n \"Dakar\": 0.849,\n \"Ziguinchor\": 0.809,\n \"Diourbel\": 0.682,\n \"Saint-Louis\": 0.806,\n \"Tambacounda\": 0.470,\n \"Kaolack\": 0.797,\n \"Thies\": 0.834,\n \"Louga\": 0.678,\n \"Fatick\": 0.766,\n \"Kolda\": 0.637,\n \"Matam\": 0.687,\n \"Kaffrine\": 0.766,\n \"Kedougou\": 0.336,\n \"Sedhiou\": 0.742,\n}\nNow, we want to calculate the sample size with desired precision of 0.07 which means that we want the expected vaccination coverage rates to have 7% half confidence intervals e.g. expected rate of 90% will have a confidence interval of [83%, 97%]. Note that the desired precision can be specified by stratum in a similar way as the target coverage using a python dictionary.\nGiven that information, we can calculate the sample size using SampleSize class as follows.", "# target coverage rates\nexpected_coverage = {\n \"Dakar\": 0.849,\n \"Ziguinchor\": 0.809,\n \"Diourbel\": 0.682,\n \"Saint-Louis\": 0.806,\n \"Tambacounda\": 0.470,\n \"Kaolack\": 0.797,\n \"Thies\": 0.834,\n \"Louga\": 0.678,\n \"Fatick\": 0.766,\n \"Kolda\": 0.637,\n \"Matam\": 0.687,\n \"Kaffrine\": 0.766,\n \"Kedougou\": 0.336,\n \"Sedhiou\": 0.742,\n}\n\n# Declare the sample size calculation parameters\nsen_vaccine_wald = SampleSize(\n parameter=\"proportion\", method=\"wald\", stratification=True\n)\n\n# calculate the sample size\nsen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07)\n\n# show the calculated sample size\nprint(\"\\nCalculated sample sizes by stratum:\")\nsen_vaccine_wald.samp_size", "SampleSize calculates the sample sizes and store the in teh samp_size attributes which is a python dictinary object. If a dataframe is better suited for the use case, the method to_dataframe() can be used to create a pandas dataframe.", "sen_vaccine_wald_size = sen_vaccine_wald.to_dataframe()\n\nsen_vaccine_wald_size", "The sample size calculation above assumes that the design effect (DEFF) was equal to 1. A design effect of 1 correspond to sampling design with a variance equivalent to a simple random selection of same sample size. In the context of complex sampling designs, DEFF is often different from 1. Stage sampling and unequal weights usually increase the design effect above 1. The 2017 Senegal DHS indicated a design effect equal to 1.963 (1.401^2) for basic vaccination. Hence, to calculate the sample size, we will use the design effect provided by DHS.", "sen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07, deff=1.401 ** 2)\n\nsen_vaccine_wald.to_dataframe()", "Since the sample design is stratified, the sample size calculation will be more precised if DEFF is specified at the stratum level which is available from the 2017 Senegal DHS provided report. Some regions have a design effect below 1. 
To be conservative with our sample size calculation, we will use 1.21 as the minimum design effect to use in the sample size calculation.", "# Target coverage rates\nexpected_deff = {\n \"Dakar\": 1.100 ** 2,\n \"Ziguinchor\": 1.100 ** 2,\n \"Diourbel\": 1.346 ** 2,\n \"Saint-Louis\": 1.484 ** 2,\n \"Tambacounda\": 1.366 ** 2,\n \"Kaolack\": 1.360 ** 2,\n \"Thies\": 1.109 ** 2,\n \"Louga\": 1.902 ** 2,\n \"Fatick\": 1.100 ** 2,\n \"Kolda\": 1.217 ** 2,\n \"Matam\": 1.403 ** 2,\n \"Kaffrine\": 1.256 ** 2,\n \"Kedougou\": 2.280 ** 2,\n \"Sedhiou\": 1.335 ** 2,\n}\n\n# Calculate sample sizes using deff at the stratum level\nsen_vaccine_wald.calculate(target=expected_coverage, half_ci=0.07, deff=expected_deff)\n\n# Convert sample sizes to a dataframe\nsen_vaccine_wald.to_dataframe()", "The sample size calculation above does not account for attrition of sample sizes due to non-response. In the 2017 Semegal DHS, the overal household and women reponse rate was abou 94.2%.", "# Calculate sample sizes with a resp_rate of 94.2%\nsen_vaccine_wald.calculate(\n target=expected_coverage, half_ci=0.07, deff=expected_deff, resp_rate=0.942\n)\n\n# Convert sample sizes to a dataframe\nsen_vaccine_wald.to_dataframe(\n col_names=[\"region\", \"vaccine_coverage\", \"precision\", \"number_12_23_months\"]\n)", "Fleiss method\nThe World Health OR=rganization (WHO) recommends using the Fleiss method for calculating sample size for vaccination coverage survey (see https://www.who.int/immunization/documents/who_ivb_18.09/en/). To use the Fleiss method, the examples shown above are the same with method=\"fleiss\".", "sen_vaccine_fleiss = SampleSize(\n parameter=\"proportion\", method=\"fleiss\", stratification=True\n)\n\nsen_vaccine_fleiss.calculate(\n target=expected_coverage, half_ci=0.07, deff=expected_deff, resp_rate=0.942\n)\n\n\nsen_vaccine_sample = sen_vaccine_fleiss.to_dataframe(\n col_names=[\"region\", \"vaccine_coverage\", \"precision\", \"number_12_23_months\"]\n)\nsen_vaccine_sample", "At this point, we have the number of 12-23 months needed to achieve the desired precision given the expected proportions using wald or fleiss calculation methods.\nNumber of households\nTo obtain the number of households, we need to know the expected average number of children aged 12-23 months per household. This information can be obtained from census data or from surveys' rosters. Since, the design is stratified, it is best to obtain the information per stratum. In this example, we wil assume that 5.2% of the population is between 12 and 23 months of age and apply that to all strata and household. Hence, the minimum number of households to select is:", "sen_vaccine_sample[\"number_households\"] = round(\n sen_vaccine_sample[\"number_12_23_months\"] / 0.052, 0\n)\n\nsen_vaccine_sample", "Similarly, the number of clusters to select can be obtained by dividing the number of households by the number of households per cluster to be selected." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Vasilyeu/mobile_customer
Vasilev_Sergey_eng.ipynb
mit
[ "Forecasting the outflow of clients of the mobile operator\nVasilyeu Siarhei, vasiluev@tut.by, +375 29 7731272\n1. Import libraries and load data", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import VotingClassifier\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.metrics import roc_curve\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.metrics import auc\nfrom sklearn.ensemble import ExtraTreesClassifier\n\npd.set_option('display.max_columns', 500)\npd.set_option('display.width', 1000)\npd.set_option('display.max_rows', 100)\n\nimport warnings\nwarnings.simplefilter('ignore')\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 12, 8\n\ntrain = pd.read_csv(\"train.csv\", sep=';')\ntest = pd.read_csv(\"test.csv\", sep=';')\n\n# Verify the correctness of the load\ntrain.head()\n\ntest.head()", "2. Explore the data and process the missing values", "train.info()\n\ntest.info()\n\n# Define the function to fill the missing values\ndef replace_nan(data):\n # в столбцах 'START_PACK' и 'OFFER_GROUP' заменим NaN на 'Unknown'\n data['START_PACK'] = data['START_PACK'].fillna('Unknown')\n data['OFFER_GROUP'] = data['OFFER_GROUP'].fillna('Unknown')\n \n # столбцы с датами приведем к формату datetime\n data['ACT_DATE'] = pd.to_datetime(data['ACT_DATE'], format='%Y-%m-%d', errors='ignore')\n data['BIRTHDAY'] = pd.to_datetime(data['BIRTHDAY'], format='%Y-%m-%d', errors='ignore')\n \n # в столбце GENDER заменим NaN на M, так как 16034 из 28600 записей имеют значение M\n data['GENDER'] = data['GENDER'].fillna('M')\n \n # по условию задачи, NaN в столбце 'MLLS_STATE' означает что абонент не является участником программы лояльности\n data['MLLS_STATE'] = data['MLLS_STATE'].fillna('No')\n \n # по условиям задачи NaN в столбце 'OBLIG_NUM' означает, что абонент не пользовался рассрочкой\n data['OBLIG_NUM'] = data['OBLIG_NUM'].fillna(0.0)\n \n # NaN в столбце 'ASSET_TYPE_LAST' вероятно означает, что абонент не приобретал оборудование в компании\n data['ASSET_TYPE_LAST'] = data['ASSET_TYPE_LAST'].fillna('Not buying')\n \n # в столбце 'USAGE_AREA' заменим NaN на 'Undefined'\n data['USAGE_AREA'] = data['USAGE_AREA'].fillna('Undefined')\n \n # в остальных столбцах заменим NaN на 0.0, считая что отсутствие данных означает отсутствие активности\n data['REFILL_OCT_16'] = data['REFILL_OCT_16'].fillna(0.0)\n data['REFILL_NOV_16'] = data['REFILL_NOV_16'].fillna(0.0)\n data['OUTGOING_OCT_16'] = data['OUTGOING_OCT_16'].fillna(0.0)\n data['OUTGOING_NOV_16'] = data['OUTGOING_NOV_16'].fillna(0.0)\n data['GPRS_OCT_16'] = data['GPRS_OCT_16'].fillna(0.0)\n data['GPRS_NOV_16'] = data['GPRS_NOV_16'].fillna(0.0)\n data['REVENUE_OCT_16'] = data['REVENUE_OCT_16'].fillna(0.0)\n data['REVENUE_NOV_16'] = data['REVENUE_NOV_16'].fillna(0.0)\n\n# переведем BYR в BYN\ndef byr_to_byn(data):\n data['REFILL_OCT_16'] = data['REFILL_OCT_16']/10000.0\n data['REFILL_NOV_16'] = data['REFILL_NOV_16']/10000.0\n\n# Create several new features\ndef new_features(data):\n \n # срок с даты подключения до 1 декабря 2016 в днях\n data['AGE_ACT'] = [int(i.days) for i in (pd.datetime(2016, 12, 1) - data['ACT_DATE'])]\n \n # день недели, в который состоялось 
подключение\n data['WEEKDAY'] = data['ACT_DATE'].dt.dayofweek\n \n # добавим год рождения абонента и заменим пропущенные данные средним\n data['BIRTH_YEAR'] = pd.DatetimeIndex(data['BIRTHDAY']).year\n data['BIRTH_YEAR'] = data['BIRTH_YEAR'].fillna(data['BIRTH_YEAR'].mean())\n \n # добавим столбец с возрастом абонента на момент подключения\n data['AGE_AB'] = pd.DatetimeIndex(data['ACT_DATE']).year - data['BIRTH_YEAR']\n \n # добавим столбцы с разностями показателей ноября и октября\n data['REFIL_DELTA'] = data['REFILL_NOV_16'] - data['REFILL_OCT_16']\n data['OUTGOING_DELTA'] = data['OUTGOING_NOV_16'] - data['OUTGOING_OCT_16']\n data['GPRS_DELTA'] = data['GPRS_NOV_16'] - data['GPRS_OCT_16']\n data['REVENUE_DELTA'] = data['REVENUE_NOV_16'] - data['REVENUE_OCT_16']\n \n # удалим столбецы 'BIRTHDAY' и 'ACT_DATE'\n del data['BIRTHDAY']\n del data['ACT_DATE']\n\n# переведем BYR в BYN\nbyr_to_byn(train)\nbyr_to_byn(test)\n\n# Process the training data\nreplace_nan(train)\nnew_features(train)\n\n# Process the test data\nreplace_nan(test)\nnew_features(test)\n\ntrain.info()", "Now we have test and train data sets without missing data and with a few new features\n3. Preparing data for machine learning", "# Conversion of categorical data\nle = LabelEncoder()\nfor n in ['STATUS', 'TP_CURRENT', 'START_PACK', 'OFFER_GROUP', 'GENDER', 'MLLS_STATE', \n 'PORTED_IN', 'PORTED_OUT', 'OBLIG_ON_START', 'ASSET_TYPE_LAST', 'DEVICE_TYPE_BUS', 'USAGE_AREA']:\n le.fit(train[n])\n train[n] = le.transform(train[n])\n test[n] = le.transform(test[n])\n\n# Standardization of data\nfeatures = list(train.columns)\ndel features[0]\ndel features[22]\nscaler = StandardScaler()\nfor n in features:\n scaler.fit(train[n])\n train[n] = scaler.transform(train[n])\n test[n] = scaler.transform(test[n])\n\n# Break train into training and test set\nX_train, X_test, y_train, y_test = train_test_split(train[features], \n train.ACTIVITY_DEC_16, \n test_size=0.20, \n random_state=123)", "4. Built the first model to all features", "# Ensemble of classifiers by Weighted Average Probabilities\nclf1 = LogisticRegression(random_state=42)\nclf2 = RandomForestClassifier(random_state=42)\nclf3 = SGDClassifier(loss='log', random_state=42)\n\neclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('sgd', clf3)], voting='soft', weights=[1,1,1])\n\n# Quality control of the model by cross-validation with calculation of ROC AUC\nfor clf, label in zip([clf1, clf2, clf3, eclf], \n ['Logistic Regression', 'Random Forest', 'SGD', 'Ensemble']):\n scores2 = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='roc_auc')\n print(\"ROC AUC: %0.6f (+/- %0.6f) [%s]\" % (scores2.mean(), scores2.std(), label))", "On the training data, the best result is provided by an ensemble of three algorithms\n5. Determine the importance of attributes using the Random Forest", "# Построим лес и подсчитаем важность признаков\nforest = ExtraTreesClassifier(n_estimators=250,\n random_state=0)\n\nforest.fit(X_train, y_train)\nimportances = forest.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in forest.estimators_],\n axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Выведем ранг признаков по важности\nprint(\"Feature ranking:\")\n\nfor f in range(X_train.shape[1]):\n print(\"%d. 
%s (%f)\" % (f + 1, list(X_train.columns)[indices[f]], importances[indices[f]]))\n\n# Сделаем график важности признаков\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X_train.shape[1]), importances[indices],\n color=\"r\", yerr=std[indices], align=\"center\")\nplt.xticks(range(X_train.shape[1]), indices)\nplt.xlim([-1, X_train.shape[1]])\nplt.show()", "As we can see, the most important features are STATUS, USAGE_AREA, DEVICE_TYPE_BUS и REVENUE_NOV_16\n6. Select the features for classification", "# Create a list of features sorted by importance\nimp_features = []\nfor i in indices:\n imp_features.append(features[i])\n\n# the best accuracy is obtained by using the 17 most important features\nbest_features = imp_features[:17]\nX_train2 = X_train[best_features]\n# Quality control of the model by cross-validation with calculation of ROC AUC\nfor clf, label in zip([clf1, clf2, clf3, eclf], \n ['Logistic Regression', 'Random Forest', 'SGD', 'Ensemble']):\n scores2 = cross_val_score(estimator=clf, X=X_train2, y=y_train, cv=10, scoring='roc_auc')\n print(\"ROC AUC: %0.6f (+/- %0.6f) [%s]\" % (scores2.mean(), scores2.std(), label))", "7. Building a classifier based on test data", "# roc curve on test data\ncolors = ['black', 'orange', 'blue', 'green']\nlinestyles = [':', '--', '-.', '-']\nfor clf, label, clr, ls in zip([clf1, clf2, clf3, eclf], \n ['Logistic Regression', 'Random Forest', 'SGD', 'Ensemble'], \n colors, linestyles):\n y_pred = clf.fit(X_train[best_features], y_train).predict_proba(X_test[best_features])[:, 1]\n fpr, tpr, thresholds = roc_curve(y_true=y_test, y_score=y_pred)\n roc_auc = auc(x=fpr, y=tpr)\n plt.plot(fpr, tpr, color=clr, linestyle=ls, label='%s (auc = %0.2f)' % (label, roc_auc))\nplt.legend(loc='lower right')\nplt.plot([0, 1], [0, 1], linestyle='--', color='gray', linewidth=2)\nplt.xlim([-0.1, 1.1])\nplt.ylim([-0.1, 1.1])\nplt.grid()\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.show()", "The ROC AUC values obtained for the cross validation and for the test sample are the same, which indicates that the model is not overfitted and not underfitted.\n8. Getting the final result", "result_pred = eclf.fit(X_train[best_features], y_train).predict_proba(test[best_features])\nresult = pd.DataFrame(test['USER_ID'])\nresult['ACTIVITY_DEC_16_PROB'] = list(result_pred[:, 1])\nresult.to_csv('result.csv', encoding='utf8', index=None)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
guma44/GEOparse
tests/Analyse_hsa-miR-124a-3p_transfection_time-course.ipynb
bsd-3-clause
[ "Analyse hsa-miR-124a-3p transfection time-course\nIn order to do this analysis you have to be in the tests directory of GEOparse.\nIn the paper Systematic identification of microRNA functions by combining target prediction and expression profiling Wang and Wang provided a series of microarrays from 7 time-points after miR-124a transfection. The series can be found in GEO under the GSE6207 accession. We use this series to demonstrate general principles of GEOparse. Mind that this tutorial is not abut how to properly calculate log fold changes - the approach undertaken here is simplistic.\nWe start with the imports:", "%matplotlib inline\nimport GEOparse\nimport pandas as pd\nimport pylab as pl\nimport seaborn as sns\npl.rcParams['figure.figsize'] = (14, 10)\npl.rcParams['ytick.labelsize'] = 12\npl.rcParams['xtick.labelsize'] = 11\npl.rcParams['axes.labelsize'] = 23\npl.rcParams['legend.fontsize'] = 20\nsns.set_style('ticks')\nc1, c2, c3, c4 = sns.color_palette(\"Set1\", 4)", "We also prepared a simple tabulated file with the description of each GSM. It will be usefull to calculate LFC.", "experiments = pd.read_table(\"GSE6207_experiments.tab\")", "We can look in to this file:", "experiments", "Now we select the GSMs that are controls.", "controls = experiments[experiments.Type == 'control'].Experiment.tolist()", "Using GEOparse we can download experiments and look into the data:", "gse = GEOparse.get_GEO(\"GSE6207\")", "The GPL we are interested:", "gse.gpls['GPL570'].columns", "And the columns that are available for exemplary GSM:", "gse.gsms[\"GSM143385\"].columns", "We take the opportunity and check if everything is OK with the control samples. For this we just use simple histogram. To obtain table with each GSM as column, ID_REF as index and VALUE in each cell we use pivot_samples method from GSE object (we restrict the columns to the controls):", "pivoted_control_samples = gse.pivot_samples('VALUE')[controls]\npivoted_control_samples.head()", "And we plot:", "pivoted_control_samples.hist()\nsns.despine(offset=10, trim=True)", "Next we would like to filter out probes that are not expressed. The gene is expressed (in definition here) when its average log2 intensity in control samples is above 0.25 quantile. I.e. we filter out worst 25% genes.", "pivoted_control_samples_average = pivoted_control_samples.median(axis=1)\nprint \"Number of probes before filtering: \", len(pivoted_control_samples_average)\n\nexpression_threshold = pivoted_control_samples_average.quantile(0.25)\n\nexpressed_probes = pivoted_control_samples_average[pivoted_control_samples_average >= expression_threshold].index.tolist()\nprint \"Number of probes above threshold: \", len(expressed_probes)", "We can see that the filtering succeeded. Now we can pivot all the samples and filter out probes that are not expressed:", "samples = gse.pivot_samples(\"VALUE\").ix[expressed_probes]", "The most important thing is to calculate log fold changes. What we have to do is for each time-point identify control and transfected sample and subtract the VALUES (they are provided as log2 transformed already, we subtract transfection from the control). 
In the end we create new DataFrame with LFCs:", "lfc_results = {}\nsequence = ['4 hours',\n '8 hours',\n '16 hours',\n '24 hours',\n '32 hours',\n '72 hours',\n '120 hours']\nfor time, group in experiments.groupby(\"Time\"):\n print time\n control_name = group[group.Type == \"control\"].Experiment.iloc[0]\n transfection_name = group[group.Type == \"transfection\"].Experiment.iloc[0]\n lfc_results[time] = (samples[transfection_name] - samples[control_name]).to_dict()\nlfc_results = pd.DataFrame(lfc_results)[sequence]", "Let's look at the data sorted by 24-hours time-point:", "lfc_results.sort(\"24 hours\").head()", "We are interested in the gene expression changes upon transfection. Thus, we have to annotate each probe with ENTREZ gene ID, remove probes without ENTREZ or with multiple assignments. Although this strategy might not be optimal, after this we average the LFC for each gene over probes.", "# annotate with GPL\nlfc_result_annotated = lfc_results.reset_index().merge(gse.gpls['GPL570'].table[[\"ID\", \"ENTREZ_GENE_ID\"]],\n left_on='index', right_on=\"ID\").set_index('index')\ndel lfc_result_annotated[\"ID\"]\n# remove probes without ENTREZ\nlfc_result_annotated = lfc_result_annotated.dropna(subset=[\"ENTREZ_GENE_ID\"])\n# remove probes with more than one gene assigned\nlfc_result_annotated = lfc_result_annotated[~lfc_result_annotated.ENTREZ_GENE_ID.str.contains(\"///\")]\n# for each gene average LFC over probes\nlfc_result_annotated = lfc_result_annotated.groupby(\"ENTREZ_GENE_ID\").median()", "We can now look at the data:", "lfc_result_annotated.sort(\"24 hours\").head()", "At that point our job is basicaly done. However, we might want to check if the experiments worked out at all. To do this we will use hsa-miR-124a-3p targets predicted by MIRZA-G algorithm. The targets should be downregulated. First we read MIRZA-G results:", "header = [\"GeneID\", \"miRNA\", \"Total score without conservation\", \"Total score with conservation\"]\nmiR124_targets = pd.read_table(\"seed-mirza-g_all_mirnas_per_gene_scores_miR_124a.tab\", names=header)\nmiR124_targets.head()", "We shall extract targets as a simple list of strings:", "miR124_targets_list = map(str, miR124_targets.GeneID.tolist())\nprint \"Number of targets:\", len(miR124_targets_list)", "As can be seen there is a lot of targets (genes that posses a seed match in their 3'UTRs). We will use all of them. As first stem we will annotate genes if they are targets or not and add this information as a column to DataFrame:", "lfc_result_annotated[\"Is miR-124a target\"] = [i in miR124_targets_list for i in lfc_result_annotated.index]\n\ncols_to_plot = [i for i in lfc_result_annotated.columns if \"hour\" in i]", "In the end we can plot the results:", "a = sns.pointplot(data=lfc_result_annotated[lfc_result_annotated[\"Is miR-124a target\"]][cols_to_plot],\n color=c2,\n label=\"miR-124a target\")\nb = sns.pointplot(data=lfc_result_annotated[~lfc_result_annotated[\"Is miR-124a target\"]][cols_to_plot],\n color=c1,\n label=\"No miR-124a target\")\nsns.despine()\npl.legend([pl.mpl.patches.Patch(color=c2), pl.mpl.patches.Patch(color=c1)],\n [\"miR-124a target\", \"No miR-124a target\"], frameon=True, loc='lower left')\npl.xlabel(\"Time after transfection\")\npl.ylabel(\"Median log2 fold change\")", "As can be seen the targets of hsa-miR-124a-3p behaves in the expected way. With each time-point their downregulation is stronger up the 72 hours. After 120 hours the transfection is probably lost. This means that the experiments worked out." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kayzhou22/DSBiz_Project_LendingClub
Data_Preprocessing/LendingClub_DataExploratory.ipynb
mit
[ "Lending Club Data", "import pandas as pd\nimport numpy as np\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import cross_val_score\n\nfrom sklearn.feature_selection import RFE\n\nfrom sklearn.svm import SVR\nfrom sklearn.svm import LinearSVC\nfrom sklearn.svm import LinearSVR\n\nimport seaborn as sns\nimport matplotlib.pylab as pl\n%matplotlib inline\n#import matplotlib.pyplot as plt\n", "Columns Interested\nloan_status -- Current status of the loan<br/>\nloan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.<br/>\nint_rate -- interest rate of the loan <br/>\ngrade -- LC assigned loan grade<br/>\nsub_grade -- LC assigned sub loan grade <br/>\npurpose -- A category provided by the borrower for the loan request. <br/> -- dummy\nannual_inc -- The self-reported annual income provided by the borrower during registration.<br/>\nemp_length -- Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. <br/> -- dummie\nfico_range_low\nfico_range_high\nhome_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER <br/>\ntot_cur_bal -- Total current balance of all accounts \nnum_actv_bc_tl -- number of active bank accounts<br/>\n (avg_cur_bal -- average current balance of all accounts )<br/>\nmort_acc -- number of mortgage accounts<br/>\nnum_actv_rev_tl -- Number of currently active revolving trades<br/>\ndti -- A ratio calculated using the borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income. 
\npub_rec_bankruptcies - Number of public record bankruptcies<br/>\ndelinq_amnt -- \n\ntitle -- \nmths_since_last_delinq -- The number of months since the borrower's last delinquency.<br/>\nmths_since_recent_revol_delinq -- Months since most recent revolving delinquency.<br/>\ntotal_cu_tl -- Number of finance trades<br/>\nlast_credit_pull_d -- The most recent month LC pulled credit for this loan<br/>", "## 2015\ndf_app_2015 = pd.read_csv('data/LoanStats3d_securev1.csv.zip', compression='zip', low_memory=False,\\\n header=1)\n\ndf_app_2015.loan_status.unique()\n\ndf_app_2015.head(5)\n\ndf_app_2015['delinq_amnt'].unique()\n\ndf_app_2015.info(max_cols=111)\n\ndf_app_2015.groupby('title').loan_amnt.mean()\n\ndf_app_2015.groupby('purpose').loan_amnt.mean()\n\ndf_app_2015['emp_length'].unique()", "Decriptive Analyss\n\nAnnual income distribution\nTotal loan amount groupby interest rate chunks\nAverage loan amount groupby grade\nAverage loan amount groupby", "## selected columns\ndf = df_app_2015.ix[:, ['loan_status','loan_amnt', 'int_rate', 'grade', 'sub_grade',\\\n 'purpose',\\\n 'annual_inc', 'emp_length', 'home_ownership',\\\n 'fico_range_low','fico_range_high',\\\n 'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc','num_actv_rev_tl',\\\n 'pub_rec_bankruptcies','dti' ]]\n \n\ndf.head(3)\n\nlen(df.dropna())\n\ndf.shape\n\ndf.loan_status.unique()\n\nlen(df[df['loan_status']=='Fully Paid'])\n\nlen(df[df['loan_status']=='Default'])\n\nlen(df[df['loan_status']=='Charged Off'])\n\nlen(df[df['loan_status']=='Late (31-120 days)'])\n\ndf.info()\n\ndf.loan_status.unique()\n\n## Convert applicable fields to numeric (I only select \"Interest Rate\" to use for this analysis)\ndf.ix[:,'int_rate'] = df.ix[:,['int_rate']]\\\n .applymap(lambda e: pd.to_numeric(str(e).rstrip()[:-1], errors='coerce'))\ndf.info()\n\ndf = df.rename(columns={\"int_rate\": \"int_rate(%)\"})\n\ndf.head(3)\n\n#len(df.dropna(thresh= , axis=1).columns)\n\ndf.describe()\n\n# 1. Loan Amount distribution\n# # create plots and histogram to visualize total loan amounts \nfig = pl.figure(figsize=(8,10))\nax1 = fig.add_subplot(211)\nax1.plot(range(len(df)), sorted(df.loan_amnt), '.', color='purple')\nax1.set_xlabel('Loan Applicant Count')\nax1.set_ylabel('Loan Amount ($)')\nax1.set_title('Fig 1a - Sorted Issued Loan Amount (2015)', size=15)\n\n# all_ histogram\n# pick upper bound 900 to exclude too large numbers\nax2 = fig.add_subplot(212)\nax2.hist(df.loan_amnt, range=(df.loan_amnt.min(), 36000), color='purple')\nax2.set_xlabel('Loan Amount -$', size=12)\nax2.set_ylabel('Counts',size=12)\nax2.set_title('Fig 1b - Sorted Issued Loan Amount (2015)', size=15)", "Fig 1a shows the sorted issued loan amounts from low to high.<br/>\nFig 2c is a histogram showing the distribution of the issued loan amounts.\nObeservation<br/>\nThe Loan amounts vary from $1000 to $35,000, and the most frequent loan amounts issued are around $10,000.", "inc_75 = df.describe().loc['75%', 'annual_inc']\ncount_75 = int(len(df)*0.75)\n\n# 2. 
Applicant Anual Income Distribution\n\nfig = pl.figure(figsize=(8,16))\n\nax0 = fig.add_subplot(311)\nax0.plot(range(len(df.annual_inc)), sorted(df.annual_inc), '.', color='blue')\nax0.set_xlabel('Loan Applicant Count')\nax0.set_ylabel('Applicant Annual Income ($)')\nax0.set_title('Fig 2a - Sorted Applicant Annual Income-all ($) (2015)', size=15)\n\n# use 75% quantile to plot the graph and histograms -- excluding extreme values\ninc_75 = df.describe().loc['75%', 'annual_inc']\ninc_below75 = df.annual_inc[df.annual_inc <= inc_75]\ncount_75 = int(len(df)*0.75)\n\nax1 = fig.add_subplot(312)\nax1.plot(range(count_75), sorted(df.annual_inc)[:count_75], '.', color='blue')\nax1.set_xlabel('Loan Applicant Count')\nax1.set_ylabel('Applicant Annual Income ($)')\nax1.set_title('Fig 2b - Sorted Applicant Annual Income-75% ($) (2015)',size=15)\n\n# all_ histogram\n# pick upper bound 900 to exclude too large numbers\nax2 = fig.add_subplot(313)\nax2.hist(df.annual_inc, range=(df.annual_inc.min(), inc_75), color='blue')\nax2.set_xlabel('Applicant Annual Income -$', size=12)\nax2.set_ylabel('Counts',size=12)\nax2.set_title('Fig 2c - Sorted Applicant Income-75% ($) (2015)',size=15)", "Fig 2a and Fig 2b both show the sorted applicant annual income from low to high. The former indicates extreme values, and the latter plots only those values below the 75% quantile, which looks more sensible.<br/>\nFig 2c is a histogram showing the distribution of the applicants' income (below 75% quantile).\nObeservation\nThe most frequent annual income amounts of ths applicants are between $40,000 and below $60,000.", "4.600000e+04\n\n# 3. Loan amount and Applicant Annual Income\n# View all\npl.figure(figsize=(6,4))\npl.plot(df.annual_inc, df.loan_amnt, '.')\npl.ylim(0, 40000)\npl.xlim(0, 0.2e7) # df.annual_inc.max()\npl.title('Fig 3a - Loan Amount VS Applicant Annual Income_all', size=15)\npl.ylabel('Loan Amount ($)', size=15)\npl.xlabel('Applicant Annual Income ($)', size=15)", "Fig 3a shows the approved loan amount against the applicants' annual income. <br/>\n Oberservation:<br/>\nWe can see that there are a few people with self-reported income that is very high, while majority of the applicants are with income less than $100,000. These extreme values indicate a possibility of outliers. \nMethod to deal with Outliers <br/>\nLocate Outliers using Median-Absolute-Deviation (MAD) test and remove them for further analysis\nPick samples to set outlier range using the mean of the outlier boundries-- the method could be improved by using ramdom sampling", "# 3b\npl.figure(figsize=(6,4))\npl.plot(df.annual_inc, df.loan_amnt, '.')\npl.ylim(0, 40000)\npl.xlim(0, inc_75)\npl.title('Fig 3b - Loan Amount VS Applicant Annual Income_75%', size=15)\npl.ylabel('Loan Amount ($)', size=15)\npl.xlabel('Applicant Annual Income ($)', size=15)", "Fig 3b is plot of the loan amount VS applicant annual income with all extreme income amounts being excluded. \nObservation:<br/>\nNow it is clearer to see that there is quite \"rigid\" standard to determine loan amounts based on income, however, there are still exceptions (sparse points above the \"division line\".", "pl.plot(np.log(df.annual_inc), np.log(df.loan_amnt), '.')\n\n# 4. 
Average loan amount groupby grade\nmean_loan_grade = df.groupby('grade')['loan_amnt'].mean()\nmean_loan_grade\n\nsum_loan_grade = df.groupby('grade')['loan_amnt'].sum()\nsum_loan_grade\n\nfig = pl.figure(figsize=(8,12)) #16,5\n\nax0 = fig.add_subplot(211)\nax0.plot(range(len(mean_loan_grade)), mean_loan_grade, 'o', color='blue')\n\nax0.set_ylim(0, 23000)\nax0.set_xlim(-0.5, len(mean_loan_grade))\n\nax0.set_xticks(range(len(mean_loan_grade)))\nax0.set_xticklabels(('A','B','C','D','E','F','G'))\nax0.set_xlabel('Grade')\nax0.set_ylabel('Average Loan Amount ($)')\nax0.set_title('Fig 4a - Average Loan Amount by Grade ($) (2015)', size=15)\n\n\nax1 = fig.add_subplot(212)\nax1.plot(range(len(sum_loan_grade)), sum_loan_grade, 'o', color='brown')\n\nax1.set_ylim(0, 2.3e9)\nax1.set_xlim(-0.5, len(sum_loan_grade))\n\nax1.set_xticks(range(len(sum_loan_grade)))\nax1.set_xticklabels(('A','B','C','D','E','F','G'))\nax1.set_xlabel('Grade')\nax1.set_ylabel('Total Loan Amount ($)')\nax1.set_title('Fig 4b - Total Loan Amount by Grade ($) (2015)', size=15)\n", "Fig 4a shows the avereage approved loan amounts corresponded to the grades determined by the Lending Club. <br/>\nFig 4b shows the total approved loan amounts corresponded to the grades determined by the Lending Club. <br/>\n Oberservation:<br/>\nIt is interesting to see that the points in these two charts have different trends-- the total loan amount gets higher from grade A to C, and then fall to a very low level; the average loan amount falls a little from grade A to grade B, and then gradually increases as the grade goes from B to G (increased by more than $5,000 from B to G)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CELMA-project/CELMA
MES/polAvg/calculations/exactSolutions.ipynb
lgpl-3.0
[ "Exact solution used in MES runs\nWe would like to MES the operation\n\\begin{eqnarray}\n\\frac{\\int_0^{2\\pi} f \\rho d\\theta}{\\int_0^{2\\pi} \\rho d\\theta}\n= \\frac{\\int_0^{2\\pi} f d\\theta}{\\int_0^{2\\pi} d\\theta}\n= \\frac{\\int_0^{2\\pi} f d\\theta}{2\\pi}\n\\end{eqnarray}\nUsing cylindrical geometry.", "%matplotlib notebook\n\nfrom sympy import init_printing\nfrom sympy import S\nfrom sympy import sin, cos, tanh, exp, pi, sqrt\nfrom sympy import integrate\nimport numpy as np\n\nfrom boutdata.mms import x, y, z, t\n\nimport os, sys\n# If we add to sys.path, then it must be an absolute path\ncommon_dir = os.path.abspath('./../../../common')\n# Sys path is a list of system paths\nsys.path.append(common_dir)\nfrom CELMAPy.MES import get_metric, make_plot, BOUT_print\n\ninit_printing()", "Initialize", "folder = '../zHat/'\nmetric = get_metric()", "Define the variables", "# Initialization\nthe_vars = {}", "Define the function to take the derivative of\nNOTE:\nThese do not need to be fulfilled in order to get convergence\n\nz must be periodic\nThe field $f(\\rho, \\theta)$ must be of class infinity in $z=0$ and $z=2\\pi$\nThe field $f(\\rho, \\theta)$ must be continuous in the $\\rho$ direction with $f(\\rho, \\theta + \\pi)$\n\nBut this needs to be fulfilled:\n1. The field $f(\\rho, \\theta)$ must be single valued when $\\rho\\to0$\n2. Eventual BC in $\\rho$ must be satisfied", "# We need Lx\nfrom boututils.options import BOUTOptions\nmyOpts = BOUTOptions(folder)\nLx = eval(myOpts.geom['Lx'])\n\n# Z hat function\n\n# NOTE: The function is not continuous over origo\n\ns = 2\nc = pi\nw = pi/2\nthe_vars['f'] = ((1/2)*(tanh(s*(z-(c-w/2)))-tanh(s*(z-(c+w/2)))))*sin(3*2*pi*x/Lx)", "Calculating the solution", "the_vars['S'] = (integrate(the_vars['f'], (z, 0, 2*np.pi))/(2*np.pi)).evalf()", "Plot", "make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)", "Print the variables in BOUT++ format", "BOUT_print(the_vars, rational=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
phoebe-project/phoebe2-docs
2.3/tutorials/compute_times_phases.ipynb
gpl-3.0
[ "Advanced: compute_times & compute_phases\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.3,<2.4\"", "Let's get started with some basic imports.", "import phoebe\nfrom phoebe import u # units\n\nb = phoebe.default_binary()\n\nb.add_dataset('lc', times=phoebe.linspace(0,10,101), dataset='lc01')", "Overriding Computation Times\nIf compute_times is not empty (by either providing compute_times or compute_phases), the provided value will be used to compute the model instead of those in the times parameter.\nIn the case of a mesh dataset or orbit dataset, observations cannot be attached to the dataset, so a times parameter does not exist. In this case compute_times or compute_phases will always be used.", "print(b.filter(qualifier=['times', 'compute_times'], context='dataset'))\n\nb.set_value('compute_times', phoebe.linspace(0,3,11))\n\nb.run_compute()\n\nprint(\"dataset times: {}\\ndataset compute_times: {}\\nmodel times: {}\".format(\n b.get_value('times', context='dataset'),\n b.get_value('compute_times', context='dataset'),\n b.get_value('times', context='model')\n ))", "compute_times (when not empty) overrides the value of times when computing the model. However, passing times as a keyword argument to run_compute will take precedence over either - and override the computed times across all enabled datasets.", "b.run_compute(times=[0,0.2])\n\nprint(\"dataset times: {}\\ndataset compute_times: {}\\nmodel times: {}\".format(\n b.get_value('times', context='dataset'),\n b.get_value('compute_times', context='dataset'),\n b.get_value('times', context='model')\n ))\n\nb.run_compute()\n\nprint(\"dataset times: {}\\ndataset compute_times: {}\\nmodel times: {}\".format(\n b.get_value('times', context='dataset'),\n b.get_value('compute_times', context='dataset'),\n b.get_value('times', context='model')\n ))", "Phase-Time Conversion\nIn addition to the ability to provide compute_times, we can alternatively provide compute_phases. These two parameters are linked via a constraint (see the constraints tutorial), with compute_phases constrained by default.", "print(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'compute_phases_t0'], context='dataset'))", "Essentially, this constraint does the same thing as b.to_phase or b.to_time, using the appropriate t0 according to phases_t0 from the top-level orbit in the hierarchy.\nNote that in the case of time-dependent systems, this mapping will also adhere to phases_dpdt (in the case of dpdt and/or phases_period (in the case of apsidal motion (dperdt).", "print(b.get_constraint('compute_phases'))\n\nprint(b.get_parameter('phases_t0').choices)", "In order to provide compute_phases instead of compute_times, we must call b.flip_constraint.", "b.flip_constraint('compute_phases', solve_for='compute_times')\n\nb.set_value('compute_phases', phoebe.linspace(0,1,11))\n\nprint(b.filter(qualifier=['times', 'compute_times', 'compute_phases', 'phases_t0'], context='dataset'))", "Note that under the hood, PHOEBE always works in time-space, meaning it is the constrained value of compute_times that is being passed under-the-hood.\nAlso note that if directly passing compute_phases to b.add_dataset, the constraint will be flipped on our behalf. 
We would then need to flip the constraint in order to provide compute_times instead.\nFinally, it is important to make the distinction that this is not adding support for observations in phases. If we have an old light curve that is only available in phase, we still must convert these to times manually (or via b.to_time). This restriction is intentional: we do not want the mapping between phase and time to change as the ephemeris is changed or fitted, rather we want to try to map from phase to time using the ephemeris that was originally used when the dataset was recorded (if possible, or the best possible guess).\nInterpolating the Model\nWhether or not we used compute_times/compute_phases or not, it is sometimes useful to be able to interpolate on the resulting model. In the case where we provided compute_times/compute_phases to \"down-sample\" from a large dataset, this can be particularly useful.\nWe can call interp_value on any FloatArrayParameter.", "b.get_parameter('fluxes', context='model').get_value()\n\nb.get_parameter('fluxes', context='model').interp_value(times=1.0)\n\nb.get_parameter('fluxes', context='model').interp_value(times=phoebe.linspace(0,3,101))", "In the case of times, this will automatically interpolate in phase-space if the provided time is outside the range of the referenced times array. If you have a logger enabled with at least the 'warning' level, this will raise a warning and state the phases at which the interpolation will be completed.", "b.get_parameter('fluxes', context='model').interp_value(times=5)", "Determining & Plotting Residuals\nOne particularly useful case for interpolating is to compare a model (perhaps computed in phase-space) to a dataset with a large number of datapoints. We can do this directly by calling compute_residuals, which will handle any necessary interpolation and compare the dependent variable between the dataset and models.\nNote that if there are more than one dataset or model attached to the bundle, it may be necessary to pass dataset and/or model (or filter in advanced and call compute_residuals on the filtered ParameterSet.\nTo see this in action, we'll first create a \"fake\" observational dataset, add some noise, recompute the model using compute_phases, and then calculate the residuals.", "b.add_dataset('lc', \n times=phoebe.linspace(0,10,1000),\n dataset='lc01',\n overwrite=True)\n\nb.run_compute(irrad_method='none')\n\nfluxes = b.get_value('fluxes', context='model')\nb.set_value('fluxes', context='dataset', value=fluxes)\n\nb.flip_constraint('compute_phases', solve_for='compute_times')\n\nb.set_value('compute_phases', phoebe.linspace(0,1,101))\n\nb.set_value('teff', component='primary', value=5950)\n\nb.run_compute(irrad_method='none')\n\nprint(len(b.get_value('fluxes', context='dataset')), len(b.get_value('fluxes', context='model')))\n\nb.calculate_residuals()", "If we plot the dataset and model, we see that the model was only computed for one cycle, whereas the dataset extends further in time.", "afig, mplfig = b.plot(show=True)", "But we can also plot the residuals. Here, calculate_residuals is called internally, interpolating in phase-space, and then plotted in time-space. See the options for y in the plot API docs for more details.", "afig, mplfig = b.plot(y='residuals', show=True)", "See Also\nThe following other advanced tutorials may interest you:\n* Advanced: Phase Masking\n* Advanced: Solver Times" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GitYiheng/reinforcement_learning_test
test05_machine_learning/Code snippets.ipynb
mit
[ "Decision Tree\nCART (Classification and Regression Tree)\nTraining a Decision Tree with Scikit-Learn Library", "import pandas as pd\n\nfrom sklearn import tree\n\nX = [[0, 0], [1, 2]]\ny = [0, 1]\n\nclf = tree.DecisionTreeClassifier()\n\nclf = clf.fit(X, y)\n\nclf.predict([[2., 2.]])\n\nclf.predict_proba([[2. , 2.]])\n\nclf.predict([[0.4, 1.2]])\n\nclf.predict_proba([[0.4, 1.2]])\n\nclf.predict_proba([[0, 0.2]])", "DecisionTreeClassifier is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, …, K-1]) classification.\nApplying to Iris Dataset", "from sklearn.datasets import load_iris\nfrom sklearn import tree\niris = load_iris()\n\niris.data[0:5]\n\niris.feature_names\n\nX = iris.data[:, 2:]\n\ny = iris.target\n\ny\n\nclf = tree.DecisionTreeClassifier(random_state=42)\n\nclf = clf.fit(X, y)\n\nfrom sklearn.tree import export_graphviz\n\nexport_graphviz(clf,\n out_file=\"tree.dot\",\n feature_names=iris.feature_names[2:],\n class_names=iris.target_names,\n rounded=True,\n filled=True)\n\nimport graphviz\n\ndot_data = tree.export_graphviz(clf, out_file=None,\n feature_names=iris.feature_names[2:],\n class_names=iris.target_names,\n rounded=True,\n filled=True)\n\ngraph = graphviz.Source(dot_data)\n\nimport numpy as np\nimport seaborn as sns\nsns.set_style('whitegrid')\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Start Here", "df = sns.load_dataset('iris')\ndf.head()\n\ncol = ['petal_length', 'petal_width']\nX = df.loc[:, col]\n\nspecies_to_num = {'setosa': 0,\n 'versicolor': 1,\n 'virginica': 2}\ndf['tmp'] = df['species'].map(species_to_num)\ny = df['tmp']\n\nclf = tree.DecisionTreeClassifier()\nclf = clf.fit(X, y)\n\nX[0:5]\n\nX.values\n\nX.values.reshape(-1,1)\n\nXv = X.values.reshape(-1,1)\n\nXv\n\nh = 0.02 # set the spacing\n\nXv.min()\n\nXv.max() + 1\n\nx_min, x_max = Xv.min(), Xv.max() + 1\n\ny.min()\n\ny.max() + 1\n\ny_min, y_max = y.min(), y.max() + 1\n\ny_min\n\ny_max\n\nnp.arange(x_min, x_max, h)\n\nnp.arange(y_min, y_max, h)\n\nnp.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n\nxx\n\nyy\n\nxx.ravel()\n\nxx.ravel?\n\nyy.ravel()\n\nnp.c_[xx.ravel(), yy.ravel()]\n\nnp.c_?\n\npd.DataFrame(np.c_[xx.ravel(), yy.ravel()])\n\nz = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n\nz\n\nxx.shape\n\nz.shape\n\nz = z.reshape(xx.shape)\n\nz.shape\n\nplt.contourf?", "matplotlib documentation", "fig = plt.figure(figsize=(16,10))\nax = plt.contourf(xx, yy, z, cmap = 'afmhot', alpha=0.3);\n\nfig = plt.figure(figsize=(16,10))\nplt.scatter(X.values[:, 0], X.values[:, 1], c=y, s=80, \n alpha=0.9, edgecolors='g');\n\nfig = plt.figure(figsize=(16,10))\nax = plt.contourf(xx, yy, z, cmap = 'afmhot', alpha=0.3);\nplt.scatter(X.values[:, 0], X.values[:, 1], c=y, s=80, \n alpha=0.9, edgecolors='g');", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
danielfather7/teach_Python
SEDS_Hw/seds-hw-2-procedural-python-part-1-danielfather7/SEDS-HW2.ipynb
gpl-3.0
[ "Part 1 : For a single file", "import os\nfilename = 'HCEPDB_moldata.zip'\nif os.path.exists(filename):\n print('File already exists.') \nelse:\n print(\"File doesn't exist.\")\n\nimport requests \nurl = 'http://faculty.washington.edu/dacb/HCEPDB_moldata.zip'\nreq = requests.get(url)\nassert req.status_code == 200\nwith open(filename, 'wb') as f:\n f.write(req.content)\n\nimport zipfile\nimport pandas as pd\ncsv_filename = 'HCEPDB_moldata.csv'\nzf = zipfile.ZipFile(filename)\ndata = pd.read_csv(zf.open(csv_filename))\n\ndata.head()", "Part 2 : For three or more files\nSet 1: download and unzip files, and read data.\n\nCreate a list for all files, and two dictionaries to conect to their url and file name of .csv. \nCheck which file exists by using os.path.exists in for and if loop, and print out results.\nOnly download files which don't exist by putting code in else loop.\nAdd some print commands in the loop to show which file is downloading and tell after it is done.\nUnzip the files, and use zf list and data lits to read 3 .csv files respectively.\n<span style=\"color:red\">Since 3 sets of data are the same kind of data, I first creat a blank data frame outside the for loop, and then use append command to merge all the data.\nUse shape and tail command to check data.", "import os\nimport requests\nimport zipfile\nimport pandas as pd\n\nzipfiles = ['HCEPDB_moldata_set1.zip','HCEPDB_moldata_set2.zip','HCEPDB_moldata_set3.zip']\nurl = {'HCEPDB_moldata_set1.zip':'http://faculty.washington.edu/dacb/HCEPDB_moldata_set1.zip','HCEPDB_moldata_set2.zip':'http://faculty.washington.edu/dacb/HCEPDB_moldata_set2.zip','HCEPDB_moldata_set3.zip':'http://faculty.washington.edu/dacb/HCEPDB_moldata_set3.zip'}\ncsvfile = {'HCEPDB_moldata_set1.zip':'HCEPDB_moldata_set1.csv','HCEPDB_moldata_set2.zip':'HCEPDB_moldata_set2.csv','HCEPDB_moldata_set3.zip':'HCEPDB_moldata_set3.csv'}\nzf = []\ndata = []\nalldata = pd.DataFrame()\nfor i in range(len(zipfiles)):\n#check whether file exists.\n if os.path.exists(zipfiles[i]):\n print(zipfiles[i],'exists.')\n else:\n print(zipfiles[i],\"doesn't exist.\")\n#Download files.\n print(zipfiles[i],'is downloading.')\n req = requests.get(url[zipfiles[i]])\n assert req.status_code == 200\n with open(zipfiles[i], 'wb') as f:\n f.write(req.content)\n print(zipfiles[i],'is downloaded.')\n#Unzip and read .csv files. 
\n zf.append(zipfile.ZipFile(zipfiles[i]))\n data.append(pd.read_csv(zf[i].open(csvfile[zipfiles[i]])))\n alldata = alldata.append(data[i],ignore_index=True)\n#Check data\nprint('\\nCheck data') \nprint('shape of',csvfile[zipfiles[0]],'=',data[0].shape,'\\nshape of',csvfile[zipfiles[1]],'=',data[1].shape,'\\nshape of',csvfile[zipfiles[2]],'=',data[2].shape, '\\nshape of all data =',alldata.shape)\nprint('\\n')\nalldata.tail()", "Set 2: analyza data", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport math\nalldata['(xi-x)^2'] = (alldata['mass'] - alldata['mass'].mean())**2\nSD = math.sqrt(sum(alldata['(xi-x)^2'])/alldata.shape[0])\nM = alldata['mass'].mean()\nprint('standard diviation of mass = ',SD,', mean of mass = ',M,\"\\n\")\nalldata['mass_group'] = pd.cut(alldata['mass'],bins=[min(alldata['mass']),M-3*SD,M-2*SD,M-SD,M+SD,M+2*SD,M+3*SD,max(alldata['mass'])],labels=[\"<(-3SD)\",\"-3SD~-2SD\",\"-2SD~-SD\",\"-SD~+SD\",\"+SD~+2SD\",\"+2SD~+3SD\",\">(+3SD)\"])\ncount = pd.value_counts(alldata['mass_group'],normalize=True)\nprint(\"Count numbers in each group(%)\\n\",count,\"\\n\")\nprint(\"within 1 standard diviation:\",count[3],\"\\nwithin 2 standard diviation:\",count[2]+count[3]+count[4],\"\\nwithin 3 standard diviation:\",count[2]+count[3]+count[4]+count[1]+count[5],\"\\n\")\nprint(\"Conclusions: mass is nearly normal distribution!\")", "Part 3: Compare Part 1 and Part 2\nIn part 2, I can download mutiple files which are not exist yet, and the length of the code is almost as much as part 1, which means it's much shorther than to replicate codes for 3 times. Furthermore, I just have to add a new file to the list zipfiles by append command, and add its url and .csv filename to dictionaries if there are new collected data files which need to be downloaded. The rest parts of codes are unchanged, which makes it easy to maintain." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xiaozhouw/663
Code_Report.ipynb
mit
[ "Sta 663 Final Project\nby Hao Sheng, Xiaozhou Wang\n\nnetid: hs220, xw106\nemail: {hao.sheng,xiaozhou.wang}@duke.edu\nPlease make sure you have installed our package before runing this ipython notebook !!\nhmmlearn (implmented by others) can be installed through pip3 install hmmlearn in shell\n\nThis project implements the memory sparse version of Viterbi algorithm and Baum-Welch algorithm to hidden Markov Model. \nThe whole project is based on the paper ''Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory'', written by Alexander Churbanov and Stephen Winters-Hilt.\nLoading packages", "import numpy as np\nfrom numpy import random\nfrom collections import deque\nimport matplotlib.pyplot as plt\nimport HMM\nimport pandas as pd\nfrom hmmlearn import hmm", "Benchmark of vectorization", "pi=np.array([.3,.3,.4])\nA=np.array([[.2,.3,.5],[.1,.5,.4],[.6,.1,.3]])\nB=np.array([[0.1,0.5,0.4],[0.2,0.4,0.4],[0.3,0.6,0.1]])\nstates,sequence=HMM.sim_HMM(A,B,pi,100)\n\n%timeit HMM.Baum_Welch(A,B,pi,sequence,1000,0,scale=True)\n%timeit HMM.hmm_unoptimized.Baum_Welch(A,B,pi,sequence,1000,0,scale=True)\n%timeit HMM.Baum_Welch(A,B,pi,sequence,1000,0,scale=False)\n%timeit HMM.hmm_unoptimized.Baum_Welch(A,B,pi,sequence,1000,0,scale=False)", "As for the optimization, we employed vectorization to avoid the use of triple for-loops under the update section of the Baum-Welch algorithm. We used broadcasting with numpy.newaxis to implement Baum-Welch algorithm much faster. As we can see from Benchmark part in the report, under class HMM we have 2 functions for Baum-Welch algorithm called Baum_Welch and Baum_Welch_fast. In Baum_Welch_fast, vectorization is applied when calculating \n$\\xi$ while in Baum_Welch, we use a for loop. Notice in Baum_Welch, all other parts are implemented with vectorization. This is just an example how vectorization greatly improve the speed. Notice that the run time for vectorized Baum-Welch algorithm is 2.43 s per loop (with scaling) and 1 s per loop (without scaling) compared to 4.01 s per loop (with scaling) and 261 s per loop (without scaling). Other functions are implemented with vectorization as well. Vectorization greatly improves our time performance.\nSimulations\nEffect of chain length", "A=np.array([[0.1,0.5,0.4],[0.3,0.5,0.2],[0.7,0.2,0.1]])\nB=np.array([[0.1,0.1,0.1,0.7],[0.5,0.5,0,0],[0.7,0.1,0.1,0.1]])\npi=np.array([0.25,0.25,0.5])\nA_init=np.array([[0.2,0.6,0.2],[0.25,0.5,0.25],[0.6,0.2,0.2]])\nB_init=np.array([[0.05,0.1,0.15,0.7],[0.4,0.4,0.1,0.1],[0.6,0.2,0.2,0.2]])\npi_init=np.array([0.3,0.3,0.4])\n\nlengths=[50,100,200,500,1000]\nacc=[]\nk=30\nfor i in lengths:\n mean_acc=0\n for j in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,i)\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,\n pi_init,sequence,10,0,True)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n mean_acc=mean_acc+np.mean(seq_hat==states)\n acc.append(mean_acc/k)\n\nplt.plot(lengths,acc)", "From the plot we can see that the length of the chain does have an effect on the performance of Baum-Welch Algorithm and Viterbi decoding. We can see that when the chain is too long, the algorithms tend to have a bad results. 
\nEffects of initial values in Baum-Welch Algorithm", "A=np.array([[0.1,0.5,0.4],[0.3,0.5,0.2],[0.7,0.2,0.1]])\nB=np.array([[0.1,0.1,0.1,0.7],[0.5,0.5,0,0],[0.7,0.1,0.1,0.1]])\npi=np.array([0.25,0.25,0.5])\n\n############INITIAL VALUES 1###############\nA_init=np.array([[0.2,0.6,0.2],[0.25,0.5,0.25],[0.6,0.2,0.2]])\nB_init=np.array([[0.05,0.1,0.15,0.7],[0.4,0.4,0.1,0.1],[0.6,0.2,0.2,0.2]])\npi_init=np.array([0.3,0.3,0.4])\nk=50\nacc=np.zeros(k)\nfor i in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,500)\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,10,0,False)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[i]=np.mean(seq_hat==states)\nprint(\"Accuracy: \",np.mean(acc))\n\n############INITIAL VALUES 2###############\nA_init=np.array([[0.5,0.25,0.25],[0.1,0.4,0.5],[0.25,0.1,0.65]])\nB_init=np.array([[0.3,0.4,0.2,0.1],[0.1,0.5,0.2,0.2],[0.1,0.1,0.4,0.4]])\npi_init=np.array([0.5,0.2,0.3])\nk=50\nacc=np.zeros(k)\nfor i in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,500)\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,10,0,True)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[i]=np.mean(seq_hat==states)\nprint(\"Accuracy: \",np.mean(acc))\n\n############INITIAL VALUES 3###############\nA_init=np.array([[0.2,0.6,0.2],[0.25,0.5,0.25],[0.6,0.2,0.2]])\nB_init=np.array([[0.3,0.4,0.2,0.1],[0.1,0.5,0.2,0.2],[0.1,0.1,0.4,0.4]])\npi_init=np.array([0.5,0.2,0.3])\nk=50\nacc=np.zeros(k)\nfor i in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,500)\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,10,0,True)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[i]=np.mean(seq_hat==states)\nprint(\"Accuracy: \",np.mean(acc))\n\n############INITIAL VALUES 4###############\nA_init=np.array([[0.5,0.25,0.25],[0.1,0.4,0.5],[0.25,0.1,0.65]])\nB_init=np.array([[0.05,0.1,0.15,0.7],[0.4,0.4,0.1,0.1],[0.6,0.2,0.2,0.2]])\npi_init=np.array([0.5,0.2,0.3])\nk=50\nacc=np.zeros(k)\nfor i in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,500)\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,10,0,True)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[i]=np.mean(seq_hat==states)\nprint(\"Accuracy: \",np.mean(acc))", "From this part, we can see that the choice of initial values are greatly important. Because Baum-Welch algorithm does not guarantee global maximum, a bad choice of initial values will make Baum-Welch Algorithm to be trapped in a local maximum. Moreover, our experiments show that the initial values for emission matrix $B$ are more important by comparing initial values 3 and 4. 
The initial parameters represent your belief.\nEffect of number of iteration in Baum-Welch Algorithm", "############INITIAL VALUES 1###############\nA_init=np.array([[0.2,0.6,0.2],[0.25,0.5,0.25],[0.6,0.2,0.2]])\nB_init=np.array([[0.05,0.1,0.15,0.7],[0.4,0.4,0.1,0.1],[0.6,0.2,0.2,0.2]])\npi_init=np.array([0.3,0.3,0.4])\nn_iter=[1,5,10,25,50,100,500]\nacc=np.zeros([k,len(n_iter)])\nk=30\nfor j in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,100)\n t=0\n for i in n_iter:\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,i,0,False)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[j,t]=np.mean(seq_hat==states)\n t+=1\nplt.plot(n_iter,np.mean(acc,axis=0))", "In this initial condition, we can see one feature of Baum-Welch Algorithm: Baum-Welch Algorithm tends to overfit the data, which is $P(Y|\\theta_{final})>P(Y|\\theta_{true})$.", "############INITIAL VALUES 2###############\nA_init=np.array([[0.5,0.25,0.25],[0.1,0.4,0.5],[0.25,0.1,0.65]])\nB_init=np.array([[0.3,0.4,0.2,0.1],[0.1,0.5,0.2,0.2],[0.1,0.1,0.4,0.4]])\npi_init=np.array([0.5,0.2,0.3])\nn_iter=[1,5,10,25,50,100,500]\nacc=np.zeros([k,len(n_iter)])\nk=30\nfor j in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,100)\n t=0\n for i in n_iter:\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,i,0,False)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[j,t]=np.mean(seq_hat==states)\n t+=1\nplt.plot(n_iter,np.mean(acc,axis=0))\n\n############INITIAL VALUES 3###############\nA_init=np.array([[0.2,0.6,0.2],[0.25,0.5,0.25],[0.6,0.2,0.2]])\nB_init=np.array([[0.3,0.4,0.2,0.1],[0.1,0.5,0.2,0.2],[0.1,0.1,0.4,0.4]])\npi_init=np.array([0.5,0.2,0.3])\nn_iter=[1,5,10,25,50,100,500]\nacc=np.zeros([k,len(n_iter)])\nk=30\nfor j in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,100)\n t=0\n for i in n_iter:\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,i,0,False)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[j,t]=np.mean(seq_hat==states)\n t+=1\nplt.plot(n_iter,np.mean(acc,axis=0))\n\n############INITIAL VALUES 4###############\nA_init=np.array([[0.5,0.25,0.25],[0.1,0.4,0.5],[0.25,0.1,0.65]])\nB_init=np.array([[0.05,0.1,0.15,0.7],[0.4,0.4,0.1,0.1],[0.6,0.2,0.2,0.2]])\npi_init=np.array([0.5,0.2,0.3])\nn_iter=[1,5,10,25,50,100,500]\nacc=np.zeros([k,len(n_iter)])\nk=30\nfor j in range(k):\n states,sequence=HMM.sim_HMM(A,B,pi,100)\n t=0\n for i in n_iter:\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,pi_init,\n sequence,i,0,False)\n seq_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc[j,t]=np.mean(seq_hat==states)\n t+=1\nplt.plot(n_iter,np.mean(acc,axis=0))", "In other situations, increasing the number of iterations in Baum-Welch Algorithm tends to better fit the data.\nApplications", "dat=pd.read_csv(\"data/weather-test2-1000.txt\",skiprows=1,header=None)\ndat.head(5)\n\nseq=dat[1].map({\"no\":0,\"yes\":1}).tolist()\nstates=dat[0].map({\"sunny\":0,\"rainy\":1,\"foggy\":2})\ninitial_A=np.array([[0.7,0.2,0.1],[0.3,0.6,0.1],[0.1,0.6,0.3]])\ninitial_B=np.array([[0.9,0.1],[0.1,0.9],[0.4,0.6]])\ninitial_pi=np.array([0.4,0.4,0.2])\nAhat,Bhat,pihat=HMM.Baum_Welch(initial_A,initial_B,initial_pi,seq,\n max_iter=100,threshold=0,scale=True)\nstates_hat=HMM.Viterbi(Ahat,Bhat,pihat,seq)\nprint(np.mean(states_hat==states))", "Comparative Analysis", 
"A=np.array([[0.1,0.5,0.4],[0.3,0.5,0.2],[0.7,0.2,0.1]])\nB=np.array([[0.1,0.1,0.1,0.7],[0.5,0.5,0,0],[0.7,0.1,0.1,0.1]])\npi=np.array([0.25,0.25,0.5])\nA_init=np.array([[0.2,0.6,0.2],[0.25,0.5,0.25],[0.6,0.2,0.2]])\nB_init=np.array([[0.05,0.1,0.15,0.7],[0.4,0.4,0.1,0.1],[0.6,0.2,0.2,0.2]])\npi_init=np.array([0.3,0.3,0.4])\nstates,sequence=HMM.sim_HMM(A,B,pi,100)", "Comparing Viterbi decoding", "mod=hmm.MultinomialHMM(n_components=3)\nmod.startprob_=pi\nmod.transmat_=A\nmod.emissionprob_=B\nres_1=mod.decode(np.array(sequence).reshape([100,1]))[1]\n\nres_2=HMM.Viterbi(A,B,pi,sequence)\n\nnp.array_equal(res_1,res_2)\n\n%timeit -n100 mod.decode(np.array(sequence).reshape([100,1]))\n%timeit -n100 HMM.Viterbi(A,B,pi,sequence)", "From the above we can see that we coded our Viterbi algorith correctly. But the time complexity is not good enought. When we check the source code of hmmlearn, we see that they used C to make things faster. In the future, we might want to use c++ to implement this algorithm and wrap it for python.\nComparing Baum-Welch Algorithm", "k=50\nacc=[]\nfor i in range(k):\n Ahat,Bhat,pihat=HMM.Baum_Welch(A_init,B_init,\n pi_init,sequence,max_iter=10,\n threshold=0,scale=True)\n states_hat=HMM.Viterbi(Ahat,Bhat,pihat,sequence)\n acc.append(np.mean(states_hat==states))\nplt.plot(acc)\n\nk=50\nacc=[]\nfor i in range(k):\n mod=hmm.MultinomialHMM(n_components=3)\n mod=mod.fit(np.array(sequence).reshape([100,1]))\n pred_states=mod.decode(np.array(sequence).reshape([100,1]))[1]\n acc.append(np.mean(pred_states==states))\nplt.plot(acc)", "From the above results, we can see that our version gives a stable estimate because we specify initial values for Baum-Welch algorithm. However, the mod.fit in hmmlearn does not take in any initial values. This makes their function easy to use. However, this action may adversely affect the results. According to the authors of this package, they are modifying their package so that users can input their prior belief." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/image_understanding/labs/cnn.ipynb
apache-2.0
[ "Convolutional Neural Network (CNN)\nLearning Objectives\n 1. We will learn how to configure our CNN to process inputs of CIFAR images\n 2. We will learn how to compile and train the CNN model\n 3. We will learn how to evaluate the CNN model\nIntroduction\nThis notebook demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images. Because this notebook uses the Keras Sequential API, creating and training our model will take just a few lines of code.\nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.", "# Use the chown command to change the ownership of the repository.\n!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst", "Import TensorFlow", "# Importing necessary TF version and modules\nimport tensorflow as tf\n\nfrom tensorflow.keras import datasets, layers, models\nimport matplotlib.pyplot as plt", "This notebook uses TF2.x. Please check your tensorflow version using the cell below.", "# Show the currently installed version of TensorFlow\nprint(tf.__version__)", "Download and prepare the CIFAR10 dataset\nThe CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.", "# Download the CIFAR10 dataset.\n(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()\n\n# Normalize pixel values to be between 0 and 1\ntrain_images, test_images = train_images / 255.0, test_images / 255.0", "Verify the data\nTo verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image.", "# Plot the first 25 images and display the class name below each image.\nclass_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',\n 'dog', 'frog', 'horse', 'ship', 'truck']\n\nplt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n # The CIFAR labels happen to be arrays, \n # which is why you need the extra index\n plt.xlabel(class_names[train_labels[i][0]])\nplt.show()", "Lab Task 1: Create the convolutional base\nThe 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.\nAs input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure our CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument input_shape to our first layer.", "# TODO 1 - Write a code to configure our CNN to process inputs of CIFAR images.\n", "Let's display the architecture of our model so far.", "# Now, print a useful summary of the model.\nmodel.summary()", "Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). 
Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer.\nAdd Dense layers on top\nTo complete our model, you will feed the last output tensor from the convolutional base (of shape (4, 4, 64)) into one or more Dense layers to perform classification. Dense layers take vectors as input (which are 1D), while the current output is a 3D tensor. First, you will flatten (or unroll) the 3D output to 1D, then add one or more Dense layers on top. CIFAR has 10 output classes, so you use a final Dense layer with 10 outputs and a softmax activation.", "# Here, the model.add() method adds a layer instance incrementally for a sequential model.\nmodel.add(layers.Flatten())\nmodel.add(layers.Dense(64, activation='relu'))\nmodel.add(layers.Dense(10))", "Here's the complete architecture of our model.", "# Print a useful summary of the model.\nmodel.summary()", "As you can see, our (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.\nLab Task 2: Compile and train the model", "# TODO 2 - Write a code to compile and train a model\n", "Lab Task 3: Evaluate the model", "# TODO 3 - Write a code to evaluate a model.\n\n\n# Print the test accuracy.\nprint(test_acc)", "Our simple CNN has achieved a test accuracy of over 70%. Not bad for a few lines of code! For another CNN style, see an example using the Keras subclassing API and a tf.GradientTape here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.13/_downloads/plot_read_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading epochs from a raw FIF file\nThis script shows how to read the epochs from a raw file given\na list of events. For illustration, we compute the evoked responses\nfor both MEG and EEG data by averaging all the epochs.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Matti Hamalainen <msh@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))\n\nevoked = epochs.average() # average epochs to get the evoked response", "Show result", "evoked.plot()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ajhenrikson/phys202-2015-work
assignments/assignment04/TheoryAndPracticeEx01.ipynb
mit
[ "Theory and Practice of Visualization Exercise 1\nImports", "from IPython.display import Image", "Graphical excellence and integrity\nFind a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.\n\nVox\nUpshot\n538\nBuzzFeed\n\nUpload the image for the visualization to this directory and display the image inline in this notebook.", "# Add your filename and uncomment the following line:\nImage(filename='main.0 (1).png')", "Describe in detail the ways in which the visualization exhibits graphical integrity and excellence:\nThe graphs are fairly clear and straightforward with units for the vertical columns but they do not hav any on the horizontal. It also includes a descripitve title which helps to make the purpose of graphs clear." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
CELMA-project/CELMA
MES/divOfScalarTimesVector/2a-divSource/calculations/exactSolutions.ipynb
lgpl-3.0
[ "Exact solution used in MES runs\nWe would like to MES the operation (in a cylindrical geometry)\n$$\n\\nabla \\cdot \\left(S_n\\frac{\\nabla_\\perp \\phi}{B}\\right)\n$$\nAs we have a homogenenous $B$-field, we have normalized it out, and remain with\n$$\n\\nabla \\cdot \\left(S_n\\nabla_\\perp \\phi\\right)\n$$", "%matplotlib notebook\n\nfrom sympy import init_printing\nfrom sympy import S\nfrom sympy import sin, cos, tanh, exp, pi, sqrt\n\nfrom boutdata.mms import x, y, z, t\nfrom boutdata.mms import Delp2, DDX, DDY, DDZ\n\nimport os, sys\n# If we add to sys.path, then it must be an absolute path\ncommon_dir = os.path.abspath('./../../../../common')\n# Sys path is a list of system paths\nsys.path.append(common_dir)\nfrom CELMAPy.MES import get_metric, make_plot, BOUT_print\n\ninit_printing()", "Initialize", "folder = '../twoGaussians/'\nmetric = get_metric()", "Define the variables", "# Initialization\nthe_vars = {}", "Define manifactured solutions\nWe have that\n$$S = \\nabla\\cdot(S_n\\nabla_\\perp\\phi) = S_n\\nabla_\\perp^2\\phi + \\nabla S_n\\cdot \\nabla_\\perp \\phi = S_n\\nabla_\\perp^2\\phi + \\nabla_\\perp S_n\\cdot \\nabla_\\perp \\phi$$\nWe will use the Delp2 operator for the perpendicular Laplace operator (as the y-derivatives vanishes in cylinder geometry). We have\nDelp2$(f)=g^{xx}\\partial_x^2 f + g^{zz}\\partial_z^2 f + 2g^{xz}\\partial_x\\partial_z f + G^1\\partial_x f + G^3\\partial_z f$\nUsing the cylinder geometry, we get that\nDelp2$(f)=\\partial_x^2 f + \\frac{1}{x^2}\\partial_z^2 f + \\frac{1}{x}\\partial_x f$\nFurther on, due to orthogonality we have that\n$$\\nabla_\\perp S_n\\cdot \\nabla_\\perp \\phi = \\mathbf{e}^i\\cdot \\mathbf{e}^i(\\partial_i S_n)(\\partial_i \\phi)\n = g^{xx}(\\partial_x S_n)(\\partial_x \\phi) + g^{zz}(\\partial_z S_n)(\\partial_z \\phi) = (\\partial_x S_n)(\\partial_x \\phi) + \\frac{1}{x^2}(\\partial_z S_n)(\\partial_z \\phi)$$\nThis gives\n$$S = \\nabla\\cdot(S_n\\nabla_\\perp\\phi) = S_n\\partial_x^2 \\phi + S_n\\frac{1}{x^2}\\partial_z^2 \\phi + S_n\\frac{1}{x}\\partial_x \\phi + (\\partial_x S_n)(\\partial_x \\phi) + \\frac{1}{x^2}(\\partial_z S_n)(\\partial_z \\phi)$$\nWe will use this to calculate the analytical solution.\nNOTE:\n\nz must be periodic\nThe field $f(\\rho, \\theta)$ must be of class infinity in $z=0$ and $z=2\\pi$\nThe field $f(\\rho, \\theta)$ must be single valued when $\\rho\\to0$\nThe field $f(\\rho, \\theta)$ must be continuous in the $\\rho$ direction with $f(\\rho, \\theta + \\pi)$\nEventual BC in $\\rho$ must be satisfied", "# We need Lx\nfrom boututils.options import BOUTOptions\nmyOpts = BOUTOptions(folder)\nLx = eval(myOpts.geom['Lx'])\n\n# Two normal gaussians\n\n# The gaussian\n# In cartesian coordinates we would like\n# f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2))\n# In cylindrical coordinates, this translates to\n# f = exp(-(1/(2*w^2))*(x^2 + y^2 + x0^2 + y0^2 - 2*(x*x0+y*y0) ))\n# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta)*cos(theta0)+sin(theta)*sin(theta0)) ))\n# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta - theta0)) ))\n\nw = 0.8*Lx\nrho0 = 0.3*Lx\ntheta0 = 5*pi/4\nthe_vars['phi'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))\n\nw = 0.5*Lx\nrho0 = 0.2*Lx\ntheta0 = 0\nthe_vars['S_n'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))", "Calculate the solution", "the_vars['S'] = the_vars['S_n']*Delp2(the_vars['phi'], metric=metric)\\\n + metric.g11*DDX(the_vars['S_n'], metric=metric)*DDX(the_vars['phi'], metric=metric)\\\n + 
metric.g33*DDZ(the_vars['S_n'], metric=metric)*DDZ(the_vars['phi'], metric=metric)", "Plot", "make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)", "Print the variables in BOUT++ format", "BOUT_print(the_vars, rational=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/mlops-on-gcp
examples/tfdv-structured-data/tfdv-covertype.ipynb
apache-2.0
[ "# Copyright 2019 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Analyzing structured data with Tensorflow Data Validation\nThis notebook demonstrates how TensorFlow Data Validation (TFDV) can be used to analyze and validate structured data, including generating descriptive statistics, inferring and fine tuning schema, checking for and fixing anomalies, and detecting drift and skew. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent. TFDV is the tool to achieve it.\nYou are going to use a variant of Cover Type dataset. For more information about the dataset refer to the dataset summary page.\nLab setup\nMake sure to set the Jupyter kernel for this notebook to tfx.\nImport packages and check the versions", "import os\nimport tempfile\nimport tensorflow as tf\nimport tensorflow_data_validation as tfdv\nimport time\n\nfrom apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions, StandardOptions, SetupOptions, DebugOptions, WorkerOptions\nfrom google.protobuf import text_format\nfrom tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2\n\nprint('TensorFlow version: {}'.format(tf.__version__))\nprint('TensorFlow Data Validation version: {}'.format(tfdv.__version__))", "Set the GCS locations of datasets used during the lab", "TRAINING_DATASET='gs://workshop-datasets/covertype/training/dataset.csv'\nTRAINING_DATASET_WITH_MISSING_VALUES='gs://workshop-datasets/covertype/training_missing/dataset.csv'\nEVALUATION_DATASET='gs://workshop-datasets/covertype/evaluation/dataset.csv'\nEVALUATION_DATASET_WITH_ANOMALIES='gs://workshop-datasets/covertype/evaluation_anomalies/dataset.csv'\nSERVING_DATASET='gs://workshop-datasets/covertype/serving/dataset.csv'", "Set the local path to the lab's folder.", "LAB_ROOT_FOLDER='/home/mlops-labs/lab-31-tfdv-structured-data'", "Configure GCP project, region, and staging bucket", "PROJECT_ID = 'mlops-workshop'\nREGION = 'us-central1'\nSTAGING_BUCKET = 'gs://{}-staging'.format(PROJECT_ID)", "Computing and visualizing descriptive statistics\nTFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.\nInternally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. 
For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.\nLet's start by using tfdv.generate_statistics_from_csv to compute statistics for the training data split.\nNotice that although your dataset is in Google Cloud Storage you will run you computation locally on the notebook's host, using the Beam DirectRunner. Later in the lab, you will use Cloud Dataflow to calculate statistics on a remote distributed cluster.", "train_stats = tfdv.generate_statistics_from_csv(\n data_location=TRAINING_DATASET_WITH_MISSING_VALUES\n)", "You can now use tfdv.visualize_statistics to create a visualization of your data. tfdv.visualize_statistics uses Facets that provides succinct, interactive visualizations to aid in understanding and analyzing machine learning datasets.", "tfdv.visualize_statistics(train_stats)", "The interactive widget you see is Facets Overview. \n- Numeric features and categorical features are visualized separately, including charts showing the distributions for each feature.\n- Features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature.\n- Try clicking \"expand\" above the charts to change the display\n- Try hovering over bars in the charts to display bucket ranges and counts\n- Try switching between the log and linear scales\n- Try selecting \"quantiles\" from the \"Chart to show\" menu, and hover over the markers to show the quantile percentages\nInfering Schema\nNow let's use tfdv.infer_schema to create a schema for the data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.\nInfer the schema from the training dataset statistics", "schema = tfdv.infer_schema(train_stats)\ntfdv.display_schema(schema=schema)", "In general, TFDV uses conservative heuristics to infer stable data properties from the statistics in order to avoid overfitting the schema to the specific dataset. It is strongly advised to review the inferred schema and refine it as needed, to capture any domain knowledge about the data that TFDV's heuristics might have missed.\nIn our case tfdv.infer_schema did not interpreted the Soil_Type and Cover_Type fields properly. Although both fields are encoded as integers, they should be interpreted as categorical rather than numeric. \nYou can use TFDV to manually update the schema including, specifing which features are categorical and which ones are quantitative and defining feature domains.\nFine tune the schema\nYou are going to modify the schema:\n- Particularize the Soil_Type and Cover_Type as categorical features. Notice that at this point you don't set the domain of Soil_Type as enumerating all possible values is not possible without a full scan of the dataset. 
After you re-generate the statistics using the correct feature designations you can retrieve the domain from the new statistics and finalize the schema\n- Set contstraints on the values of the Slope feature. You know that the slope is measured in degrees of arc and should be in the 0-90 range.", "tfdv.get_feature(schema, 'Soil_Type').type = schema_pb2.FeatureType.BYTES\ntfdv.set_domain(schema, 'Soil_Type', schema_pb2.StringDomain(name='Soil_Type', value=[]))\n\ntfdv.set_domain(schema, 'Cover_Type', schema_pb2.IntDomain(name='Cover_Type', min=1, max=7, is_categorical=True))\n\ntfdv.get_feature(schema, 'Slope').type = schema_pb2.FeatureType.FLOAT\ntfdv.set_domain(schema, 'Slope', schema_pb2.FloatDomain(name='Slope', min=0, max=90))\n\ntfdv.display_schema(schema=schema)", "Generate new statistics using the updated schema.", "stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)\n\ntrain_stats = tfdv.generate_statistics_from_csv(\n data_location=TRAINING_DATASET_WITH_MISSING_VALUES,\n stats_options=stats_options,\n)\n\ntfdv.visualize_statistics(train_stats)", "Finalize the schema\nThe train_stats object is a instance of the statistics_pb2 class, which is a Python wrapper around the statistics.proto protbuf. You can use the protobuf Python interface to retrieve the generated statistics, including the infered domains of categorical features.", "soil_type_stats = [feature for feature in train_stats.datasets[0].features if feature.path.step[0]=='Soil_Type'][0].string_stats\nsoil_type_domain = [bucket.label for bucket in soil_type_stats.rank_histogram.buckets]\n\ntfdv.set_domain(schema, 'Soil_Type', schema_pb2.StringDomain(name='Soil_Type', value=soil_type_domain))\ntfdv.display_schema(schema=schema)", "Creating statistics using Cloud Dataflow\nPreviously, you created descriptive statistics using local compute. This may work for smaller datasets. But for large datasets you may not have enough local compute power. The tfdv.generate_statistics_* functions can utilize DataflowRunner to run Beam processing on a distributed Dataflow cluster.\nTo run TFDV on Google Cloud Dataflow, the TFDV library must be must be installed on the Dataflow workers. There are different ways to install additional packages on Dataflow. You are going to use the Python setup.py file approach.\nYou also configure tfdv.generate_statistics_from_csv to use the final schema created in the previous steps.\nConfigure Dataflow settings\nCreate the setup.py configured to install TFDV.", "%%writefile setup.py\n\nfrom setuptools import setup\n\nsetup(\n name='tfdv',\n description='TFDV Runtime.',\n version='0.1',\n install_requires=[\n 'tensorflow_data_validation==0.15.0'\n ]\n)", "Regenerate statistics\nRe-generate the statistics using Dataflow and the final schema. 
You can monitor the job progress using Dataflow UI", "options = PipelineOptions()\noptions.view_as(GoogleCloudOptions).project = PROJECT_ID\noptions.view_as(GoogleCloudOptions).region = REGION\noptions.view_as(GoogleCloudOptions).job_name = \"tfdv-{}\".format(time.strftime(\"%Y%m%d-%H%M%S\"))\noptions.view_as(GoogleCloudOptions).staging_location = STAGING_BUCKET + '/staging/'\noptions.view_as(GoogleCloudOptions).temp_location = STAGING_BUCKET + '/tmp/'\noptions.view_as(StandardOptions).runner = 'DataflowRunner'\noptions.view_as(SetupOptions).setup_file = os.path.join(LAB_ROOT_FOLDER, 'setup.py')\n\nstats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)\n\ntrain_stats = tfdv.generate_statistics_from_csv(\n data_location=TRAINING_DATASET_WITH_MISSING_VALUES,\n stats_options=stats_options,\n pipeline_options=options,\n output_path=STAGING_BUCKET + '/output/'\n)\n\ntfdv.visualize_statistics(train_stats)", "Analyzing evaluation data\nSo far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.\nYou will now generate statistics for the evaluation split and visualize both training and evaluation splits on the same chart:\n\nThe training and evaluation datasets overlay, making it easy to compare them.\nThe charts now include a percentages view, which can be combined with log or the default linear scales.\nClick expand on the Numeric Features chart, and select the log scale. Review the Slope feature, and notice the difference in the max. Will that cause problems?", "stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)\n\neval_stats = tfdv.generate_statistics_from_csv(\n data_location=EVALUATION_DATASET_WITH_ANOMALIES,\n stats_options=stats_options\n)\n\ntfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,\n lhs_name='EVAL DATASET', rhs_name='TRAIN_DATASET')\n", "Checking for anomalies\nDoes our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.\nWhat would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset?", "anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)\ntfdv.display_anomalies(anomalies)", "Fixing evaluation anomalies in the schema\nIt looks like we have some new values for Soil_Type and some out-of-range values for Slope in our evaluation data, that we didn't have in our training data. Whever it should be considered anomaly, depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. 
Otherwise, we can simply update the schema to include the values in the eval dataset.\nIn our case, you are going to add the 5151 value to the domain of Soil_Type as 5151 is a valid USFS Ecological Landtype Units code representing the unspecified soil type. The out-of-range values in Slope are data errors and should be fixed at the source.", "tfdv.get_domain(schema, 'Soil_Type').value.append('5151')", "Re-validate with the updated schema", "updated_anomalies = tfdv.validate_statistics(eval_stats, schema)\ntfdv.display_anomalies(updated_anomalies)", "The unexpected string values error in Soil_Type is gone but the out-of-range error in Slope is still there. Let's pretend you have fixed the source and re-evaluate the evaluation split without corrupted Slope.", "stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)\n\neval_stats = tfdv.generate_statistics_from_csv(\n data_location=EVALUATION_DATASET,\n stats_options=stats_options\n)\nupdated_anomalies = tfdv.validate_statistics(eval_stats, schema)\ntfdv.display_anomalies(updated_anomalies)\n\ntfdv.display_schema(schema=schema)", "Schema environments\nIn supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In cases like that introducing slight schema variations is necessary.\nFor example, in this dataset the Cover_Type feature is included as the label for training, but it's missing in the serving data. If you validate the serving data statistics against the current schema you get an anomaly", "stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)\n\neval_stats = tfdv.generate_statistics_from_csv(\n data_location=SERVING_DATASET,\n stats_options=stats_options\n)\nserving_anomalies = tfdv.validate_statistics(eval_stats, schema)\ntfdv.display_anomalies(serving_anomalies)", "Environments can be used to address such scenarios. In particular, specific features in schema can be associated with specific environments.", "schema.default_environment.append('TRAINING')\nschema.default_environment.append('SERVING')\ntfdv.get_feature(schema, 'Cover_Type').not_in_environment.append('SERVING')", "If you validate the serving statistics against the serving environment in schema you will not get anomaly", "serving_anomalies = tfdv.validate_statistics(eval_stats, schema, environment='SERVING')\ntfdv.display_anomalies(serving_anomalies)", "Freezing the schema\nWhen the schema is finalized it can be saved as a textfile and managed under source control like any other code artifact.", "output_dir = os.path.join(tempfile.mkdtemp(),'covertype_schema')\n\ntf.io.gfile.makedirs(output_dir)\nschema_file = os.path.join(output_dir, 'schema.pbtxt')\ntfdv.write_schema_text(schema, schema_file)\n\n!cat {schema_file}" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mavillan/SciProg
01_intro/01_intro.ipynb
gpl-3.0
[ "<h1 align=\"center\">Scientific Programming in Python</h1>\n<h2 align=\"center\">Topic 1: Introduction and basic tools </h2>\n\nNotebook created by Martín Villanueva - martin.villanueva@usm.cl - DI UTFSM - April 2017.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy as sp", "Table of Contents\n\n1.- Anaconda\n2.- GIT\n3.- IPython\n4.- Jupyter Notebook\n5.- Inside Ipython and Kernels\n6.- Magics\n\n<div id='anaconda' />\n1.- Anaconda\nAlthough Python is an open-source, cross-platform language, installing it with the usual scientific packages used to be overly complicated. Fortunately, there is now an all-in-one scientific Python distribution, Anaconda (by Continuum Analytics), that is free, cross-platform, and easy to install. \nNote: There are other distributions and installation options (like Canopy, WinPython, Python(x, y), and others).\nWhy to use Anaconda:\n1. User level install of the version of python you want.\n2. Able to install/update packages completely independent of system libraries or admin privileges.\n3. No risk of messing up required system libraries.\n4. Comes with the conda manager which allows us to handle the packages and magage environments.\n5. It \"completely\" solves the problem of packages dependencies.\n6. Most important scientific packages (NumPy, SciPy, Scikit-Learn and others) are compiled with MKL support.\n7. Many scientific communities are using it!.\nNote: In this course we will use Python3.\nInstallation\nDownload installation script here. Run in a terminal:\nbash\n bash Anaconda3-4.3.1-Linux-x86_64.sh\nThen modify the PATH environment variable in your ~/.bashrc appending the next line:\nbash\nexport PATH=~/anaconda3/bin:$PATH\nRun source ~/.bashrc and test your installation by calling the python interpreter!\nConda and useful comands\n\nInstall packages\nbash\nconda install package_name\nUpdate packages\nbash\nconda update package_name\nconda update --all\nSearch packages\nbash\nconda search package_pattern\nClean Installation\nbash\nconda clean {--lock, --tarballs, --index-cache, --packages, --source-cache, --all}\n\nEnvironments\nIsolated distribution of packages.\n\nCreate an environments\nbash\nconda create --name env_name python=version packages_to_install_in_env\nconda create --name python2 python=2.7 anaconda\nconda create --name python26 python=2.6 python\nSwitch to environments\nbash\nsource activate env_name\nList all available environments\nbash\nconda info --envs\nDelete an environment\nbash\nconda remove --name env_name --all\n\nImportant Note: If you install packages with pip, they will be installed in the running environment.\nFor more info about conda see here\n<div id='git' />\n2.- Git\nGit is a version control system (VCS) for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for software development, but it can be used to keep track of changes in any files. 
As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows.\n\nOnline providers supporting Git include GitHub (https://github.com), Bitbucket (https://bitbucket.org), Google code (https://code.google.com), Gitorious (https://gitorious.org), and SourceForge (https://sourceforge.net).\nIn order to get your git repository ready for use, follow these instructions:\n\nCreate the project directory.\nbash\nmkdir project &amp;&amp; cd project\nInitialize the local directory as a Git repository.\nbash\ngit init\nAdd the files in your new local repository. This stages them for the first commit.\n```bash\ntouch README\ngit add .\n\nTo unstage a file, use 'git reset HEAD YOUR-FILE'.\n4. Commit the files that you've staged in your local repository.bash\ngit commit -m \"First commit\"\nTo remove this commit and modify the file, use 'git reset --soft HEAD~1' and commit and add the file again.\n5. Add the URL for the remote repository where your local repository will be pushed.bash\ngit remote add origin remote_repository_URL\nSets the new remote\ngit remote -v\nVerifies the new remote URL\n6. Push the changes in your local repository to GitHub.bash\ngit push -u origin master\n```\n<div id='ipython' />\n3.- IPython\nIPython its just an improved version of the standard Python shell, that provides tools for interactive computing in Python. \nHere are some cool features of IPython:\n\nBetter syntax highlighting.\nCode completion.\nDirect access to bash/linux commands (cd, ls, pwd, rm, mkdir, etc). Additional commands can be exectuted with: !command.\nwho command to see defined variables in the current session.\nInspect objects with ?.\nAnd magics, which we will see briefly. \n\n<div id='jupyter' />\n4.- Jupyter Notebook\nIt is a web-based interactive environment that combines code, rich text, images, videos, animations, mathematics, and plots into a single document. This modern tool is an ideal gateway to high-performance numerical computing and data science in Python.\nNew paragraph\nThis is rich text with links, equations:\n$$\n\\hat{f}(\\xi) = \\int_{-\\infty}^{+\\infty} f(x) \\, \\mathrm{e}^{-i \\xi x} dx,\n$$\ncode with syntax highlighting\npython\ndef fibonacci(n):\n if n &lt;= 1:\n return n\n else:\n return fibonacci(n-1) + fibonacci(n-2)\nimages: \nand plots:", "xgrid = np.linspace(-3,3,50)\nf1 = np.exp(-xgrid**2)\nf2 = np.tanh(xgrid)\nplt.figure(figsize=(8,6))\nplt.plot(xgrid, f1, 'bo-')\nplt.plot(xgrid, f2, 'ro-')\nplt.title('Just a demo plot')\nplt.grid()\nplt.show()", "IPython also comes with a sophisticated display system that lets us insert rich web elements in the notebook. Here you can see an example of how to add Youtube videos in a notebook", "from IPython.display import YouTubeVideo\nYouTubeVideo('HrxX9TBj2zY')", "<div id='inside' />\n5.- Inside Ipython and Kernels\nThe IPython Kernel is a separate IPython process which is responsible for running user code, and things like computing possible completions. Frontends, like the notebook or the Qt console, communicate with the IPython Kernel using JSON messages sent over ZeroMQ sockets.\nThe core execution machinery for the kernel is shared with terminal IPython:\n\nA kernel process can be connected to more than one frontend simultaneously. In this case, the different frontends will have access to the same variables.\nThe Client-Server architecture\nThe Notebook frontend does something extra. 
In addition to running your code, it stores code and output, together with markdown notes, in an editable document called a notebook. When you save it, this is sent from your browser to the notebook server, which saves it on disk as a JSON file with a .ipynb extension.\n\nThe notebook server, not the kernel, is responsible for saving and loading notebooks, so you can edit notebooks even if you don’t have the kernel for that language —you just won’t be able to run code. The kernel doesn’t know anything about the notebook document: it just gets sent cells of code to execute when the user runs them.\nOthers Kernels\nThere are two ways to develop a kernel for another language. Wrapper kernels reuse the communications machinery from IPython, and implement only the core execution part. Native kernels implement execution and communications in the target language:\n\nNote: To see a list of all available kernels (and installation instructions) visit here.\nConvert notebooks to other formats\nIt is also possible to convert the original JSON notebook to the following formats: html, latex, pdf, rst, markdown and script. For that you must run\nbash\njupyter-nbconvert --to FORMAT notebook.ipynb\nwith FORMAT as one of the above options. Lets convert this notebook to htlm!\n<div id='magics' />\n6.- Magics\nIPython magics are custom commands that let you interact with your OS and filesystem. There are line magics % (which just affect the behavior of such line) and cell magics %% (which affect the whole cell). \nHere we test some useful magics:", "# this will list all magic commands\n%lsmagic\n\n# also work in ls, cd, mkdir, etc\n%pwd\n\n%history\n\n# this will execute and show the output of the program\n%run ./hola_mundo.py\n\ndef naive_loop():\n for i in range(100):\n for j in range(100):\n for k in range(100):\n a = 1+1\n return None\n\n%timeit -n 10 naive_loop()\n\n%time naive_loop()\n\n%%bash\ncd ..\nls", "lets you capture the standard output and error output of some code into a Python variable. \n Here is an example (the outputs are captured in the output Python variable).", "%%capture output\n!ls\n\n%%writefile myfile.txt\nHolanda que talca!\n\n!cat myfile.txt\n!rm myfile.txt", "Writting our own magics!\nIn this section we will create a new cell magic that compiles and executes C++ code in the Notebook.", "from IPython.core.magic import register_cell_magic", "To create a new cell magic, we create a function that takes a line (containing possible options) and a cell's contents as its arguments, and we decorate it with @register_cell_magic.", "@register_cell_magic\ndef cpp(line, cell):\n \"\"\"Compile, execute C++ code, and return the\n standard output.\"\"\"\n # We first retrieve the current IPython interpreter instance.\n ip = get_ipython()\n # We define the source and executable filenames.\n source_filename = '_temp.cpp'\n program_filename = '_temp'\n # We write the code to the C++ file.\n with open(source_filename, 'w') as f:\n f.write(cell)\n # We compile the C++ code into an executable.\n compile = ip.getoutput(\"g++ {0:s} -o {1:s}\".format(\n source_filename, program_filename))\n # We execute the executable and return the output.\n output = ip.getoutput('./{0:s}'.format(program_filename))\n print('\\n'.join(output))\n\n%%cpp\n#include<iostream>\nint main() \n{\n std::cout << \"Hello world!\";\n}", "This cell magic is currently only available in your interactive session. To distribute it, you need to create an IPython extension. 
This is a regular Python module or package that extends IPython.\nTo create an IPython extension, copy the definition of the cpp() function (without the decorator) to a Python module, named cpp_ext.py for example. Then, add the following at the end of the file:\npython\ndef load_ipython_extension(ipython):\n \"\"\"This function is called when the extension is loaded.\n It accepts an IPython InteractiveShell instance.\n We can register the magic with the `register_magic_function`\n method of the shell instance.\"\"\"\n ipython.register_magic_function(cpp, 'cell')\nThen, you can load the extension with %load_ext cpp_ext. The cpp_ext.py file needs to be in the PYTHONPATH, for example in the current directory.", "%load_ext cpp_ext" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
agile-geoscience/bruges
docs/_userguide/_A_quick_wedge_model.ipynb
apache-2.0
[ "A quick wedge model\nLet's make a quick wedge model and its associated synthetic.\nWe'll do a zero-offset (aka normal incidence) synthetic, and a full offset one.\n\nZero offset\nWe can produce a simple wedge model just by calling the wedge() function.", "import matplotlib.pyplot as plt\nimport bruges as bg\n\nw, top, base, ref = bg.models.wedge()\n\nplt.imshow(w, interpolation='none')\nplt.axvline(ref, color='k', ls='--')\nplt.plot(top, 'r-', lw=4)\nplt.plot(base, 'r-', lw=4)\nplt.show()", "You can then use this integer model to index into an array of rock properties:", "import numpy as np\n\nvps = np.array([2320, 2350, 2350])\nrhos = np.array([2650, 2600, 2620])", "We can use these to make vp and rho earth models. We can use NumPy’s fancy indexing by passing our array of indicies to access the rock properties (in this case acoustic impedance) for every element at once.", "vp = vps[w]\nrho = rhos[w]", "Each of these new arrays is the shape of the model, but is filled with a rock property:", "vp.shape\n\nvp[:5, :5]", "Now we can create the reflectivity profile:", "rc = bg.reflection.acoustic_reflectivity(vp, rho)", "Then make a wavelet and convolve it with the reflectivities:", "ricker, _ = bg.filters.ricker(duration=0.064, dt=0.001, f=40)\n\nsyn = bg.filters.convolve(rc, ricker)\n\nsyn.shape", "The easiest way to check everything worked is probably to plot it.", "fig, axs = plt.subplots(figsize=(17, 4), ncols=5,\n gridspec_kw={'width_ratios': (4, 4, 4, 1, 4)})\naxs[0].imshow(w)\naxs[0].set_title('Wedge model')\naxs[1].imshow(vp * rho) \naxs[1].set_title('Impedance')\naxs[2].imshow(rc)\naxs[2].set_title('Reflectivity')\naxs[3].plot(ricker, np.arange(ricker.size))\naxs[3].axis('off')\naxs[3].set_title('Wavelet')\naxs[4].imshow(syn)\naxs[4].set_title('Synthetic')\naxs[4].plot(top, 'w', alpha=0.5)\naxs[4].plot(base, 'w', alpha=0.5)\nplt.show()", "Alternative workflow\nIn the last example, we made an array of integers, then used indexing to place rock properties in the array, using the index as a sort of look-up.\nBut we could make the impedance model directly, passing rock properties in to the wedge() function via teh strat argument. It just depends how you want to make your models. \nThe strat argument was the default [0, 1, 2] in the last example. Let's pass in the rock properties instead.", "vps = np.array([2320, 2350, 2350])\nrhos = np.array([2650, 2600, 2620])\n\nimpedances = vps * rhos\n\nw, top, base, ref = bg.models.wedge(strat=impedances)", "And look at the result:", "plt.imshow(w, interpolation='none') \nplt.axvline(ref, color='k', ls='--')\nplt.plot(top, 'r-', lw=4)\nplt.plot(base, 'r-', lw=4)\nplt.colorbar()\nplt.show()", "Now the wedge contains rock properties, not integer labels.\nOffset reflectivity\nLet's make things more realistic by computing offset reflectivities, not just normal incidence (acoustic) reflectivity. We'll need Vs as well:", "vps = np.array([2320, 2350, 2350])\nvss = np.array([1150, 1250, 1200])\nrhos = np.array([2650, 2600, 2620])", "We need the model with integers like 0, 1, 2 again:", "w, top, base, ref = bg.models.wedge()", "Index to get the property models:", "vp = vps[w]\nvs = vss[w]\nrho = rhos[w]", "Compute the reflectivity for angles up to 45 degrees:", "rc = bg.reflection.reflectivity(vp, vs, rho, theta=range(46))\n\nrc.shape", "The result is three-dimensional: the angles are in the first dimension. 
So the zero-offset reflectivities are in rc[0] and 30 degrees is at rc[30].\nOr, you can slice this cube in another orientation and see how reflectivity varies with angle:", "plt.imshow(rc.real[:, :, 50].T)", "Notice that we're looking only at the real components (offset reflectivities are complex numbers), and we have to transpose the array to get it the right way around.\n\n&copy; 2022 Agile Scientific, licensed CC-BY / Apache 2.0" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tesera/pygypsy
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
mit
[ "Recap\nIn order of priority/time taken\n\npandas init dict\nbasal_area_aw_df = pd.DataFrame(columns=['BA_Aw'], index=xrange(max_age))\nfind a faster way to create this data frame\nrelax the tolerance for aspen\n\n\npandas set item\nuse at method \nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#fast-scalar-value-getting-and-setting\n\n\nlambdas\nuse cython for the gross tot vol and merch vol functions\nmight be wise to refactor these first to have conventional names, keyword arguments, and a base implementation to get rid of the boilerplate\ndon't be deceived - the callable is a miniscule portion; series.getitem is taking most of the time\nagain, using .at here would probably be a significant improvement\n\n\nbasalareaincremementnonspatialaw\nthis is actually slow because of the number of times the BAFromZeroToDataAw function is called as shown above\nrelaxing the tolerance may help\nindeed the tolerance is 0.01 * some value while the other factor finder functions have 0.1 tolerance i think\ncan also use cython for the increment functions\n\n\n\ndo a profiling run with IO (of reading input data and writing the plot curves to files) in next run\nCharacterize what is happening\nIndexing with df[] or series[] is slow for scalars (lambdas, pandas set)\nbasalareaincrement is running a lot for aw, use the same tolerance as is used for other species\nmerchvol, increment, and gross vol functions use pure python. cython would be effective.\nDecide on the action\n\nuse same tolerance for aw as other species\nuse at instead of [] or ix? - compare these in MWE\ncreating data frame is slow, maybe because its fromdict. see if this can be improved\n\nMWEs", "import pandas as pd\nimport numpy as np", "init from dict and xrange index vs from somethign else\nTimings", "%%timeit\nd = pd.DataFrame(columns=['A'], index=xrange(1000))\n\n%%timeit\nd = pd.DataFrame(columns=['A'], index=xrange(1000), dtype='float')\n\n%%timeit\nd = pd.DataFrame({'A': np.zeros(1000)})", "The problem here is that dataframe init being called 7000 times because of the aw ba factor finder\nMaybe it's not worth using a data frame here. use a list or numpy and then convert to dataframe when the factor is found, e.g.:", "%%timeit\nfor _ in xrange(5000):\n d = pd.DataFrame(columns=['A'], index=xrange(1000))\n\n%%timeit\nfor _ in xrange(5000):\n d = np.zeros(1000)", "Review the code to see how this can be applied\nThe numpy/purepython approach as potential\nBut there's a couple issues for which the code must be examined\nThe problem comes from the following call chain\nsimulate_forwards_df (called 1x) ->\nget_factors_for_all_species (called 10x, 1x per plot) ->\nBAfactorFinder_Aw (called 2x, 1x per plot that has aw) ->\nBAfromZeroToDataAw (called 7191 times, most of which in this chain) -> \nDataFrame.__init__ (called 7932 times, most of which in this chain) ... \nwhy does BAfromZeroToDataAw create a dataframe? It's good to see the code:\nFirst, simulate_forwards_df calls get_factors_for_all_species and then BAfromZeroToDataAw with some parameters and simulation choice of false\nNote that when simulation==False, that is the only time that the list is created. otherwise the list is left empty.\nNote also that simulation_choice defaults to True in forward simulation, i.e. 
for when BAfromZeroToData__ are called from forward simulation.\nget_factors_for_all_species calls factor finder functions for each species, if the species is present, and returns a dict of the factors\nBAfactorFinder_Aw is the main suspect, for some reason aspen has a harder time converging, so the loop in this function runs many times\nIt calls BAfromZeroToDataAw with simulation_choice of 'yes' and simulation=True BUT IT ONLY USES THE 1ST RETURN VALUE\nslow lambdas\nbelow is left here for the record, but the time is actually spent in getitem, not so much in the callables applied, that is an easy fix\nWith the df init improved by using np array, the next suspect is the lambdas. The method for optimizing is generally to use cython, the functions themselves can be examined for opportunities:\nthey are pretty basic - everything is a float.\n``` python\ndef MerchantableVolumeAw(N_bh_Aw, BA_Aw, topHeight_Aw, StumpDOB_Aw,\n StumpHeight_Aw, TopDib_Aw, Tvol_Aw):\n # ...\n if N_bh_Aw > 0:\n k_Aw = (BA_Aw * 10000.0 / N_bh_Aw)**0.5\n else:\n k_Aw = 0\nif k_Aw > 0 and topHeight_Aw > 0:\n b0 = 0.993673\n b1 = 923.5825\n b2 = -3.96171\n b3 = 3.366144\n b4 = 0.316236\n b5 = 0.968953\n b6 = -1.61247\n k1 = Tvol_Aw * (k_Aw**b0)\n k2 = (b1* (topHeight_Aw**b2) * (StumpDOB_Aw**b3) * (StumpHeight_Aw**b4) * (TopDib_Aw**b5) * (k_Aw**b6)) + k_Aw\n MVol_Aw = k1/k2\nelse:\n MVol_Aw = 0\n\nreturn MVol_Aw\n\n```\n``` python\ndef GrossTotalVolume_Aw(BA_Aw, topHeight_Aw):\n # ...\n Tvol_Aw = 0\nif topHeight_Aw > 0:\n a1 = 0.248718\n a2 = 0.98568\n a3 = 0.857278\n a4 = -24.9961\n Tvol_Aw = a1 * (BA_Aw**a2) * (topHeight_Aw**a3) * numpy.exp(1+(a4/((topHeight_Aw**2)+1)))\n\nreturn Tvol_Aw\n\n```\nTimings for getitem\nThere are a few ways to get an item from a series:", "d = pd.Series(np.random.randint(0,100, size=(100)), index=['%d' %d for d in xrange(100)])\n\n%%timeit\nd['1']\n\n%%timeit \nd.at['1']\n\n%%timeit\nd.loc['1']", "loc or at are faster than [] indexing.\nRevise the code\nGo on. Do it.\nReview code changes", "%%bash\ngit log --since 2016-11-09 --oneline\n\n! git diff HEAD~23 ../gypsy", "Tests\nDo tests still pass?\nRun timings", "%%bash\n# git checkout dev\n# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp\n# rm -rfd tmp\n\n# real\t8m18.753s\n# user\t8m8.980s\n# sys\t0m1.620s\n\n%%bash\n# after factoring dataframe out of zerotodata functions\n# git checkout -b da080a79200f50d2dda7942c838b7f3cad845280 df-factored-out-zerotodata\n# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp\n# rm -rfd tmp\n\n# real\t5m51.028s\n# user\t5m40.130s\n# sys\t0m1.680s", "Removing the data frame init gets a 25% time reduction", "%%bash\n# after using a faster indexing method for the arguments put into the apply functions\n# git checkout 6b541d5fb8534d6fb055961a9d5b09e1946f0b46 -b applys-use-faster-getitem\n# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp\n# rm -rfd tmp\n\n# real\t6m16.021s\n# user\t5m59.620s\n# sys\t0m2.030s", "Hm, this actually got worse, although it is a small sample... if anything I suspect it's because we're calling row.at[] instead of assigning the variable outside the loop. 
It's ok as the code has less repetition, it's a good tradeoff.", "%%bash\n# after fixing `.at` redundancy - calling it in each apply call\n# git checkout 4c978aff110001efdc917ed60cb611139e1b54c9 -b remove-getitem-redundancy\n# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp\n# rm -rfd tmp\n\n# real\t5m36.407s\n# user\t5m25.740s\n# sys\t0m2.140s", "It doesn't totally remove redundancy, we still get an attr/value of an object, but now it's a dict instead of a pandas series. Hopefully it's faster. Should have tested first using MWE.\nIt is moderately faster. Not much.\nLeave cython optimization for next iteration\nRun profiling", "from gypsy.forward_simulation import simulate_forwards_df\n\ndata = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)\n\n%%prun -D forward-sim-2.prof -T forward-sim-2.txt -q\nresult = simulate_forwards_df(data)\n\n!head forward-sim-2.txt", "Compare performance visualizations\nNow use either of these commands to visualize the profiling\n```\npyprof2calltree -k -i forward-sim-1.prof forward-sim-1.txt\nor\ndc run --service-ports snakeviz notebooks/forward-sim-1.prof\n```\nOld\n\nNew\n\nSummary of performance improvements\nforward_simulation is now 2x faster than last iteration, 8 times in total, due to the changes outlined in the code review section above\non my hardware, this takes 1000 plots to ~4 minutes\non Carol's hardware, this takes 1000 plots to ~13 minutes\nFor 1 million plots, we're looking at 2 to 9 days on desktop hardware\nProfile with I/O", "! rm -rfd gypsy-output\n\noutput_dir = 'gypsy-output'\n\n%%prun -D forward-sim-2.prof -T forward-sim-2.txt -q\n# restart the kernel first\ndata = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)\nresult = simulate_forwards_df(data)\nos.makedirs(output_dir)\nfor plot_id, df in result.items():\n filename = '%s.csv' % plot_id\n output_path = os.path.join(output_dir, filename)\n df.to_csv(output_path)\n", "Identify new areas to optimize\n\nfrom last time:\nparallel (3 cores) gets us to 2 - 6 days - save for last\nAWS with 36 cores gets us to 4 - 12 hours ($6.70 - $20.10 USD on a c4.8xlarge instance in US West Region)\nAWS Lambda and split up the data \n\n\nnow:\ngetting items in apply is still slow - vectorize the functions\ncython for increment functions, especially BA" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bjshaw/phys202-2015-work
assignments/assignment11/OptimizationEx01.ipynb
mit
[ "Optimization Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt", "Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:", "def hat(x,a,b):\n v = -a*x**2+b*x**4\n return v\n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0", "Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:", "a = 5.0\nb = 1.0\n\nv = []\nx = np.linspace(-3,3,50)\nfor i in x:\n v.append(hat(i,5.0,1.0))\nplt.figure(figsize=(7,5))\nplt.plot(x,v)\nplt.tick_params(top=False,right=False,direction='out')\nplt.xlabel('x')\nplt.ylabel('V(x)')\nplt.title('V(x) vs. x');\n\nassert True # leave this to grade the plot", "Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.", "x1=opt.minimize(hat,-1.8,args=(5.0,1.0))['x']\nx2=opt.minimize(hat,1.8,args=(5.0,1.0))['x']\n\nprint(x1,x2)\n\nv = []\nx = np.linspace(-3,3,50)\nfor i in x:\n v.append(hat(i,5.0,1.0))\nplt.figure(figsize=(7,5))\nplt.plot(x,v)\nplt.scatter(x1,hat(x1,5.0,1.0),color='r',label='Local Minima')\nplt.scatter(x2,hat(x2,5.0,1.0),color='r')\nplt.tick_params(top=False,right=False,direction='out')\nplt.xlabel('x')\nplt.ylabel('V(x)')\nplt.xlim(-3,3)\nplt.ylim(-10,35)\nplt.legend()\nplt.title('V(x) vs. x');\n\nassert True # leave this for grading the plot", "To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters.\nTo find the minima of the equation $V(x) = -a x^2 + b x^4$, we first have to find the $x$ values where the slope is $0$.\nTo do this, we first compute the derivative, $V'(x)=-2ax+4bx^3$\nThen we set $V'(x)=0$ and solve for $x$ with our parameters $a=5.0$ and $b=1.0$\n$\\hspace{15 mm}$$0=-10x+4x^3$ $\\Rightarrow$ $10=4x^2$ $\\Rightarrow$ $x^{2}=\\frac{10}{4}$ $\\Rightarrow$ $x=\\pm \\sqrt{\\frac{10}{4}}$\nComputing $x$:", "x_1 = np.sqrt(10/4)\nx_2 = -np.sqrt(10/4)\n\nprint(x_1,x_2)", "We see that the locations computed with scipy.optimize.minimize are very close to the locations computed analytically." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kaleoyster/nbi-data-science
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
gpl-2.0
[ "Libraries and Packages", "import pymongo\nfrom pymongo import MongoClient\nimport time\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nfrom matplotlib.pyplot import *\nimport matplotlib.pyplot as plt\nimport folium\nimport datetime as dt\nimport random as rnd\nimport warnings\nimport datetime as dt\nimport csv\n%matplotlib inline", "Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance", "warnings.filterwarnings(action=\"ignore\")\nClient = MongoClient(\"mongodb://bridges:readonly@nbi-mongo.admin/bridge\")\ndb = Client.bridge\ncollection = db[\"bridges\"]", "Extracting Data of Southwest states of the United states from 1992 - 2016.\nThe following query will extract data from the mongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countryCode, structure type, type of wearing surface, and subtructure.", "def getData(state):\n pipeline = [{\"$match\":{\"$and\":[{\"year\":{\"$gt\":1991, \"$lt\":2017}},{\"stateCode\":state}]}},\n {\"$project\":{\"_id\":0,\n \"structureNumber\":1,\n \"yearBuilt\":1,\n \"yearReconstructed\":1,\n \"deck\":1, ## Rating of deck\n \"year\":1,\n 'owner':1,\n \"countyCode\":1,\n \"substructure\":1, ## Rating of substructure\n \"superstructure\":1, ## Rating of superstructure\n \"Structure Type\":\"$structureTypeMain.typeOfDesignConstruction\",\n \"Type of Wearing Surface\":\"$wearingSurface/ProtectiveSystem.typeOfWearingSurface\",\n }}]\n dec = collection.aggregate(pipeline)\n conditionRatings = pd.DataFrame(list(dec))\n\n ## Creating new column: Age\n conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']\n \n return conditionRatings\n", "Filteration of NBI Data\nThe following routine removes the missing data such as 'N', 'NA' from deck, substructure,and superstructure , and also removing data with structure Type - 19 and type of wearing surface - 6.", "## filter and convert them into interger\ndef filterConvert(conditionRatings):\n before = len(conditionRatings)\n print(\"Total Records before filteration: \",len(conditionRatings))\n conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]\n conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]\n after = len(conditionRatings)\n print(\"Total Records after filteration: \",len(conditionRatings))\n print(\"Difference: \", before - after)\n return conditionRatings\n\n", "Particularly in the area of determining a deterioration model of bridges, There is an observed sudden increase in condition ratings of bridges over the period of time, This sudden increase in the condition rating is attributed to the reconstruction of the bridges. NBI dataset contains an attribute to record this reconstruction of the bridge. An observation of an increase in condition rating of bridges over time without any recorded information of reconstruction of that bridge in NBI dataset suggests that dataset is not updated consistently. 
In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted in the deterioration model of the bridges.", "\ndef findSurvivalProbablities(conditionRatings):\n \n i = 1\n j = 2\n probabilities = []\n while j < 121:\n v = list(conditionRatings.loc[conditionRatings['Age'] == i]['deck'])\n k = list(conditionRatings.loc[conditionRatings['Age'] == i]['structureNumber'])\n Age1 = {key:int(value) for key, value in zip(k,v)}\n #v = conditionRatings.loc[conditionRatings['Age'] == j]\n\n v_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['deck'])\n k_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['structureNumber'])\n Age2 = {key:int(value) for key, value in zip(k_2,v_2)}\n\n\n intersectedList = list(Age1.keys() & Age2.keys())\n reconstructed = 0\n for structureNumber in intersectedList:\n if Age1[structureNumber] < Age2[structureNumber]:\n if (Age1[structureNumber] - Age2[structureNumber]) < -1:\n reconstructed = reconstructed + 1\n try:\n probability = reconstructed / len(intersectedList)\n except ZeroDivisionError:\n probability = 0\n\n probabilities.append(probability*100)\n\n i = i + 1\n j = j + 1\n \n return probabilities\n", "A utility function to plot the graphs.", "def plotCDF(cumsum_probabilities):\n fig = plt.figure(figsize=(15,8))\n ax = plt.axes()\n\n plt.title('CDF of Reonstruction Vs Age')\n plt.xlabel('Age')\n plt.ylabel('CDF of Reonstruction')\n plt.yticks([0,10,20,30,40,50,60,70,80,90,100])\n plt.ylim(0,100)\n\n x = [i for i in range(1,120)]\n y = cumsum_probabilities\n ax.plot(x,y)\n return plt.show()\n\n", "The following script will select all the bridges in the Southwest United States, filter missing and not required data. The script also provides information of how much of the data is being filtered.", "states = ['48','40','35','04']\n\n# Mapping state code to state abbreviation \nstateNameDict = {'25':'MA',\n '04':'AZ',\n '08':'CO',\n '38':'ND',\n '09':'CT',\n '19':'IA',\n '26':'MI',\n '48':'TX',\n '35':'NM',\n '17':'IL',\n '51':'VA',\n '23':'ME',\n '16':'ID',\n '36':'NY',\n '56':'WY',\n '29':'MO',\n '39':'OH',\n '28':'MS',\n '11':'DC',\n '21':'KY',\n '18':'IN',\n '06':'CA',\n '47':'TN',\n '12':'FL',\n '24':'MD',\n '34':'NJ',\n '46':'SD',\n '13':'GA',\n '55':'WI',\n '30':'MT',\n '54':'WV',\n '15':'HI',\n '32':'NV',\n '37':'NC',\n '10':'DE',\n '33':'NH',\n '44':'RI',\n '50':'VT',\n '42':'PA',\n '05':'AR',\n '20':'KS',\n '45':'SC',\n '22':'LA',\n '40':'OK',\n '72':'PR',\n '41':'OR',\n '27':'MN',\n '53':'WA',\n '01':'AL',\n '31':'NE',\n '02':'AK',\n '49':'UT'\n }\n\ndef getProbs(states, stateNameDict):\n # Initializaing the dataframes for deck, superstructure and subtructure\n df_prob_recon = pd.DataFrame({'Age':range(1,61)})\n df_cumsum_prob_recon = pd.DataFrame({'Age':range(1,61)})\n \n\n for state in states:\n conditionRatings_state = getData(state)\n stateName = stateNameDict[state]\n print(\"STATE - \",stateName)\n conditionRatings_state = filterConvert(conditionRatings_state)\n print(\"\\n\")\n probabilities_state = findSurvivalProbablities(conditionRatings_state)\n cumsum_probabilities_state = np.cumsum(probabilities_state)\n \n df_prob_recon[stateName] = probabilities_state[:60]\n df_cumsum_prob_recon[stateName] = cumsum_probabilities_state[:60]\n \n #df_prob_recon.set_index('Age', inplace = True)\n #df_cumsum_prob_recon.set_index('Age', inplace = True)\n \n return df_prob_recon, df_cumsum_prob_recon\n \ndf_prob_recon, df_cumsum_prob_recon = getProbs(states, 
stateNameDict)\n\ndf_prob_recon.to_csv('prsouthwest.csv')\ndf_cumsum_prob_recon.to_csv('cprsouthwest.csv')", "The following figures show the cumulative distribution function (CDF) of the probability of reconstruction over the lifespan of bridges in the Southwest United States; as the bridges grow older, the probability of reconstruction increases.", "plt.figure(figsize=(12,8))\nplt.title(\"CDF Probability of Reconstruction vs Age\")\n\npalette = [\n 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\nlinestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):\n \n plt.plot(df_cumsum_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)\n \nplt.xlabel('Age'); plt.ylabel('Probability of Reconstruction'); \nplt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)\nplt.ylim(1,100)\nplt.show()", "The figure below presents the CDF of the probability of reconstruction for each individual state in the Southwest United States.", "plt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\n# multiple line plot\nnum = 1\nlinestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor n, column in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_cumsum_prob_recon['Age'], df_cumsum_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,100)\n \n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 50, 'Probability', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"CDF Probability of Reconstruction vs Age\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n ", "The following figures provide the probability of reconstruction at every age. Note that this is not a cumulative probability function. The roughly constant number of bridge reconstructions each year can be explained by various factors.\nOne particularly interesting reason could be the funding provided to reconstruct bridges, which would explain why some of the states show an almost perfectly linear curve.", "plt.figure(figsize=(12,8))\nplt.title(\"Probability of Reconstruction vs Age\")\n\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\nlinestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):\n \n plt.plot(df_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)\n \nplt.xlabel('Age'); plt.ylabel('Probability of Reconstruction'); \nplt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)\nplt.ylim(1,25)\nplt.show()", "A key observation in this investigation of several states is that a roughly constant number of bridges are reconstructed every year, which could be an effect of the fixed budget allocated for reconstruction by each state. 
This also highlights the fact that not all bridges that might require reconstruction are actually reconstructed.\nTo understand this phenomenon more clearly, the following figure presents the probability of reconstruction vs age for each individual state in the Southwest United States.", "plt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\n# multiple line plot\nnum = 1\nlinestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor n, column in enumerate(df_prob_recon.drop('Age', axis=1)):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_prob_recon['Age'], df_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,25)\n \n \n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 12.5, 'Probability', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Probability of Reconstruction vs Age\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PMEAL/OpenPNM-Examples
PaperRecreations/Wu2010_part_a.ipynb
mit
[ "Example: Regenerating Data from\nR. Wu et al. / Elec Acta 54 25 (2010) 7394–7403\nImport the modules", "import openpnm as op\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport numpy as np\nimport openpnm.models.geometry as gm\nimport openpnm.topotools as tt\n%matplotlib inline", "Set the workspace loglevel to not print anything", "wrk = op.Workspace()\nwrk.loglevel=50", "As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel", "%run shared_funcs.ipynb", "The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. the network size is doubled in the z direction for percolation but the diffusion calculation is effectively only calculated on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one.\nWe can inspect the source in this notebook by running a code cell with the following: simulation??\nRun the simulation once for a network of size 8 x 8 x 8", "x_values, y_values = simulation(n=8)\n\nplt.figure()\nplt.plot(x_values, y_values, 'ro')\nplt.title('normalized diffusivity versus saturation')\nplt.xlabel('saturation')\nplt.ylabel('normalized diffusivity')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GeosoftInc/gxpy
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
bsd-2-clause
[ "Copyright (c) 2018 Geosoft Inc.\nhttps://github.com/GeosoftInc/gxpy\nBSD 2-clause License\nWorking with grids and images\nLessons\n<!--- # Run this from a code cell to create TOC markdown: -->\n<!--- import geosoft.gxpy.utility; print(geosoft.gxpy.utility.jupyter_markdown_toc('grids and images')) -->\n\nWhat is a grid?\nImports, Geosoft context, get data from GitHub\nConvert a grid from one format to another\nWorking with Grid instances\nDisplaying a grid\nGrid Coordinate System\nDisplay with coordinate systems\nBasic Grid Statistics\nGrid Iterator\n\nWhat is a grid?\nA grid is a form of spatial data that represents information (such as a gravity intensity, a magnetic reading, or a colour) at points organized as a 2-dimensional array on a right-handed cartesian plane:\n<img src=\"https://github.com/GeosoftInc/gxpy/raw/9.3/examples/tutorial/Grids%20and%20Images/image2017-6-14_13-9-19.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\nThe plane on which the grid is located can be oriented in three dimensions relative to a Coordinate System on the Earth. The most common grids are located on a horizontal surface relative to the Coordinate System. For example, common surfaces might be sea-level, or the ground surface, or a constant elevation above or below the ground surface, or a constant elevation. Vertical cross-sections through the Earth are oriented to be orthogonal to the surface of the Earth.\nA common term used with grids is the concept of a 'grid cell'. In Geosoft's usage, grids are an array of points at a point location, and a 'grid cell' is the rectangular area that extends half-way to the neighboring grid points.\nSpatial reference angles throughout the geosoft.gxpy module will consistently use an angle in degrees azimuth, which is a clockwise-positive angle relative to a coordinate system frame (North, or positive Y). The geosoft.gxapi.GXIMG class specifies the rotation angle in degrees counterclockwise-positive, and other references within the geosoft.gxapi may differ.\nGrids are stored as files on the file system, and there are many common grid file formats in existence. When working with grid files in GX Developer you define the grid file format with the use of a decorator string appended to the grid file name. Geosoft supports the 11 formats (and their many derivatives) described in the Grid File Name Decorations section of the GX Developer documentation. For example:\n| grid file string | grid file type |\n|:----------------:|:-------------- |\n| 'c:/project/mag.grd(GRD)' | Geosoft format grid. |\n| 'c:/project/mag.tif(TIF)' | GeoTIF |\n| 'c:/project/image.jpg(IMG;T=5)' | jpeg image file\n| 'c:/project/mag.grd(GRD;TYPE=COLOR)' | Geosoft colour grid |\nSee also: Tutorial Page\nImports, Geosoft context, get data from GitHub", "import geosoft.gxpy.gx as gx\nimport geosoft.gxpy.grid as gxgrid\nimport geosoft.gxpy.utility as gxu\nfrom IPython.display import Image\n\ngxc = gx.GXpy()\n\nurl = 'https://github.com/GeosoftInc/gxpy/raw/9.3/examples/tutorial/Grids%20and%20Images/'\ngxu.url_retrieve(url + 'elevation_surfer.GRD')", "Convert a grid from one format to another\nWe will start with a common simple task, converting a grid from one format to another. Geosoft supports many common geospatial grid formats which can all be openned as a geosoft.gxpy.grid.Grid instance. Different formats and characteristics are specified using grid decorations, which are appended to the grid file name. 
See Grid File Name Decorations for all supported grid and image types and how to decorate the grid file name.\nProblem: You have a grid in a Geosoft-supported format, and you need the grid in some other format to use in a different application.\nGrid: elevation_surfer.grd, which is a Surfer v7 format grid file.\nApproach:\n\nOpen the surfer grid with decoration (SRF;VER=V7).\nUse the gxgrid.Grid.copy class method to create an ER Mapper grid, which will have decoration (ERM).", "# open surfer grid\nwith gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:\n\n # copy the grid to an ER Mapper format grid file\n with gxgrid.Grid.copy(grid_surfer, 'elevation.ers(ERM)', overwrite=True) as grid_erm:\n print('file:', grid_erm.file_name, \n '\\ndecorated:', grid_erm.file_name_decorated)", "Working with Grid instances\nYou work with a grid using a geosoft.gxpy.grid.Grid instance, which is a spatial dataset sub-class of a geosoft.gxpy.geometry.Geometry. In Geosoft, all spatial objects are sub-classed from the Geometry class, and all Geometry instances have a coordinate system and spatial extents. Other spatial datasets include Geosoft databases (gdb files), voxels (geosoft_voxel files), surfaces (geosoft_surface files), 2d views, which are contained in Geosoft map files, and 3d views which can be contained in a Geosoft map file or a geosoft_3dv file.\nDataset instances will usually be associated with a file on your computer and, like Python files, you should open and work with datasets using the python with statement, which ensures that the instance and associated resources are freed after the with statement looses context.\nFor example, the following shows two identical ways work with a grid instance, though the with is prefered:", "# open surfer grid, then set to None to free resources\ngrid_surfer = gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)')\nprint(grid_surfer.name)\ngrid_surfer = None\n\n# open surfer grid using with\nwith gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:\n print(grid_surfer.name)", "Displaying a grid\nOne often needs to see what a grid looks like, and this is accomplished by displaying the grid as an image in which the colours represent data ranges. A simple way to do this is to create a grid image file as a png file ising the image_file() method.\nIn this example we create a shaded image with default colouring, and we create a 500 pixel-wide image:", "image_file = gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)').image_file(shade=True, pix_width=500)\nImage(image_file)", "A nicer image might include a neat-line outline, colour legend, scale bar and title. The gxgrid.figure_map() function will create a figure-style map, which can be saved to an image file using the image_file() method of the map instance.", "image_file = gxgrid.figure_map('elevation_surfer.grd(SRF;VER=V7)', title='Elevation').image_file(pix_width=800) \nImage(image_file)", "Grid Coordinate System\nIn Geosoft all spatial data should have a defined coordinate system which allows data to be located on the Earth. This also takes advantage of Geosoft's ability to reproject data as required. However, in this example the Surfer grid does not store the coordinate system information, but we know that the grid uses projection 'UTM zone 54S' on datum 'GDA94'. 
Let's modify this script to set the coordinate system, which will be saved as part of the ER Mapper grid, which does have the ability to store the coordinate system description.\nIn Geosoft, well-known coordinate systems like this can be described using the form 'GDA94 / UTM zone 54S', which conforms to the SEG Grid Exchange Format standard for describing coordinate systems. You only need to set the coordinate_system property of the grid_surfer instance.", "# define the coordinate system of the Surfer grid\nwith gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer: \n grid_surfer.coordinate_system = 'GDA94 / UTM zone 54S'\n \n # copy the grid to an ER Mapper format grid file and the coordinate system is transferred\n with gxgrid.Grid.copy(grid_surfer, 'elevation.ers(ERM)', overwrite=True) as grid_erm:\n print(str(grid_erm.coordinate_system))", "Coordinate systems also contain the full coordinate system parameter information, from which you can construct coordinate systems in other applications.", "with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:\n print('Grid Exchange Format coordinate system:\\n', grid_erm.coordinate_system.gxf)\n\nwith gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:\n print('ESRI WKT format:\\n', grid_erm.coordinate_system.esri_wkt)\n\nwith gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:\n print('JSON format:\\n', grid_erm.coordinate_system.json)", "Display with coordinate systems\nThe grids now have known coordinate systems and displaying the grid will show the coordinate system on the scale bar. We can also annotate geographic coordinates. This requires a Geosoft Desktop License.", "# show the grid as an image\nImage(gxgrid.figure_map('elevation.ers(ERM)', features=('NEATLINE', 'SCALE', 'LEGEND', 'ANNOT_LL')).image_file(pix_width=800))", "Basic Grid Statistics\nIn this exercise we will work with the data stored in a grid. One common need is to determine some basic statistical information about the grid data, such as the minimum, maximum, mean and standard deviation. This exercise will work with the grid data a number of ways that demonstrate some useful patterns.\nStatistics using numpy\nThe smallest code and most efficient approach is to read the grid into a numpy array and then use the optimized numpy methods to determine statistics. This has the benefit of speed and simplicity at the expense memory, which may be a concern for very large grids, though on modern 64-bit computers with most grids this would be the approach of choice.", "import numpy as np\n\n# open the grid, using the with construct ensures resources are released\nwith gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:\n \n # get the data in a numpy array\n data_values = grid.xyzv()[:, :, 3]\n \n# print statistical properties\nprint('minimum: ', np.nanmin(data_values))\nprint('maximum: ', np.nanmax(data_values))\nprint('mean: ', np.nanmean(data_values))\nprint('standard deviation: ', np.nanstd(data_values))", "Statistics using Geosoft VVs\nMany Geosoft methods will work with a geosoft.gxpy.vv.GXvv, which wraps the geosoft.gxapi.GXVV class that deals with very long single-value vectors. The Geosoft GXVV methods works with Geosoft data types and, like numpy, is optimized to take advantage of multi-core processors to improve performance. 
The pattern in this exercise reads a grid one grid row at a time, returning a GXvv instance and accumulate statistics in an instance of the geosoft.gxapi.GXST class.", "import geosoft.gxapi as gxapi\n\n# the GXST class requires a desktop license\nif gxc.entitled:\n\n # create a gxapi.GXST instance to accumulate statistics\n stats = gxapi.GXST.create()\n\n # open the grid\n with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:\n\n # add data from each row to the stats instance\n for row in range(grid.ny):\n stats.data_vv(grid.read_row(row).gxvv)\n\n # print statistical properties\n print('minimum: ', stats.get_info(gxapi.ST_MIN))\n print('maximum: ', stats.get_info(gxapi.ST_MAX))\n print('mean: ', stats.get_info(gxapi.ST_MEAN))\n print('standard deviation: ', stats.get_info(gxapi.ST_STDDEV))", "Grid Iterator\nA grid instance also behaves as an iterator that works through the grid data points by row, then by column, each iteration returning the (x, y, z, grid_value). In this example we will iterate through all points in the grid and accumulate the statistics a point at a time. This is the least-efficient way to work through a grid, but the pattern can be useful to deal with a very simple need. For example, any Geosoft supported grid can be easily converted to an ASCII file that has lists the (x, y, z, grid_value) for all points in a grid.", "# the GXST class requires a desktop license\nif gxc.entitled:\n\n # create a gxapi.GXST instance to accumulate statistics\n stats = gxapi.GXST.create()\n\n # add each data to stats point-by-point (slow, better to use numpy or vector approach)\n number_of_dummies = 0\n with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:\n for x, y, z, v in grid:\n if v is None:\n number_of_dummies += 1\n else:\n stats.data(v)\n total_points = grid.nx * grid.ny\n\n # print statistical properties\n print('minimum: ', stats.get_info(gxapi.ST_MIN))\n print('maximum: ', stats.get_info(gxapi.ST_MAX))\n print('mean: ', stats.get_info(gxapi.ST_MEAN))\n print('standard deviation: ', stats.get_info(gxapi.ST_STDDEV))\n print('number of dummies: ', number_of_dummies)\n print('number of valid data points: ', total_points - number_of_dummies)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NuGrid/NuPyCEE
DOC/Teaching/ExtraSources.ipynb
bsd-3-clause
[ "How to use different extra sources such as CCSN neutrino-driven winds\nPrepared by Christian Ritter", "%matplotlib nbagg\nimport matplotlib.pyplot as plt\nimport sys\nimport matplotlib\nimport numpy as np\n\nfrom NuPyCEE import sygma as s\nfrom NuPyCEE import omega as o\nfrom NuPyCEE import read_yields as ry", "AGB and massive star tables used", "table='yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'", "Setup", "# OMEGA parameters for MW\nmass_loading = 0.0\nnb_1a_per_m = 3.0e-3\nsfe=0.04\nSF_law=True\nDM_evolution=False\nimf_yields_range=[1.0,30.0]\nspecial_timesteps=30\nZ_trans=0.0\niniZ=0.0001", "Default setup", "o0=o.omega(iniZ=iniZ,galaxy='milky_way',Z_trans=Z_trans, table=table,sfe=sfe, DM_evolution=DM_evolution,\\\n mass_loading=mass_loading, nb_1a_per_m=nb_1a_per_m, special_timesteps=special_timesteps,\n imf_yields_range=imf_yields_range,\n SF_law=SF_law)", "Setup with different extra sources\nHere we use yields in two (extra source) yield tables which we apply in the mass range from 8Msun to 12Msun and from 12Msun to 30Msun respectively. We apply a factor of 0.5 to the extra yields of the first yield table and 1. to the second yield table.", "extra_source_table=['yield_tables/r_process_arnould_2007.txt',\n 'yield_tables/r_process_arnould_2007.txt']\n#Apply yields only in specific mass ranges;\nextra_source_mass_range = [[8,12],[12,30]]\n#percentage of stars to which the yields are added. First entry for first yield table etc.\nf_extra_source = [0.5,1.]\n#metallicity to exclude (in this case none)\nextra_source_exclude_Z = [[], []]\n\n#you can look at the yields directly with the y1 and y2 parameter below.\ny1=ry.read_yields_Z(\"./NuPyCEE/\"+extra_source_table[0])\ny2=ry.read_yields_Z(\"./NuPyCEE/\"+extra_source_table[1])", "SYGMA", "s0 = s.sygma(iniZ=0.0001,extra_source_on=False) #default False\n\ns0p1 = s.sygma(iniZ=0.0001,extra_source_on=True,\n extra_source_table=extra_source_table,extra_source_mass_range=extra_source_mass_range,\n f_extra_source=f_extra_source, extra_source_exclude_Z=extra_source_exclude_Z)", "OMEGA", "o0p1=o.omega(iniZ=iniZ,galaxy='milky_way',Z_trans=Z_trans, table=table,sfe=sfe, DM_evolution=DM_evolution,\\\n mass_loading=mass_loading, nb_1a_per_m=nb_1a_per_m, special_timesteps=special_timesteps,\n imf_yields_range=imf_yields_range,SF_law=SF_law,extra_source_on=True,\n extra_source_table=extra_source_table,extra_source_mass_range=extra_source_mass_range,\n f_extra_source=f_extra_source, extra_source_exclude_Z=extra_source_exclude_Z)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
madmax983/h2o-3
h2o-py/demos/H2O_tutorial_medium.ipynb
apache-2.0
[ "H2O Tutorial\nAuthor: Spencer Aiello\nContact: spencer@h2oai.com\nThis tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms.\nDetailed documentation about H2O's and the Python API is available at http://docs.h2o.ai.\nSetting up your system for this demo\nThe following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory", "import pandas as pd\nimport numpy\nfrom numpy.random import choice\nfrom sklearn.datasets import load_boston\nfrom h2o.estimators.random_forest import H2ORandomForestEstimator\n\n\nimport h2o\nh2o.init()\n\n# transfer the boston data from pandas to H2O\nboston_data = load_boston()\nX = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)\nX[\"Median_value\"] = boston_data.target\nX = h2o.H2OFrame.from_python(X.to_dict(\"list\"))\n\n# select 10% for valdation\nr = X.runif(seed=123456789)\ntrain = X[r < 0.9,:]\nvalid = X[r >= 0.9,:]\n\nh2o.export_file(train, \"Boston_housing_train.csv\", force=True)\nh2o.export_file(valid, \"Boston_housing_test.csv\", force=True)", "Enable inline plotting in the Jupyter Notebook", "%matplotlib inline\nimport matplotlib.pyplot as plt", "Intro to H2O Data Munging\nRead csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store.", "fr = h2o.import_file(\"Boston_housing_train.csv\")", "View the top of the H2O frame.", "fr.head()", "View the bottom of the H2O Frame", "fr.tail()", "Select a column\nfr[\"VAR_NAME\"]", "fr[\"CRIM\"].head() # Tab completes", "Select a few columns", "columns = [\"CRIM\", \"RM\", \"RAD\"]\nfr[columns].head()", "Select a subset of rows\nUnlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection.", "fr[2:7,:] # explicitly select all columns with :", "Key attributes:\n * columns, names, col_names\n * len, shape, dim, nrow, ncol\n * types\nNote: \nSince the data is not in local python memory\nthere is no \"values\" attribute. 
If you want to \npull all of the data into the local python memory\nthen do so explicitly with h2o.export_file and\nreading the data into python memory from disk.", "# The columns attribute is exactly like Pandas\nprint \"Columns:\", fr.columns, \"\\n\"\nprint \"Columns:\", fr.names, \"\\n\"\nprint \"Columns:\", fr.col_names, \"\\n\"\n\n# There are a number of attributes to get at the shape\nprint \"length:\", str( len(fr) ), \"\\n\"\nprint \"shape:\", fr.shape, \"\\n\"\nprint \"dim:\", fr.dim, \"\\n\"\nprint \"nrow:\", fr.nrow, \"\\n\"\nprint \"ncol:\", fr.ncol, \"\\n\"\n\n# Use the \"types\" attribute to list the column types\nprint \"types:\", fr.types, \"\\n\"", "Select rows based on value", "fr.shape", "Boolean masks can be used to subselect rows based on a criteria.", "mask = fr[\"CRIM\"]>1\nfr[mask,:].shape", "Get summary statistics of the data and additional data distribution information.", "fr.describe()", "Set up the predictor and response column names\nUsing H2O algorithms, it's easier to reference predictor and response columns\nby name in a single frame (i.e., don't split up X and y)", "x = fr.names[:]\ny=\"Median_value\"\nx.remove(y)", "Machine Learning With H2O\nH2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented.\nUnlike Scikit-learn, H2O allows for categorical and missing data.\nThe basic work flow is as follows:\n* Fit the training data with a machine learning algorithm\n* Predict on the testing data\nSimple model", "# Define and fit first 400 points\nmodel = H2ORandomForestEstimator(seed=42)\nmodel.train(x=x, y=y, training_frame=fr[:400,:])\n\nmodel.predict(fr[400:fr.nrow,:]) # Predict the rest", "The performance of the model can be checked using the holdout dataset", "perf = model.model_performance(fr[400:fr.nrow,:])\nperf.r2() # get the r2 on the holdout data\nperf.mse() # get the mse on the holdout data\nperf # display the performance object", "Train-Test Split\nInstead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data.", "r = fr.runif(seed=12345) # build random uniform column over [0,1]\ntrain= fr[r<0.75,:] # perform a 75-25 split\ntest = fr[r>=0.75,:]\n\nmodel = H2ORandomForestEstimator(seed=42)\nmodel.train(x=x, y=y, training_frame=train, validation_frame=test)\n\nperf = model.model_performance(test)\nperf.r2()", "There was a massive jump in the R^2 value. This is because the original data is not shuffled.\nCross validation\nH2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. 
H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).\nIn conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either:\n * AUTO: Perform random assignment\n * Random: Each row has a equal (1/nfolds) chance of being in any fold.\n * Modulo: Observations are in/out of the fold based by modding on nfolds", "model = H2ORandomForestEstimator(nfolds=10) # build a 10-fold cross-validated model\nmodel.train(x=x, y=y, training_frame=fr)\n\nscores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute\nprint \"Expected R^2: %.2f +/- %.2f \\n\" % (scores.mean(), scores.std()*1.96)\nprint \"Scores:\", scores.round(2)", "However, you can still make use of the cross_val_score from Scikit-Learn\nCross validation: H2O and Scikit-Learn", "from sklearn.cross_validation import cross_val_score\nfrom h2o.cross_validation import H2OKFold\nfrom h2o.model.regression import h2o_r2_score\nfrom sklearn.metrics.scorer import make_scorer", "You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own train method.", "model = H2ORandomForestEstimator(seed=42)\n\nscorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer\ncustom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv \nscores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)\n\nprint \"Expected R^2: %.2f +/- %.2f \\n\" % (scores.mean(), scores.std()*1.96)\nprint \"Scores:\", scores.round(2)", "There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.\nSince the progress bar print out gets annoying let's disable that", "h2o.__PROGRESS_BAR__=False\nh2o.no_progress()", "Grid Search\nGrid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties)\nRandomized grid search: H2O and Scikit-Learn", "from sklearn import __version__\nsklearn_version = __version__\nprint sklearn_version", "If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).\nThe steps to perform a randomized grid search:\n1. Import model and RandomizedSearchCV\n2. Define model\n3. Specify parameters to test\n4. Define grid search object\n5. Fit data to grid search object\n6. Collect scores\nAll the steps will be repeated from above.\nBecause 0.16.1 is installed, we use scipy to define specific distributions\nADVANCED TIP:\nTurn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1).\nWe'll turn it back on again in the aftermath of a Parallel job.\nIf you don't want to run jobs in parallel, don't turn off the reference counting.\nPattern is:\n >>> h2o.turn_off_ref_cnts()\n >>> .... 
parallel job ....\n >>> h2o.turn_on_ref_cnts()", "%%time\nfrom sklearn.grid_search import RandomizedSearchCV # Import grid search\nfrom scipy.stats import randint, uniform\n\nmodel = H2ORandomForestEstimator(seed=42) # Define model\n\nparams = {\"ntrees\": randint(20,50),\n \"max_depth\": randint(1,10),\n \"min_rows\": randint(1,10), # scikit's min_samples_leaf\n \"mtries\": randint(2,fr[x].shape[1]),} # Specify parameters to test\n\nscorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer\ncustom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv \nrandom_search = RandomizedSearchCV(model, params, \n n_iter=30, \n scoring=scorer, \n cv=custom_cv, \n random_state=42,\n n_jobs=1) # Define grid search object\n\nrandom_search.fit(fr[x], fr[y])\n\nprint \"Best R^2:\", random_search.best_score_, \"\\n\"\nprint \"Best params:\", random_search.best_params_", "We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report.", "def report_grid_score_detail(random_search, charts=True):\n \"\"\"Input fit grid search estimator. Returns df of scores with details\"\"\"\n df_list = []\n\n for line in random_search.grid_scores_:\n results_dict = dict(line.parameters)\n results_dict[\"score\"] = line.mean_validation_score\n results_dict[\"std\"] = line.cv_validation_scores.std()*1.96\n df_list.append(results_dict)\n\n result_df = pd.DataFrame(df_list)\n result_df = result_df.sort(\"score\", ascending=False)\n \n if charts:\n for col in get_numeric(result_df):\n if col not in [\"score\", \"std\"]:\n plt.scatter(result_df[col], result_df.score)\n plt.title(col)\n plt.show()\n\n for col in list(result_df.columns[result_df.dtypes == \"object\"]):\n cat_plot = result_df.score.groupby(result_df[col]).mean()[0]\n cat_plot.sort()\n cat_plot.plot(kind=\"barh\", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))\n plt.show()\n return result_df\n\ndef get_numeric(X):\n \"\"\"Return list of numeric dtypes variables\"\"\"\n return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith((\"float\", \"int\", \"bool\")))].index.tolist()\n\nreport_grid_score_detail(random_search).head()", "Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs:", "%%time\n\nparams = {\"ntrees\": randint(30,40),\n \"max_depth\": randint(4,10),\n \"mtries\": randint(4,10),}\n\ncustom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big\n # impact on the std of the resulting scores. More\nrandom_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher \n n_iter=10, # variation per sample\n scoring=scorer, \n cv=custom_cv, \n random_state=43, \n n_jobs=1) \n\nrandom_search.fit(fr[x], fr[y])\n\nprint \"Best R^2:\", random_search.best_score_, \"\\n\"\nprint \"Best params:\", random_search.best_params_\n\nreport_grid_score_detail(random_search)", "Transformations\nRule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful.\nAt the moment, there are no classes for managing data transformations. 
On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames.\nBasic steps:\n\nRemove the response variable from transformations.\nImport transformer\nDefine transformer\nFit train data to transformer\nTransform test and train data\nRe-attach the response variable.\n\nFirst let's normalize the data using the means and standard deviations of the training data.\nThen let's perform a principal component analysis on the training data and select the top 5 components.\nUsing these components, let's use them to reduce the train and test design matrices.", "from h2o.transforms.preprocessing import H2OScaler\nfrom h2o.transforms.decomposition import H2OPCA", "Normalize Data: Use the means and standard deviations from the training data.", "y_train = train.pop(\"Median_value\")\ny_test = test.pop(\"Median_value\")\n\nnorm = H2OScaler()\nnorm.fit(train)\nX_train_norm = norm.transform(train)\nX_test_norm = norm.transform(test)\n\nprint X_test_norm.shape\nX_test_norm", "Then, we can apply PCA and keep the top 5 components. A user warning is expected here.", "pca = H2OPCA(k=5)\npca.fit(X_train_norm)\nX_train_norm_pca = pca.transform(X_train_norm)\nX_test_norm_pca = pca.transform(X_test_norm)\n\n# prop of variance explained by top 5 components?\n\nprint X_test_norm_pca.shape\nX_test_norm_pca[:5]\n\nmodel = H2ORandomForestEstimator(seed=42)\nmodel.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))\ny_hat = model.predict(X_test_norm_pca)\n\nh2o_r2_score(y_test,y_hat)", "Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.\nPipelines\n\"Tranformers unite!\"\nIf your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.\nSteps:\n\nImport Pipeline, transformers, and model\nDefine pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest).\nFit the training data to pipeline\nEither transform or predict the testing data", "from h2o.transforms.preprocessing import H2OScaler\nfrom h2o.transforms.decomposition import H2OPCA\n\nfrom sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>\nmodel = H2ORandomForestEstimator(seed=42)\npipe = Pipeline([(\"standardize\", H2OScaler()), # Define pipeline as a series of steps\n (\"pca\", H2OPCA(k=5)),\n (\"rf\", model)]) # Notice the last step is an estimator\n\npipe.fit(train, y_train) # Fit training data\ny_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)\nh2o_r2_score(y_test, y_hat) # Notice the final score is identical to before", "This is so much easier!!!\nBut, wait a second, we did worse after applying these transformations! 
We might wonder how different hyperparameters for the transformations impact the final score.\nCombining randomized grid search and pipelines\n\"Yo dawg, I heard you like models, so I put models in your models to model models.\"\nSteps:\n\nImport Pipeline, grid search, transformers, and estimators <Not shown below>\nDefine pipeline\nDefine parameters to test in the form: \"(Step name)__(argument name)\" A double underscore separates the two words.\nDefine grid search\nFit to grid search", "pipe = Pipeline([(\"standardize\", H2OScaler()),\n (\"pca\", H2OPCA()),\n (\"rf\", H2ORandomForestEstimator(seed=42))])\n\nparams = {\"standardize__center\": [True, False], # Parameters to test\n \"standardize__scale\": [True, False],\n \"pca__k\": randint(2, 6),\n \"rf__ntrees\": randint(50,80),\n \"rf__max_depth\": randint(4,10),\n \"rf__min_rows\": randint(5,10), }\n# \"rf__mtries\": randint(1,4),} # gridding over mtries is \n # problematic with pca grid over \n # k above \n\nfrom sklearn.grid_search import RandomizedSearchCV\nfrom h2o.cross_validation import H2OKFold\nfrom h2o.model.regression import h2o_r2_score\nfrom sklearn.metrics.scorer import make_scorer\n\ncustom_cv = H2OKFold(fr, n_folds=5, seed=42)\nrandom_search = RandomizedSearchCV(pipe, params,\n n_iter=30,\n scoring=make_scorer(h2o_r2_score),\n cv=custom_cv,\n random_state=42,\n n_jobs=1)\n\n\nrandom_search.fit(fr[x],fr[y])\nresults = report_grid_score_detail(random_search)\nresults.head()", "Currently Under Development (drop-in scikit-learn pieces):\n * Richer set of transforms (only PCA and Scale are implemented)\n * Richer set of estimators (only RandomForest is available)\n * Full H2O Grid Search\nOther Tips: Model Save/Load\nIt is useful to save constructed models to disk and reload them between H2O sessions. Here's how:", "best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search\nh2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline\n\nsave_path = h2o.save_model(h2o_model, path=\".\", force=True)\nprint save_path\n\n# assumes new session\nmy_model = h2o.load_model(path=save_path)\n\nmy_model.predict(X_test_norm_pca)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rueedlinger/python-snippets
geojson/geojson_stations.ipynb
mit
[ "Convert a pandas DataFrame to GeoJSON (Stations)\nIn this Python snippet we use stations (geographic) from the Swiss public transportation and convert the data to a GeoJSON (http://geojson.org/) file.\n\nDonwload the 'bfkoordgeo.csv' file, https://opentransportdata.swiss/en/dataset/bhlist/resource/b92a372f-7843-4ddd-b1c6-c9c6397e1097\nConvert the file from ISO-8859-1 to UTF-8\n\niconv -f ISO-8859-1 -t UTF-8 bfkoordgeo.csv &gt; out.csv\n\nNote: The data is from the Open Data Platform Swiss Public Transport, https://opentransportdata.swiss/en/\n\nRequired libraries\n\npandas, http://pandas.pydata.org/\ngeojson, https://pypi.python.org/pypi/geojson/\n\nLoad the data\nFirst let's load the data with pandas. The data frame contains the stations from the public transportations from Switzerland and some from adjoining countries. We have the columns:\n- StationID\n- Longitude\n- Latitude\n- Height\n- Remark\nLongitude and Latitude should be WGS 84 coordinates.", "import pandas as pd\n\ndf = pd.read_csv('data/bfkoordgeo_utf8.csv')\ndf.head()", "Now we do some data cleaning and remove all rows where Longitude and Latitude are 'null'.", "df = df[df['Longitude'].notnull()]\ndf = df[df['Latitude'].notnull()]\n\n# will display all rows that have null values\n#df[df.isnull().any(axis=1)]", "Convert pandas data frame to GeoJSON\nNext we convert the panda data frame to geosjon objects (FeatureCollection/Feature/Point).", "import geojson as geojson\n\nvalues = zip(df['Longitude'], df['Latitude'], df['Remark'])\npoints = [geojson.Feature(geometry=geojson.Point((v[0], v[1])), properties={'name': v[2]}) for v in values]\n\ngeo_collection = geojson.FeatureCollection(points)\n\nprint(points[0])", "Save the GeoJSON (FeatureCollection) to a file\nFinally we dump the GeoJSON objects to a file.", "dump = geojson.dumps(geo_collection, sort_keys=True)\n\n'''\nwith open('stations.geojson', 'w') as file:\n file.write(dump)\n'''", "Result\nYou can find the resulat (GeoJSON file) from this snippet here\n- stations.geojson" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.22/_downloads/ff83425ee773d1d588a6994e5560c06c/plot_mne_dspm_source_localization.ipynb
bsd-3-clause
[ "%matplotlib inline", "Source localization with MNE/dSPM/sLORETA/eLORETA\nThe aim of this tutorial is to teach you how to compute and apply a linear\nminimum-norm inverse method on evoked/raw/epochs data.", "import numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse", "Process MEG data", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname) # already has an average reference\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nevent_id = dict(aud_l=1) # event trigger and conditions\ntmin = -0.2 # start of each epoch (200ms before the trigger)\ntmax = 0.5 # end of each epoch (500ms after the trigger)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\nbaseline = (None, 0) # means from the first instant to t = 0\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=('meg', 'eog'), baseline=baseline, reject=reject)", "Compute regularized noise covariance\nFor more details see tut_compute_covariance.", "noise_cov = mne.compute_covariance(\n epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)\n\nfig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)", "Compute the evoked response\nLet's just use the MEG channels for simplicity.", "evoked = epochs.average().pick('meg')\nevoked.plot(time_unit='s')\nevoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',\n time_unit='s')", "It's also a good idea to look at whitened data:", "evoked.plot_white(noise_cov, time_unit='s')\ndel epochs, raw # to save memory", "Inverse modeling: MNE/dSPM on evoked and raw data\nHere we first read the forward solution. You will likely need to compute\none for your own data -- see tut-forward for information on how\nto do it.", "fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'\nfwd = mne.read_forward_solution(fname_fwd)", "Next, we make an MEG inverse operator.", "inverse_operator = make_inverse_operator(\n evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)\ndel fwd\n\n# You can write it to disk with::\n#\n# >>> from mne.minimum_norm import write_inverse_operator\n# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',\n# inverse_operator)", "Compute inverse solution\nWe can use this to compute the inverse solution and obtain source time\ncourses:", "method = \"dSPM\"\nsnr = 3.\nlambda2 = 1. 
/ snr ** 2\nstc, residual = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None,\n return_residual=True, verbose=True)", "Visualization\nWe can look at different dipole activations:", "fig, ax = plt.subplots()\nax.plot(1e3 * stc.times, stc.data[::100, :].T)\nax.set(xlabel='time (ms)', ylabel='%s value' % method)", "Examine the original data and the residual after fitting:", "fig, axes = plt.subplots(2, 1)\nevoked.plot(axes=axes)\nfor ax in axes:\n ax.texts = []\n for line in ax.lines:\n line.set_color('#98df81')\nresidual.plot(axes=axes)", "Here we use peak getter to move visualization to the time point of the peak\nand draw a marker at the maximum peak vertex.", "vertno_max, time_max = stc.get_peak(hemi='rh')\n\nsubjects_dir = data_path + '/subjects'\nsurfer_kwargs = dict(\n hemi='rh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=10)\nbrain = stc.plot(**surfer_kwargs)\nbrain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',\n scale_factor=0.6, alpha=0.5)\nbrain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',\n font_size=14)\n\n# The documentation website's movie is generated with:\n# brain.save_movie(..., tmin=0.05, tmax=0.15, interpolation='linear',\n# time_dilation=20, framerate=10, time_viewer=True)", "There are many other ways to visualize and work with source data, see\nfor example:\n\ntut-viz-stcs\nex-morph-surface\nex-morph-volume\nex-vector-mne-solution\ntut-dipole-orientations\ntut-mne-fixed-free\nexamples using apply_inverse\n &lt;sphx_glr_backreferences_mne.minimum_norm.apply_inverse&gt;." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rally12/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (8, 100)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n words = set()\n index_to_word = {}\n word_to_index = {}\n \n for word in text:\n words.add(word)\n \n for index, word in enumerate(words):\n #print (word,index)\n index_to_word[index] = word\n word_to_index[word] = index\n \n return word_to_index, index_to_word\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . 
)\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n ret = {}\n ret['.'] = \"||Period||\" #( . )\n ret[','] = \"||Comma||\" #( , )\n ret['\"'] = \"||Quotation_Mark||\" # ( \" )\n ret[';'] = \"||Semicolon||\" #( ; )\n ret['!'] = \"||Exclamation_mark||\" #( ! )\n ret['?'] = \"||Question_mark||\" #( ? )\n ret['('] = \"||Left_Parentheses||\" #( ( )\n ret[')'] = \"||Right_Parentheses||\" #( ) )\n ret['--'] = \"||Dash||\" # ( -- )\n ret['\\n'] = \"||Return||\" # ( \\n )\n \n return ret\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n inputs = tf.placeholder(tf.int32, [None, None ], name=\"input\")\n targets = tf.placeholder(tf.int32, [None, None ], name=\"targets\")\n learning_rate = tf.placeholder(tf.float32, None, name=\"LearningRate\")\n return inputs, targets, learning_rate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n layer_count = 2\n keep_prob = tf.constant(0.7,tf.float32, name=\"keep_prob\")\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)\n lstm2 = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)\n \n dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n cell = tf.contrib.rnn.MultiRNNCell([lstm, lstm2], state_is_tuple=True)\n initial_state = cell.zero_state( batch_size, tf.float32)\n initial_state = tf.identity(initial_state, name=\"initial_state\" )\n #_outputs, final_state = tf.nn.rnn(cell, rnn_inputs, initial_state=init_state) \n \n return cell, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "import random\nimport math\ndef get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function \n ret = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n ret = tf.nn.embedding_lookup(ret, input_data)\n print(\"shape {}\".format(ret.get_shape().as_list()))\n return ret\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create an RNN.\n- Build the RNN using tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create an RNN using an RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)\n final_state = tf.identity (final_state, \"final_state\")\n return output, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embedded = get_embed(input_data, vocab_size, rnn_size)\n out, fin = build_rnn(cell, embedded) \n out = tf.contrib.layers.fully_connected(out,vocab_size, activation_fn=None)\n \n out_shape = out.get_shape().as_list() \n print(\"build_nn embedded{}, out:{}, fin:{}\".format(embedded.get_shape().as_list(),out_shape, fin.get_shape().as_list()))\n print()\n return out, fin\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length).
Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n text = int_text\n ret = np.array([])\n inputs = []\n targets = []\n text_len = len(text) - len(text) % (seq_length*batch_size)\n print (\"get_batches text:{}, batch:{}, seq:{}\".format(text_len, batch_size, seq_length))\n ret=[] \n \n for i in range(0, text_len-1, seq_length):\n seq = list(int_text[i:i+seq_length])\n inputs.append(list(int_text[i:i+seq_length]))\n targets.append(list(int_text[i+1:i+seq_length+1]))\n \n \n for i in range(0,len(inputs),batch_size):\n pos=batch_size\n #batch_pair = n\n ret.append([inputs[i:i+batch_size], targets[i:i+batch_size]])\n ret = np.asanyarray(ret)\n print(\"batch test \", ret.shape, ret[3,:,2])\n return ret\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 300 # previously 150, but want to get lower loss.\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 1024\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = 12 # already discouraged from using 6 and 16, avg sentence length being 10-12\n# I'm favoring this formula frm the curse of lerning rate being a function of parameter count. 
\n#This is guess work (empirical), but gives good results.\nlearning_rate = 1/np.sqrt(rnn_size*seq_length*6700)\nprint( \"learning rate {}, vocab_size {}\".format(learning_rate,6700))\n\"\"\"\n 100 inf\n 0.0012 -- 1.666 860-1210: 1.259\n 0.00012 -- 5.878 1920-2190: 1.070\n 0.000012 7.4 3000: 2.107\n 0.00012 -- 6.047 3000: 0.964-- embedding w truncated normal.\n \n 1024\n 0.00812 -- 1.182 stuck\n 0.00612 -- 0.961 stuck\n\"\"\"\n\n# Show stats for every n number of batches\nshow_every_n_batches = 20\n\ntf.set_random_seed(42)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n \n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n \n inputs = loaded_graph.get_tensor_by_name(\"input:0\")\n initials = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n finals = loaded_graph.get_tensor_by_name(\"final_state:0\")\n probs = loaded_graph.get_tensor_by_name(\"probs:0\") \n \n return inputs, initials, finals, probs\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n # As suggested by the last reviewer - tuning randomness\n #print(\"probabs:{}, - {}\".format(probabilities.shape, int_to_vocab[np.argmax(probabilities)]))\n mostprobable = np.argsort(probabilities)\n ret = np.random.choice(mostprobable[-3:],1, p=[0.1, 0.2, 0.7])\n return int_to_vocab[ret[0]]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! 
As we mentioned in the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ericmjl/Network-Analysis-Made-Simple
archive/bonus-1-network-statistical-inference-instructor.ipynb
mit
[ "# Load the data\nimport pandas as pd\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpy.random as npr\nfrom scipy.stats import norm, ks_2samp # no scipy - comment out\nfrom custom import load_data as cf\nfrom custom import ecdf\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'", "Introduction\nIn this notebook, we will walk through a hacker's approach to statistical thinking, as applied to network analysis.\nStatistics in a Nutshell\nAll of statistics can be broken down into two activities:\n\nDescriptively summarizing data. (a.k.a. \"descriptive statistics\")\nFiguring out whether something happened by random chance. (a.k.a. \"inferential statistics\")\n\nDescriptive Statistics\n\nCentrality measures: mean, median, mode\nVariance measures: inter-quartile range (IQR), variance and standard deviation\n\nInferential Statistics\n\nModels of Randomness (see below)\nHypothesis Testing\nFitting Statistical Models\n\nLoad Data\nLet's load a protein-protein interaction network dataset.\n\nThis undirected network contains protein interactions contained in yeast. Research showed that proteins with a high degree were more important for the surivial of the yeast than others. A node represents a protein and an edge represents a metabolic interaction between two proteins. The network contains loops.", "# Read in the data.\n# Note from above that we have to skip the first two rows, and that there's no header column,and that the edges are\n# delimited by spaces in between the nodes. Hence the syntax below:\nG = cf.load_propro_network()", "Exercise\nCompute some basic descriptive statistics about the graph, namely:\n\nthe number of nodes,\nthe number of edges,\nthe graph density,\nthe distribution of degree centralities in the graph,", "# Number of nodes:\nlen(G.nodes())\n\n# Number of edges:\nlen(G.edges())\n\n# Graph density:\nnx.density(G)\n\n# Degree centrality distribution:\nlist(nx.degree_centrality(G).values())[0:5]", "How are protein-protein networks formed? Are they formed by an Erdos-Renyi process, or something else?\n\nIn the G(n, p) model, a graph is constructed by connecting nodes randomly. Each edge is included in the graph with probability p independent from every other edge.\n\nIf protein-protein networks are formed by an E-R process, then we would expect that properties of the protein-protein graph would look statistically similar to those of an actual E-R graph.\nExercise\nMake an ECDF of the degree centralities for the protein-protein interaction graph, and the E-R graph.\n- The construction of an E-R graph requires a value for n and p. \n- A reasonable number for n is the number of nodes in our protein-protein graph.\n- A reasonable value for p might be the density of the protein-protein graph.", "ppG_deg_centralities = list(nx.degree_centrality(G).values())\nplt.plot(*ecdf(ppG_deg_centralities))\n\nerG = nx.erdos_renyi_graph(n=len(G.nodes()), p=nx.density(G))\nerG_deg_centralities = list(nx.degree_centrality(erG).values())\nplt.plot(*ecdf(erG_deg_centralities))\n\nplt.show()", "From visualizing these two distributions, it is clear that they look very different. How do we quantify this difference, and statistically test whether the protein-protein graph could have arisen under an Erdos-Renyi model?\nOne thing we might observe is that the variance, that is the \"spread\" around the mean, differs between the E-R model compared to our data. 
Therefore, we can compare variance of the data to the distribtion of variances under an E-R model.\nThis is essentially following the logic of statistical inference by 'hacking' (not to be confused with the statistical bad practice of p-hacking).\nExercise\nFill in the skeleton code below to simulate 100 E-R graphs.", "# 1. Generate 100 E-R graph degree centrality variance measurements and store them.\n# Takes ~50 seconds or so.\nn_sims = 100\ner_vars = np.zeros(n_sims) # variances for n simulaed E-R graphs.\nfor i in range(n_sims):\n erG = nx.erdos_renyi_graph(n=len(G.nodes()), p=nx.density(G))\n erG_deg_centralities = list(nx.degree_centrality(erG).values())\n er_vars[i] = np.var(erG_deg_centralities)\n\n# 2. Compute the test statistic that is going to be used for the hypothesis test.\n# Hint: numpy has a \"var\" function implemented that computes the variance of a distribution of data.\nppG_var = np.var(ppG_deg_centralities)\n\n# Do a quick visual check\nn, bins, patches = plt.hist(er_vars)\nplt.vlines(ppG_var, ymin=0, ymax=max(n), color='red', lw=2)", "Visually, it should be quite evident that the protein-protein graph did not come from an E-R distribution. Statistically, we can also use the hypothesis test procedure to quantitatively test this, using our simulated E-R data.", "# Conduct the hypothesis test.\nppG_var > np.percentile(er_vars, 99) # we can only use the 99th percentile, because there are only 100 data points.", "Another way to do this is to use the 2-sample Kolmogorov-Smirnov test implemented in the scipy.stats module. From the docs:\n\nThis tests whether 2 samples are drawn from the same distribution. Note\nthat, like in the case of the one-sample K-S test, the distribution is\nassumed to be continuous.\nThis is the two-sided test, one-sided tests are not implemented.\nThe test uses the two-sided asymptotic Kolmogorov-Smirnov distribution.\nIf the K-S statistic is small or the p-value is high, then we cannot\nreject the hypothesis that the distributions of the two samples\nare the same.\n\nAs an example to convince yourself that this test works, run the synthetic examples below.", "# Scenario 1: Data come from the same distributions.\n# Notice the size of the p-value.\ndist1 = npr.random(size=(100))\ndist2 = npr.random(size=(100))\n\nks_2samp(dist1, dist2)\n# Note how the p-value, which ranges between 0 and 1, is likely to be greater than a commonly-accepted\n# threshold of 0.05\n\n# Scenario 2: Data come from different distributions. \n# Note the size of the KS statistic, and the p-value.\n\ndist1 = norm(3, 1).rvs(100)\ndist2 = norm(5, 1).rvs(100)\n\nks_2samp(dist1, dist2)\n# Note how the p-value is likely to be less than 0.05, and even more stringent cut-offs of 0.01 or 0.001.", "Exercise\nNow, conduct the K-S test for one synthetic graph and the data.", "# Now try it on the data distribution\nks_2samp(erG_deg_centralities, ppG_deg_centralities)", "Networks may be high-dimensional objects, but the logic for inference on network data essentially follows the same logic as for 'regular' data:\n\nIdentify a model of 'randomness' that may model how your data may have been generated.\nCompute a \"test statistic\" for your data and the model.\nCompute the probability of observing the data's test statistic under the model.\n\nFurther Reading\nJake Vanderplas' \"Statistics for Hackers\" slides: https://speakerdeck.com/jakevdp/statistics-for-hackers\nAllen Downey's \"There is Only One Test\": http://allendowney.blogspot.com/2011/05/there-is-only-one-test.html" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dataDogma/Computer-Science
Courses/DAT-208x/DAT208x - Week 6 - Pandas.ipynb
gpl-3.0
[ "Table of Content\n\n\nOverviw\n\n\nAccessing Rows\n\n\nElement Access\n\n\nLab\n\n\n\nOverview:\nBeing a data scientist means, we gotta have to work with \"big\" data with different types.\nWe've seen how 2D Numpy arrays gives us power to compute data in a much efficient way, but the only downside to it is, they must be of the same type.\nTo solve this issue, ther's where the Pandas package comes in. So what's in Pandas?\n\n\nHigh-level data manupalation.\n\n\nThe concept of \"Data Frames\" objects.\n\nData is stored in such data frames.\n\n\n\nMore specifically, they are tables,\n\n\nwith \"rows\" represented as \"observations\".\n\n\n\"Coloumns\" represented by \"variables\".\n\n\nEach row has a unique label, same goes for coloumns as well.\n\n\nColoumns can have different types.\n\n\n\n\nWe typically don't make data frames manually.\n\n\nWe convert .csv (Comma seperated values) files to data frames.\n\n\nWe do this importing the pandas package:\n\n\nimport pandas as pd, again pd is an \"alias\".\n\nNow we can use a built-in function that comes packaged with pandas called as:\n\nread_csv(&lt;path to .csv file)>\n\n\n\nExample:\nWe will be using pandas package to import, read in the \"brics dataset\" into python, let's look how the dataframes look like:", "# import the pandas package\nimport pandas as pd\n\n# load in the dataset and save it to brics var.\nbrics = pd.read_csv(\"C:/Users/pySag/Documents/GitHub/Computer-Science/Courses/DAT-208x/Datasets/BRICS_cummulative.csv\")\n\nbrics\n\n# we can make the table look more better, by adding a parameter index_col = 0\nbrics = pd.read_csv(\"C:/Users/pySag/Documents/GitHub/Computer-Science/Courses/DAT-208x/Datasets/BRICS_cummulative.csv\", index_col=0)\n\nbrics #notice how the indexes assigned to row observation are now deprecated.", "One of the most effective use of pandas is the ease at which we can select rows and coloumns in different ways, here's how we do it:\n\n\nTo access the coloumns, there are three different ways we can do it, these are:\n\ndata_set_var[ \"coloumn-name\" ]\n&lt; data_set_var &gt;.&lt; coloumn-name &gt;\n\n\n\nWe can add coloumns too, say we rank them:\n&lt;data_set_var&gt;[\"new-coloumn-name\"] = &lt; list of values &gt;", "# Add a new coloumn\nbrics[\"on_earth\"] = [ True, True, True, True, True ]\n\n# Print them\nbrics\n\n# Manupalating Coloumns\n\"\"\"Coloumns can be manipulated using arithematic operations\non other coloumns\"\"\"", "Accessing Rows:\n\nSyntax: dataframe.loc[ &lt;\"row name\"&gt; ]\n\nGo to top:TOC\nElement access\n\nTo get just one element in the table, we can specify both coloumn and row label in the loc().\nSyntax: \n\n\ndataframe.loc[ &lt;\"row-name, coloumn name\"&gt; ]\n\n\ndataframe[ &lt;\"row-name\"&gt; ].loc[ &lt;\"coloumn-name\"&gt; ]\n\ndataframe.loc[ &lt;\"rowName'&gt; ][&lt; \"coloumnName\" &gt;]\n\nLab:\n\nObjective:\n\n\nPractice importing data into python as Pandas DataFrame.\n\n\nPractise accessig Row and Coloumns\n\n\n\nLab content:\n\n\nCSV to DataFrame1\n\n\nCSV to DataFrame2\n\n\nSquare Brackets\n\n\nLoc1\n\n\nLoc2\n\n\n\nGo to:TOC\nCSV to DataFrame1\n\nPreface:\nThe DataFrame is one of Pandas' most important data structures. It's basically a way to store tabular data, where you can label the rows and the columns.\nIn the exercises that follow, you will be working wit vehicle data in different countries. Each observation corresponds to a country, and the columns give information about the number of vehicles per capita, whether people drive left or right, and so on. 
This data is available in a CSV file, named cars.csv. It is available in your current working directory, so the path to the file is simply 'cars.csv'.\nTo import CSV data into Python as a Pandas DataFrame, you can use read_csv().\nInstructions:\n\n\nTo import CSV files, you still need the pandas package: import it as pd.\n\n\nUse pd.read_csv() to import cars.csv data as a DataFrame. Store this dataframe as cars.\n\n\nPrint out cars. Does everything look OK?", "\"\"\"\n# Import pandas as pd\nimport pandas as pd\n\n# Import the cars.csv data: cars\ncars = pd.read_csv(\"cars.csv\")\n\n# Print out cars\nprint(cars)\n\"\"\"", "CSV to DataFrame2\n\nPreface:\nWe have a slight problem: the row labels are imported as another column that has no name.\nTo fix this issue, we are going to pass the argument index_col = 0 to read_csv(). This is used to specify which column in the CSV file should be used as the row labels.\nInstructions:\n\n\nRun the code with Submit Answer and assert that the first column should actually be used as row labels.\n\n\nSpecify the index_col argument inside pd.read_csv(): set it to 0, so that the first column is used as row labels.\n\n\nHas the printout of cars improved now?\n\n\n\nGo to top:TOC", "\"\"\"\n# Import pandas as pd\nimport pandas as pd\n\n# Import the cars.csv data: cars\ncars = pd.read_csv(\"cars.csv\", index_col=0)\n\n# Print out cars\nprint(cars)\n\"\"\"", "Square Brackets\n\nPreface\nSelecting columns can be done in two ways.\n\n\nvariable_containing_CSV_file['column-name']\n\n\nvariable_containing_CSV_file[['column-name']]\n\n\nThe former gives a pandas Series, whereas the latter gives a pandas DataFrame.\nInstructions:\n\n\nUse single square brackets to print out the country column of cars as a Pandas Series.\n\n\nUse double square brackets to print out the country column of cars as a Pandas DataFrame. Do this by putting country in two square brackets this time.", "\"\"\"\n# Import cars data\nimport pandas as pd\ncars = pd.read_csv('cars.csv', index_col = 0)\n\n# Print out country column as Pandas Series\nprint( cars['country'])\n\n# Print out country column as Pandas DataFrame\nprint( cars[['country']])\n\"\"\"", "Loc1\n\nWith loc we can do practically any data selection operation on DataFrames you can think of.\nloc is label-based, which means that you have to specify rows and columns based on their row and column labels.\nInstructions:\n\n\nUse loc to select the observation corresponding to Japan as a Series. The label of this row is JAP. Make sure to print the resulting Series.\n\n\nUse loc to select the observations for Australia and Egypt as a DataFrame.", "\"\"\"\n# Import cars data\nimport pandas as pd\ncars = pd.read_csv('cars.csv', index_col = 0)\n\n# Print out observation for Japan\nprint( cars.loc['JAP'] )\n\n# Print out observations for Australia and Egypt\nprint( cars.loc[ ['AUS', 'EG'] ])\n\"\"\"", "Loc2\n\nloc also allows us to select both rows and columns from a DataFrame.\nInstructions: \n\n\nPrint out the drives_right value of the row corresponding to Morocco (its row label is MOR).\n\n\nPrint out a sub-DataFrame, containing the observations for Russia and Morocco and the columns country and drives_right." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]