Example output: Serving function input: bytes_inputs

projects.locations.models.upload

Request
model = { "display_name": "custom_job_TF" + TIMESTAMP, "metadata_schema_uri": "", "artifact_uri": model_artifact_dir, "container_spec": { "image_uri": "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest" }, } print(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__...
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: ``` { "parent": "projects/migration-ucaip-training/locations/us-central1", "model": { "displayName": "custom_job_TF20210227173057", "containerSpec": { "imageUri": "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest" }, "artifactUri": "gs://migration-ucaip-trainingaip-2021022...
request = clients["model"].upload_model(parent=PARENT, model=model)
Example output: { "model": "projects/116273516712/locations/us-central1/models/8844102097923211264" }
# The full unique ID for the model model_id = result.model print(model_id)
Make batch predictions

Make the batch input file

Let's now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can be in JSONL format.
import base64 import json import cv2 import numpy as np import tensorflow as tf (_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() test_image_1, test_label_1 = x_test[0], y_test[0] test_image_2, test_label_2 = x_test[1], y_test[1] cv2.imwrite("tmp1.jpg", (test_image_1).astype(np.uint8)) cv2.imwrite("...
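The JSONL lines in the batch input follow the `{"bytes_inputs": {"b64": ...}}` shape shown in the example output above. A minimal, stdlib-only sketch of how such a file's contents can be assembled — the image bytes here are fake placeholders standing in for the `tmp1.jpg`/`tmp2.jpg` files written by the cell above:

```python
import base64
import json

# Hypothetical stand-ins for the JPEG bytes written by the notebook cell above.
image_bytes = [b"\xff\xd8fake-jpeg-1", b"\xff\xd8fake-jpeg-2"]

# Each line of the JSONL batch input file is one instance, with the raw
# image bytes base64-encoded under the serving function's input name.
lines = [
    json.dumps({"bytes_inputs": {"b64": base64.b64encode(b).decode("utf-8")}})
    for b in image_bytes
]
jsonl_payload = "\n".join(lines)
print(jsonl_payload)
```

In the notebook this payload would then be written to a file and copied to the Cloud Storage bucket referenced by `gcs_input_uri`.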
Example output: {"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQ...
batch_prediction_job = aip.BatchPredictionJob( display_name="custom_job_TF" + TIMESTAMP, model=model_id, input_config={ "instances_format": "jsonl", "gcs_source": {"uris": [gcs_input_uri]}, }, model_parameters=ParseDict( {"confidenceThreshold": 0.5, "maxPredictions": 2}, Valu...
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "batchPredictionJob": { "displayName": "custom_job_TF_TF20210227173057", "model": "projects/116273516712/locations/us-central1/models/8844102097923211264", "inputConfig": { "instancesFormat": "jsonl", "gcs...
request = clients["job"].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job )
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248", "displayName": "custom_job_TF_TF20210227173057", "model": "projects/116273516712/locations/us-central1/models/8844102097923211264", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { ...
# The fully qualified ID for the batch job batch_job_id = request.name # The short numeric ID for the batch job batch_job_short_id = batch_job_id.split("/")[-1] print(batch_job_id)
projects.locations.batchPredictionJobs.get

Call
request = clients["job"].get_batch_prediction_job(name=batch_job_id)
Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248", "displayName": "custom_job_TF_TF20210227173057", "model": "projects/116273516712/locations/us-central1/models/8844102097923211264", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { ...
def get_latest_predictions(gcs_out_dir): """ Get the latest prediction subfolder using the timestamp in the subfolder name""" folders = !gsutil ls $gcs_out_dir latest = "" for folder in folders: subfolder = folder.split("/")[-2] if subfolder.startswith("prediction-"): if subf...
Example output: gs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.errors_stats-00000-of-00001 gs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/predictio...
endpoint = {"display_name": "custom_job_TF" + TIMESTAMP} print( MessageToJson( aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"] ) )
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "endpoint": { "displayName": "custom_job_TF_TF20210227173057" } }

Call
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
Example output: { "name": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400" }
# The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split("/")[-1] print(endpoint_id)
projects.locations.endpoints.deployModel

Request
deployed_model = { "model": model_id, "display_name": "custom_job_TF" + TIMESTAMP, "dedicated_resources": { "min_replica_count": 1, "machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0}, }, } print( MessageToJson( aip.DeployModelRequest( endpo...
Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400", "deployedModel": { "model": "projects/116273516712/locations/us-central1/models/8844102097923211264", "displayName": "custom_job_TF_TF20210227173057", "dedicatedResources": { "machineSpec": {...
request = clients["endpoint"].deploy_model( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100} )
Example output: { "deployedModel": { "id": "2064302294823862272" } }
# The unique ID for the deployed model deployed_model_id = result.deployed_model.id print(deployed_model_id)
projects.locations.endpoints.predict

Prepare file for online prediction

Request
import base64 import cv2 import tensorflow as tf (_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() test_image, test_label = x_test[0], y_test[0] cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8)) bytes = tf.io.read_file("tmp.jpg") b64str = base64.b64encode(bytes.numpy()).decode("utf-8") inst...
Example output: ``` { "endpoint": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400", "instances": [ [ { "bytes_inputs": { "b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgIC...
request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances_list)
Example output: { "predictions": [ [ 0.0406113081, 0.125313938, 0.118626907, 0.100714684, 0.128500372, 0.0899592042, 0.157601, 0.121072263, 0.0312432405, 0.0863570943 ] ], "deployedModelId": "2064302294823862272" }

projects.locations.endpoints.un...
request = clients["endpoint"].undeploy_model( endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={} )
Example output: {}

Cleaning up

To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial.
delete_model = True delete_endpoint = True delete_custom_job = True delete_batchjob = True delete_bucket = True # Delete the model using the Vertex AI fully qualified identifier for the model try: if delete_model: clients["model"].delete_model(name=model_id) except Exception as e: print(e) # Delete th...
Import SHL Prediction Module: shl_pm
import shl_pm
rnd03/shl_sm_NoOCR_v010.ipynb
telescopeuser/uat_shl
mit
shl_sm parameters: shl_sm simulates real-time per-second price data, fetched from CSV:
# which month to predict/simulate? # shl_sm_parm_ccyy_mm = '2017-04' # shl_sm_parm_ccyy_mm_offset = 1647 # shl_sm_parm_ccyy_mm = '2017-05' # shl_sm_parm_ccyy_mm_offset = 1708 # shl_sm_parm_ccyy_mm = '2017-06' # shl_sm_parm_ccyy_mm_offset = 1769 shl_sm_parm_ccyy_mm = '2017-07' shl_sm_parm_ccyy_mm_offset = 1830 #----...
shl_pm Initialization
shl_pm.shl_initialize(shl_sm_parm_ccyy_mm) # Upon receiving 11:29:00 second price, to predict till 11:29:49 <- one-step forward price forecasting for i in range(shl_sm_parm_ccyy_mm_offset, shl_sm_parm_ccyy_mm_offset+50): # use csv data as simulation # for i in range(shl_sm_parm_ccyy_mm_offset, shl_sm_parm_ccyy_mm_off...
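shl_pm's internals are not shown here, so the following is an illustration only: a pure-Python stand-in for the simulated per-second loop above, where a fake price feed plays the role of the CSV data and a naive last-increment extrapolation plays the role of shl_pm's one-step-forward forecast. All names below (`fake_price_feed`, `stub_predict_one_step`) are hypothetical.

```python
# Simulated per-second prices; a linear ramp stands in for the real CSV feed.
fake_price_feed = [86000 + 10 * i for i in range(60)]

def stub_predict_one_step(history):
    # Naive one-step-forward forecast: last price plus the last observed increment.
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

history, forecasts = [], []
offset = 5  # plays the role of shl_sm_parm_ccyy_mm_offset
for i in range(offset, offset + 50):
    history.append(fake_price_feed[i])                # "receive" this second's price
    forecasts.append(stub_predict_one_step(history))  # forecast the next second

print(forecasts[-1])
```

The real shl_pm presumably does far more than this extrapolation; the point is only the structure of the loop: feed one observation per tick, emit one forecast per tick.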
MISC - Validation
%matplotlib inline import matplotlib.pyplot as plt shl_data_pm_k_step_local = shl_pm.shl_data_pm_k_step.copy() shl_data_pm_k_step_local.index = shl_data_pm_k_step_local.index + 1 shl_data_pm_k_step_local # bid is predicted bid-price from shl_pm plt.figure(figsize=(12,6)) plt.plot(shl_pm.shl_data_pm_k_step['f_current_...
<h2>Reproducible Research</h2>
%%python import os os.system('python -V') os.system('python ../helper_modules/Package_Versions.py') SEED = 7 np.random.seed(SEED) CURR_DIR = os.getcwd() DATA_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/ALL_IMGS/' AUG_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/AUG_DIAGN...
src/models/JN_BC_Threshold_Diagnosis.ipynb
jnarhan/Breast_Cancer
mit
Class Balancing

Here I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation until there is a balance among the categories:
datagen = ImageDataGenerator(rotation_range=5, width_shift_range=.01, height_shift_range=0.01, data_format='channels_first') X_data, Y_data = bc.balanceViaSmote(cls_cnts, meta_data, DATA_DIR, AUG_DIR, cats, datagen, X_data, Y_data, seed=SEED, verbose=Tr...
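As a rough illustration of the balancing idea — not `bc.balanceViaSmote` itself — the sketch below balances classes by resampling the minority classes with replacement until every class matches the majority count. Real SMOTE instead synthesizes new samples by interpolating between nearest neighbors; here plain duplication is a simplified stand-in.

```python
import random

def balance_by_oversampling(samples, labels, seed=7):
    """Grow each under-represented class by resampling it with replacement."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        grown = list(group)
        while len(grown) < target:
            grown.append(rng.choice(group))  # duplicate; SMOTE would interpolate
        out_samples.extend(grown)
        out_labels.extend([y] * target)
    return out_samples, out_labels

X, Y = balance_by_oversampling([1, 2, 3, 10, 11], ["a", "a", "a", "b", "b"])
print(sorted(set(Y)), len(X))
```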
Create the Training and Test Datasets
X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data, test_size=0.20, # deviation given small data set random_state=SEED, stratify=zip(*Y_data)[0]) p...
<h2>Support Vector Machine Model</h2>
X_train_svm = X_train.reshape( (X_train.shape[0], -1)) X_test_svm = X_test.reshape( (X_test.shape[0], -1)) SVM_model = SVC(gamma=0.001) SVM_model.fit( X_train_svm, Y_train) predictOutput = SVM_model.predict(X_test_svm) svm_acc = metrics.accuracy_score(y_true=Y_test, y_pred=predictOutput) print 'SVM Accuracy: {: >7...
<h2>CNN Modelling Using VGG16 in Transfer Learning</h2>
def VGG_Prep(img_data): """ :param img_data: training or test images of shape [#images, height, width] :return: the array transformed to the correct shape for the VGG network shape = [#images, height, width, 3] transforms to rgb and reshapes """ images = np.zeros([len(img_data), img_...
<h2>Core CNN Modelling</h2> Prep and package the data for Keras processing:
data = [X_train, X_test, Y_train, Y_test] X_train, X_test, Y_train, Y_test = bc.prep_data(data, cats) data = [X_train, X_test, Y_train, Y_test] print X_train.shape print X_test.shape print Y_train.shape print Y_test.shape
Heavy Regularization
def diff_model_v7_reg(numClasses, input_shape=(3, 150,150), add_noise=False, noise=0.01, verbose=False): model = Sequential() if (add_noise): model.add( GaussianNoise(noise, input_shape=input_shape)) model.add( Convolution2D(filters=16, kernel_size=(5,5), ...
Licensing Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License ...
import radnlp.rules as rules import radnlp.schema as schema import radnlp.utils as utils import radnlp.classifier as classifier import radnlp.split as split from IPython.display import clear_output, display, HTML from IPython.html.widgets import interact, interactive, fixed import io from IPython.html import widgets #...
notebooks/radnlp_demo.ipynb
chapmanbe/RadNLP
apache-2.0
Example Data

Below are two example radiology reports pulled from the MIMIC2 demo data set.
reports = ["""1. Pulmonary embolism with filling defects noted within the upper and lower lobar branches of the right main pulmonary artery. 2. Bilateral pleural effusions, greater on the left. 3. Ascites. 4. There is edema of the gallbladder wall, without any evidence of distention, intra...
Define locations of knowledge, schema, and rules files
def getOptions(): """Generates arguments for specifying database and other parameters""" options = {} options['lexical_kb'] = ["https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/lexical_kb_04292013.tsv", "https://raw.githubusercontent.com/chapmanbe/pyConTextNLP...
Define report analysis

For every report we do two steps:

1. Markup all the sentences in the report based on the provided targets and modifiers.
2. Given this markup, apply our rules and schema to generate a document classification.

radnlp provides functions to do both of these steps: radnlp.utils.mark_report takes lists o...
def analyze_report(report, modifiers, targets, rules, schema): """ given an individual radiology report, creates a pyConTextGraph object that contains the context markup report: a text string containing the radiology reports """ markup = utils.mark_report(split.get_sentences(report), ...
radnlp.classifier.classify_document_targets returns a dictionary with keys equal to the target category (e.g. pulmonary_embolism) and the values a 3-tuple with the following values: The schema category (e.g. 8 or 2). The XML representation of the maximal schema node A list (usually empty (not really implemented yet)) ...
for key, value in rslt_0.items(): print(("%s"%key).center(42,"-")) for v in value: print(v) rslt_1 = main(reports[1]) for key, value in rslt_1.items(): print(("%s"%key).center(42,"-")) for v in value: print(v)
Negative Report

For the third report I simply rewrote one of the findings to be negative for PE. We now see a change in the schema classification.
rslt_2 = main(reports[2]) for key, value in rslt_2.items(): print(("%s"%key).center(42,"-")) for v in value: print(v) keys = list(pec.markups.keys()) keys.sort() pec.reports.insert(pec.reports.columns.get_loc(u'markup')+1, "ConText Coding", [codingKey.get(pec.mar...
🔪 Pure functions

JAX transformations and compilation are designed to work only on Python functions that are functionally pure: all the input data is passed through the function parameters, and all the results are output through the function results. A pure function will always return the same result if invoked with the sam...
def impure_print_side_effect(x): print("Executing function") # This is a side-effect return x # The side-effects appear during the first run print ("First call: ", jit(impure_print_side_effect)(4.)) # Subsequent runs with parameters of same type and shape may not show the side-effect # This is because JAX now...
docs/notebooks/Common_Gotchas_in_JAX.ipynb
google/jax
apache-2.0
A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state:
def pure_uses_internal_state(x): state = dict(even=0, odd=0) for i in range(10): state['even' if i % 2 == 0 else 'odd'] += x return state['even'] + state['odd'] print(jit(pure_uses_internal_state)(5.))
It is not recommended to use iterators in any JAX function you want to jit or in any control-flow primitive. The reason is that an iterator is a Python object which introduces state in order to retrieve the next element, making it incompatible with the JAX functional programming model. In the code below, there are some examp...
import jax.numpy as jnp import jax.lax as lax from jax import make_jaxpr # lax.fori_loop array = jnp.arange(10) print(lax.fori_loop(0, 10, lambda i,x: x+array[i], 0)) # expected result 45 iterator = iter(range(10)) print(lax.fori_loop(0, 10, lambda i,x: x+next(iterator), 0)) # unexpected result 0 # lax.scan def func1...
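The safe pattern can be seen with the pure-Python equivalent of `lax.fori_loop` (no JAX needed): carry data into the loop body through an array indexed by the loop counter, never through a stateful iterator whose position the tracer cannot see.

```python
# Pure-Python equivalent of lax.fori_loop, used here only to illustrate
# the safe pattern of indexing with the loop counter.
def fori_loop(start, stop, body_fun, init_val):
    val = init_val
    for i in range(start, stop):
        val = body_fun(i, val)
    return val

array = list(range(10))

# Good: the body indexes into the array with the loop counter i.
total = fori_loop(0, 10, lambda i, x: x + array[i], 0)
print(total)  # 45

# Bad (under jit): a body like `lambda i, x: x + next(iterator)` hides state
# from the tracer. In plain Python it happens to work; under JAX tracing it
# produces the wrong result, as the cell above demonstrates.
```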
🔪 In-Place Updates

In Numpy you're used to doing this:
numpy_array = np.zeros((3,3), dtype=np.float32) print("original array:") print(numpy_array) # In place, mutating update numpy_array[1, :] = 1.0 print("updated array:") print(numpy_array)
If we try to update a JAX device array in-place, however, we get an error! (☉_☉)
jax_array = jnp.zeros((3,3), dtype=jnp.float32) # In place update of JAX's array will yield an error! try: jax_array[1, :] = 1.0 except Exception as e: print("Exception {}".format(e))
Allowing mutation of variables in-place makes program analysis and transformation difficult. JAX requires that programs are pure functions. Instead, JAX offers a functional array update using the .at property on JAX arrays.

️⚠️ Inside jit'd code and lax.while_loop or lax.fori_loop, the size of slices can't be functions ...
updated_array = jax_array.at[1, :].set(1.0) print("updated array:\n", updated_array)
JAX's array update functions, unlike their NumPy versions, operate out-of-place. That is, the updated array is returned as a new array and the original array is not modified by the update.
print("original array unchanged:\n", jax_array)
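The out-of-place semantics can be sketched in plain Python: `x.at[idx].set(v)` behaves like "copy, then write into the copy", so the original binding is never touched.

```python
# Plain-Python sketch of JAX's out-of-place update semantics.
def at_set(arr, idx, value):
    new = list(arr)   # copy the whole array...
    new[idx] = value  # ...and write only into the copy
    return new

original = [0.0, 0.0, 0.0]
updated = at_set(original, 1, 1.0)
print(original, updated)
```

In real JAX the visible copy is often elided: inside jit-compiled code the compiler can perform the update in place when the input buffer is not reused, as the next section notes.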
However, inside jit-compiled code, if the input value x of x.at[idx].set(y) is not reused, the compiler will optimize the array update to occur in-place. Array updates with other operations Indexed array updates are not limited simply to overwriting values. For example, we can perform indexed addition as follows:
print("original array:") jax_array = jnp.ones((5, 6)) print(jax_array) new_jax_array = jax_array.at[::2, 3:].add(7.) print("new array post-addition:") print(new_jax_array)
For more details on indexed array updates, see the documentation for the .at property.

🔪 Out-of-Bounds Indexing

In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this:
try: np.arange(10)[11] except Exception as e: print("Exception {}".format(e))
However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in NaN). When the indexing operation is an array index update (e.g. index_add or scatter-like...
jnp.arange(10)[11]
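For index retrieval specifically, JAX clamps a too-large index into range, which is why `jnp.arange(10)[11]` above returns the last element rather than raising. A plain-Python sketch of the clamping (negative indices follow NumPy's wrapping rules in JAX and are not modeled here):

```python
# Sketch of JAX's retrieval behavior for a too-large index: clamp to the
# last valid position instead of raising an IndexError.
def clamped_get(arr, idx):
    clamped = min(idx, len(arr) - 1)
    return arr[clamped]

print(clamped_get(list(range(10)), 11))  # 9, mirroring jnp.arange(10)[11]
```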
Note that due to this behavior for index retrieval, functions like jnp.nanargmin and jnp.nanargmax return -1 for slices consisting of NaNs whereas Numpy would throw an error. Note also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index up...
np.sum([1, 2, 3])
JAX departs from this, generally returning a helpful error:
try: jnp.sum([1, 2, 3]) except TypeError as e: print(f"TypeError: {e}")
This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect. For example, consider the following permissive version of jnp.sum that allows list inputs:
def permissive_sum(x): return jnp.sum(jnp.array(x)) x = list(range(10)) permissive_sum(x)
The output is what we would expect, but this hides potential performance issues under the hood. In JAX's tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the permissive_sum ...
make_jaxpr(permissive_sum)(x)
Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays. If you would like to pass a tuple or list to a JAX function, you can do so by...
jnp.sum(jnp.array(x))
🔪 Random Numbers

> If all scientific papers whose results are in doubt because of bad rand()s were to disappear from library shelves, there would be a gap on each shelf about as big as your fist. - Numerical Recipes

RNGs and State

You're used to stateful pseudorandom number generators (PRNGs) from numpy and other li...
print(np.random.random()) print(np.random.random()) print(np.random.random())
Underneath the hood, numpy uses the Mersenne Twister PRNG to power its pseudorandom functions. The PRNG has a period of $2^{19937}-1$ and at any point can be described by 624 32-bit unsigned ints and a position indicating how much of this "entropy" has been used up.
np.random.seed(0) rng_state = np.random.get_state() #print(rng_state) # --> ('MT19937', array([0, 1, 1812433255, 1900727105, 1208447044, # 2481403966, 4042607538, 337614300, ... 614 more numbers..., # 3048484911, 1796872496], dtype=uint32), 624, 0, 0.0)
This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, "consuming" 2 of the uint32s in the Mersenne twister state vector:
_ = np.random.uniform() rng_state = np.random.get_state() #print(rng_state) # --> ('MT19937', array([2443250962, 1093594115, 1878467924, # ..., 2648828502, 1678096082], dtype=uint32), 2, 0, 0.0) # Let's exhaust the entropy in this PRNG statevector for i in range(311): _ = np.random.uniform() rng_state = np.ra...
The problem with magic PRNG state is that it's hard to reason about how it's being used and updated across different threads, processes, and devices, and it's very easy to screw up when the details of entropy production and consumption are hidden from the end user. The Mersenne Twister PRNG is also known to have a numb...
from jax import random key = random.PRNGKey(0) key
JAX's random functions produce pseudorandom numbers from the PRNG state, but do not change the state! Reusing the same state will cause sadness and monotony, depriving the end user of lifegiving chaos:
print(random.normal(key, shape=(1,))) print(key) # No no no! print(random.normal(key, shape=(1,))) print(key)
Instead, we split the PRNG to get usable subkeys every time we need a new pseudorandom number:
print("old key", key) key, subkey = random.split(key) normal_pseudorandom = random.normal(subkey, shape=(1,)) print(" \---SPLIT --> new key ", key) print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
We propagate the key and make new subkeys whenever we need a new random number:
print("old key", key) key, subkey = random.split(key) normal_pseudorandom = random.normal(subkey, shape=(1,)) print(" \---SPLIT --> new key ", key) print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
We can generate more than one subkey at a time:
key, *subkeys = random.split(key, 4) for subkey in subkeys: print(random.normal(subkey, shape=(1,)))
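The key-threading discipline can be mimicked without JAX. The toy generator below keeps its state in an explicit integer key, derives children deterministically, and never mutates anything; it imitates only the *interface* of jax.random (split once per draw, same key in, same number out), not JAX's actual counter-based Threefry construction.

```python
import hashlib

def _derive(key, tag):
    # Deterministically hash (key, tag) down to a 64-bit integer.
    digest = hashlib.sha256(f"{key}:{tag}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def split(key, num=2):
    # Derive `num` independent child keys; the parent is left untouched.
    return [_derive(key, f"split{i}") for i in range(num)]

def uniform(key):
    # Same key in -> same number out, every time; the key is never consumed.
    return _derive(key, "uniform") / 2**64

key = 0
key, subkey = split(key)
print(uniform(subkey))
```

The usage mirrors the JAX idiom above: split, draw from the subkey, and carry the new key forward for the next draw.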
🔪 Control Flow

✔ python control_flow + autodiff ✔

If you just want to apply grad to your python functions, you can use regular python control-flow constructs with no problems, as if you were using Autograd (or Pytorch or TF Eager).
def f(x): if x < 3: return 3. * x ** 2 else: return -4 * x print(grad(f)(2.)) # ok! print(grad(f)(4.)) # ok!
python control flow + JIT

Using control flow with jit is more complicated, and by default it has more constraints. This works:
@jit def f(x): for i in range(3): x = 2 * x return x print(f(3))
So does this:
@jit def g(x): y = 0. for i in range(x.shape[0]): y = y + x[i] return y print(g(jnp.array([1., 2., 3.])))
But this doesn't, at least by default:
@jit def f(x): if x < 3: return 3. * x ** 2 else: return -4 * x # This will fail! try: f(2) except Exception as e: print("Exception {}".format(e))
What gives!? When we jit-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don't have to re-compile on each function evaluation. For example, if we evaluate an @jit function on the array j...
def f(x): if x < 3: return 3. * x ** 2 else: return -4 * x f = jit(f, static_argnums=(0,)) print(f(2.))
Here's another example, this time involving a loop:
def f(x, n): y = 0. for i in range(n): y = y + x[i] return y f = jit(f, static_argnums=(1,)) f(jnp.array([2., 3., 4.]), 2)
In effect, the loop gets statically unrolled. JAX can also trace at higher levels of abstraction, like Unshaped, but that's not currently the default for any transformation.

️⚠️ functions with argument-value dependent shapes

These control-flow issues also come up in a more subtle way: numerical functions we want to jit...
def example_fun(length, val): return jnp.ones((length,)) * val # un-jit'd works fine print(example_fun(5, 4)) bad_example_jit = jit(example_fun) # this will fail: try: print(bad_example_jit(10, 4)) except Exception as e: print("Exception {}".format(e)) # static_argnums tells JAX to recompile on changes at these ...
static_argnums can be handy if length in our example rarely changes, but it would be disastrous if it changed a lot! Lastly, if your function has global side-effects, JAX's tracer can cause weird things to happen. A common gotcha is trying to print arrays inside jit'd functions:
@jit def f(x): print(x) y = 2 * x print(y) return y f(2)
Structured control flow primitives

There are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that's traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives:

- lax.cond: differentiable
- lax.while_loop: fwd-...
from jax import lax operand = jnp.array([0.]) lax.cond(True, lambda x: x+1, lambda x: x-1, operand) # --> array([1.], dtype=float32) lax.cond(False, lambda x: x+1, lambda x: x-1, operand) # --> array([-1.], dtype=float32)
while_loop

python equivalent:

```python
def while_loop(cond_fun, body_fun, init_val):
    val = init_val
    while cond_fun(val):
        val = body_fun(val)
    return val
```
init_val = 0 cond_fun = lambda x: x<10 body_fun = lambda x: x+1 lax.while_loop(cond_fun, body_fun, init_val) # --> array(10, dtype=int32)
fori_loop

python equivalent:

```python
def fori_loop(start, stop, body_fun, init_val):
    val = init_val
    for i in range(start, stop):
        val = body_fun(i, val)
    return val
```
init_val = 0 start = 0 stop = 10 body_fun = lambda i,x: x+i lax.fori_loop(start, stop, body_fun, init_val) # --> array(45, dtype=int32)
Summary

$$
\begin{array}{r|rr}
\hline
\textrm{construct} & \textrm{jit} & \textrm{grad} \\
\hline
\textrm{if} & ❌ & ✔ \\
\textrm{for} & ✔ & ✔ \\
\textrm{while} & ✔ & ✔ \\
\textrm{lax.cond} & ✔ & ✔ \\
\textrm{lax.while\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.fori\_loop} & ✔ & \textrm{fwd} \\
\textrm{lax.scan} & ✔ & ✔ \\
\hline
\end{array}
$$

...
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64) x.dtype
To use double-precision numbers, you need to set the jax_enable_x64 configuration variable at startup. There are a few ways to do this:

1. You can enable 64-bit mode by setting the environment variable JAX_ENABLE_X64=True.
2. You can manually set the jax_enable_x64 configuration flag at startup: python # again, thi...
import jax.numpy as jnp
from jax import random

x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype
# --> dtype('float64')
docs/notebooks/Common_Gotchas_in_JAX.ipynb
google/jax
apache-2.0
Loading the different datasets:
- Publicly known addresses
- Features dataframe from the graph features generators
known = pd.read_csv('../data/known.csv')
rogues = pd.read_csv('../data/rogues.csv')
transactions = pd.read_csv('../data/edges.csv').drop('Unnamed: 0', axis=1)

# Dropping features and filling NaN with 0
df = pd.read_csv('../data/features_full.csv').drop('Unnamed: 0', axis=1).fillna(0)
df = df.set_index(['nodes'])

# build normalize values...
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
I - Clustering Nodes

<hr>

Exploring clustering methods on the node-features dataset.

A - k-means

First, a very simple k-means method.
# Define estimator / by default: n_clusters=6 and n_init=10
kmeans = KMeans(init='k-means++', n_clusters=6, n_init=10)
kmeans.fit(data)
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
1 - Parameters Optimization

a - Finding the best k

Code from http://www.slideshare.net/SarahGuido/kmeans-clustering-with-scikitlearn#notes-panel
%%time
# Determine your k range
k_range = range(1, 14)

# Fit the k-means model for each n_clusters = k
k_means_var = [KMeans(n_clusters=k).fit(data) for k in k_range]

# Pull out the centroids for each model
centroids = [X.cluster_centers_ for X in k_means_var]

%%time
# Calculate the Euclidean distance from each point to ...
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
Difficult to find an elbow criterion.

b - Other heuristic method

$$k=\sqrt{\frac{n}{2}}$$
np.sqrt(data.shape[0]/2)
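As a quick sanity check of this rule of thumb (using a hypothetical n rather than the notebook's data):

```python
import numpy as np

# Heuristic k = sqrt(n/2) for a hypothetical dataset of n = 5000 points
n = 5000
k = int(round(np.sqrt(n / 2)))
print(k)  # 50
```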
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
-> Weird

c - Silhouette metrics for supervised?

2 - Visualize with PCA reduction

Code from scikit-learn.
##############################################################################
# Generate sample data
batch_size = 10
# centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = 6
# X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)
X = PCA(n_components=2).fit_transform(data)
######################...
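The 2-D projection above uses scikit-learn's PCA; as a dependency-light sketch, the same projection can be computed from the SVD of the centered data (random data here stands in for the real feature matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # stand-in for the feature matrix

Xc = X - X.mean(axis=0)         # center the columns
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T              # project onto the first two principal axes
print(X2.shape)  # (100, 2)
```

The rows of Vt are the principal directions, sorted by decreasing singular value, so the first projected column carries the most variance.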
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
B - Mini batch

II - Outlier Detection

<hr>

Objectives:
- Perform outlier detection on node data
- Test different methods (with perf metrics)
- Plot outlier detection
- Tag transactions

Explain: Mahalanobis distance
X = PCA(n_components=2).fit_transform(data)

# compare estimators learnt from the full data set with true parameters
emp_cov = EmpiricalCovariance().fit(X)
robust_cov = MinCovDet().fit(X)

###############################################################################
# Display results
fig = plt.figure(figsize=(15, 8))...
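The comparison of empirical and robust covariance estimators rests on the Mahalanobis distance; a minimal NumPy sketch of that distance, using the empirical mean and covariance of synthetic stand-in data, is:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))    # stand-in for the projected data

mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
inv_cov = np.linalg.inv(cov)

diff = X - mu
# d_i = sqrt((x_i - mu)^T  Sigma^{-1}  (x_i - mu))
d = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
print(d.shape)
```

MinCovDet would replace mu and cov with robust estimates that down-weight outliers, but the distance formula itself is unchanged.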
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
<hr> III - Look at the clusters
df.head(3)

k_means = KMeans(init='random', n_clusters=6, n_init=10, random_state=2)
clusters = k_means.fit_predict(data)
df['clusters'] = clusters

df.groupby('clusters').count()

tagged = pd.merge(known, df, left_on='id', how='inner', right_index=True)
tagged.groupby('clusters').count().apply(lambda x: 100*x/float(x.sum...
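The groupby/percentage step above can be sketched without pandas, using hypothetical cluster labels for a handful of tagged addresses:

```python
from collections import Counter

# Hypothetical cluster labels for tagged addresses
labels = [1, 1, 0, 2, 1, 0, 1, 2]

counts = Counter(labels)
total = sum(counts.values())
share = {c: 100.0 * n / total for c, n in sorted(counts.items())}
print(share)  # {0: 25.0, 1: 50.0, 2: 25.0}
```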
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
Rogues and tagged addresses are overrepresented in cluster 1.
df.groupby('clusters').mean()
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
IV - Tag transactions
transactions.head(4)
df.head(20)

# write function
def get_cluster(node, df):
    return df.loc[node].clusters

get_cluster('0x037dd056e7fdbd641db5b6bea2a8780a83fae180', df)
notebooks/chain-clustering.ipynb
jhamilius/chain
mit
Set parameters:
base_depth = 22.0  # depth of aquifer base below ground level, m
initial_water_table_depth = 2.0  # starting depth to water table, m
dx = 100.0  # cell width, m
pumping_rate = 0.001  # pumping rate, m3/s
well_locations = [800, 1200]
K = 0.001  # hydraulic conductivity, (m/s)
n = 0.2  # porosity, (-)
dt = 3600.0  # time...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Create a grid and add fields:
# Raster grid with closed boundaries
# boundaries = {'top': 'closed', 'bottom': 'closed', 'right': 'closed', 'left': 'closed'}
grid = RasterModelGrid((41, 41), xy_spacing=dx)  # , bc=boundaries)

# Topographic elevation field (meters)
elev = grid.add_zeros("topographic__elevation", at="node")

# Field for the elevation of t...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Instantiate the component (note use of an array/field instead of a scalar constant for recharge_rate):
gdp = GroundwaterDupuitPercolator(
    grid,
    hydraulic_conductivity=K,
    porosity=n,
    recharge_rate=recharge,
    regularization_f=0.01,
)
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Define a couple of handy functions to run the model for a day or a year:
def run_for_one_day(gdp, dt):
    num_iter = int(3600.0 * 24 / dt)
    for _ in range(num_iter):
        gdp.run_one_step(dt)


def run_for_one_year(gdp, dt):
    num_iter = int(365.25 * 3600.0 * 24 / dt)
    for _ in range(num_iter):
        gdp.run_one_step(dt)
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Run for a year and plot the water table:
run_for_one_year(gdp, dt)
imshow_grid(grid, wt, colorbar_label="Water table elevation (m)")
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Aside: calculating a pumping rate in terms of recharge The pumping rate at a particular grid cell (in volume per time, representing pumping from a well at that location) needs to be given in terms of a recharge rate (depth of water equivalent per time) in a given grid cell. Suppose for example you're pumping 16 gallons...
Qp = 16.0 * 0.00378541 / 60.0
print(Qp)
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
...equals about 0.001 m$^3$/s. That's $Q_p$. The corresponding negative recharge in a cell of dimensions $\Delta x$ by $\Delta x$ would be $R_p = Q_p / \Delta x^2$
Rp = Qp / (dx * dx)
print(Rp)
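The conversion above can be wrapped in a small helper for reuse (hypothetical name `gpm_to_m3s`; assumes 1 US gallon ≈ 0.00378541 m³):

```python
def gpm_to_m3s(gpm, gallon_in_m3=0.00378541):
    """Convert a pumping rate in US gallons/minute to m^3/s."""
    return gpm * gallon_in_m3 / 60.0

Qp = gpm_to_m3s(16.0)
print(round(Qp, 6))  # 0.001009
```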
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
A very simple ABM with farmers who drill wells into the aquifer For the sake of illustration, our ABM will be extremely simple. There are $N$ farmers, at random locations, who each pump at a rate $Q_p$ as long as the water table lies above the depth of their well, $d_w$. Once the water table drops below their well, the...
try:
    from mesa import Model
except ModuleNotFoundError:
    print(
        """
        Mesa needs to be installed in order to run this notebook.

        Normally Mesa should be pre-installed alongside the Landlab notebook collection.
        But it appears that Mesa is not already installed on the system on which you are running this no...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Defining the ABM In Mesa, an ABM is created using a class for each Agent and a class for the Model. Here's the Agent class (a Farmer). Farmers have a grid location and an attribute: whether they are actively pumping their well or not. They also have a well depth: the depth to the bottom of their well. Their action cons...
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation


class FarmerAgent(Agent):
    """An agent who pumps from a well if it's not dry."""

    def __init__(self, unique_id, model, well_depth=5.0):
        super().__init__(unique_id, model)
        self.pumping = True
        ...
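The pumping rule at the heart of the Farmer agent can be illustrated without Mesa; this dependency-free sketch (hypothetical class `SimpleFarmer`, not part of the notebook) keeps just the well-depth comparison:

```python
class SimpleFarmer:
    """Pump while the water table is above the bottom of the well (sketch, no Mesa)."""

    def __init__(self, well_depth=5.0):
        self.well_depth = well_depth
        self.pumping = True

    def step(self, depth_to_water_table):
        # The well runs dry once the water table drops below the well bottom
        self.pumping = depth_to_water_table < self.well_depth

farmer = SimpleFarmer(well_depth=5.0)
farmer.step(3.0)   # water table 3 m down, well 5 m deep -> still pumping
print(farmer.pumping)  # True
farmer.step(6.0)   # water table below the well bottom -> dry
print(farmer.pumping)  # False
```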
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Next, define the model class. The model will take as a parameter a reference to a 2D array (with the same dimensions as the grid) that contains the depth to water table at each grid location. This allows the Farmer agents to check whether their well has run dry.
class FarmerModel(Model):
    """A model with several agents on a grid."""

    def __init__(self, N, width, height, well_depth, depth_to_water_table):
        self.num_agents = N
        self.grid = MultiGrid(width, height, True)
        self.depth_to_water_table = depth_to_water_table
        self.schedule = RandomAc...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Setting up the Landlab grid, fields, and groundwater simulator
base_depth = 22.0  # depth of aquifer base below ground level, m
initial_water_table_depth = 2.8  # starting depth to water table, m
dx = 100.0  # cell width, m
pumping_rate = 0.004  # pumping rate, m3/s
well_depth = 3  # well depth, m
background_recharge = 0.002 / (365.25 * 24 * 3600)  # recharge rate, m/s
K = 0.001 ...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Set up the Farmer model
nc = grid.number_of_node_columns
nr = grid.number_of_node_rows
farmer_model = FarmerModel(
    num_agents, nc, nr, well_depth, depth_to_wt.reshape((nr, nc))
)
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Check the spatial distribution of wells:
import numpy as np


def get_well_count(model):
    well_count = np.zeros((nr, nc), dtype=int)
    pumping_well_count = np.zeros((nr, nc), dtype=int)
    for cell in model.grid.coord_iter():
        cell_content, x, y = cell
        well_count[x][y] = len(cell_content)
        for agent in cell_content:
            if ...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Set the initial recharge field
recharge[:] = -(pumping_rate / (dx * dx)) * p_well_count.flatten()
imshow_grid(grid, -recharge * 3600 * 24, colorbar_label="Pumping rate (m/day)")
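The sign convention here — pumping enters the model as negative recharge — can be checked on a tiny synthetic count grid (the 3×3 array and values below are illustrative only):

```python
import numpy as np

pumping_rate = 0.001            # m^3/s per well, as in the setup above
dx = 100.0                      # cell width, m
well_count = np.zeros((3, 3))
well_count[1, 1] = 2            # two wells in the center cell

# Each well removes pumping_rate / dx^2 of water-depth equivalent per second
recharge = -(pumping_rate / (dx * dx)) * well_count
print(recharge[1, 1])
```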
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Run the model
for i in range(run_duration_yrs):
    # Run the groundwater simulator for one year
    run_for_one_year(gdp, dt)

    # Update the depth to water table
    depth_to_wt[:] = elev - wt

    # Run the farmer model
    farmer_model.step()

    # Count the number of pumping wells
    well_count, pumping_well_count = get_we...
notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb
landlab/landlab
mit
Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$: $$ y = m x + b + N(0,\sigma^2) $$ Be careful about the sigma=0.0 case.
def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]

    Parameters
    ----------
    m : float
        The slope of the line.
    b : float
        The y-intercept of the line.
    sigma : float
        The standard deviation of the y direction normal distr...
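A minimal NumPy sketch of the function described, handling the sigma == 0.0 case explicitly as the prompt warns (one possible solution, not the assignment's reference answer):

```python
import numpy as np

def random_line(m, b, sigma, size=10):
    """Return x, y for y = m*x + b + N(0, sigma**2) with x in [-1.0, 1.0] (sketch)."""
    x = np.linspace(-1.0, 1.0, size)
    if sigma == 0.0:
        noise = np.zeros(size)              # degenerate case: no noise
    else:
        noise = np.random.normal(0.0, sigma, size)
    return x, m * x + b + noise

x, y = random_line(2.0, 1.0, 0.0, size=5)
print(y)  # [-1.  0.  1.  2.  3.]
```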
assignments/assignment05/InteractEx04.ipynb
joshnsolomon/phys202-2015-work
mit
Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function: Make the marker color settable through a color keyword argument with a default of red. Display the range $x=[-1.1,1.1]$...
def ticks_out(ax):
    """Move the ticks to the outside of the box."""
    ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
    ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')


def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, i...
assignments/assignment05/InteractEx04.ipynb
joshnsolomon/phys202-2015-work
mit
Use interact to explore the plot_random_line function using: m: a float valued slider from -10.0 to 10.0 with steps of 0.1. b: a float valued slider from -5.0 to 5.0 with steps of 0.1. sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01. size: an int valued slider from 10 to 100 with steps of 10. color: a ...
interact(plot_random_line, m=(-10.0, 10.0, 0.1), b=(-5.0, 5.0, 0.1), sigma=(0.0, 5.0, 0.01), size=(10, 100, 10), color=['red', 'green', 'blue']);

assert True  # use this cell to grade the plot_random_line interact
assignments/assignment05/InteractEx04.ipynb
joshnsolomon/phys202-2015-work
mit
this matrix has $\mathcal{O}(1)$ elements in a row, therefore it is sparse. The finite element method is also likely to give you a system with a sparse matrix.

How to store a sparse matrix

Coordinate format (coo): (i, j, value), i.e. store two integer arrays and one real array. Easy to add elements. But how to multiply a ma...
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
from scipy.sparse import csc_matrix, csr_matrix, coo_matrix, lil_matrix

A = csr_matrix((10, 10))  # note: a shape tuple, not [10, 10], gives an empty 10x10 matrix
B = lil_matrix((10, 10))
A[0, 0] = 1  # inefficient for CSR: changes the sparsity structure
# print(A)
B[0, 0] = 1  # LIL is designed for incremental construction
# print(B)

import numpy as np
import scipy as sp
import scipy.sparse
import scipy.s...
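To answer the "how to multiply" question concretely, here is a pure-Python sketch (no SciPy) of converting COO triplets to CSR storage and doing a CSR matrix-vector product, on a 3×3 example with four nonzeros:

```python
# COO triplets for the 3x3 matrix
# [[1, 0, 2],
#  [0, 3, 0],
#  [0, 0, 4]]
rows = [0, 0, 1, 2]
cols = [0, 2, 1, 2]
vals = [1.0, 2.0, 3.0, 4.0]
n_rows = 3

# CSR: column indices and values sorted by row, plus row pointers (indptr)
order = sorted(range(len(vals)), key=lambda k: (rows[k], cols[k]))
indices = [cols[k] for k in order]
data = [vals[k] for k in order]
indptr = [0] * (n_rows + 1)
for r in rows:
    indptr[r + 1] += 1
for i in range(n_rows):
    indptr[i + 1] += indptr[i]

def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a CSR matrix: one pass over the stored nonzeros."""
    return [
        sum(data[k] * x[indices[k]] for k in range(indptr[i], indptr[i + 1]))
        for i in range(len(indptr) - 1)
    ]

print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 4.0]
```

This is why CSR multiplies fast while COO is the format that is easy to append to — exactly the trade-off the text describes.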
lecture-7.ipynb
oseledets/fastpde
cc0-1.0