# Show image
plt.imshow(image_enhanced, cmap='gray')
plt.axis("off")
plt.show()
machine-learning/enhance_contrast_of_greyscale_image.ipynb
tpin3694/tpin3694.github.io
mit
If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage. We can also display the image in the notebook:
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
plt.savefig("figures/cluster_image.png")
examples/XrayImage/FirstLook.ipynb
enoordeh/StatisticalMethods
gpl-2.0
Exercise
What is going on in this image? Make a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes' time. Just to prove that images really are arrays of numbers:
im[350:359,350:359]
index = np.unravel_index(im.argmax(), im.shape)
print("image dimensions:", im.shape)
print("location of maximum pixel value:", index)
print("maximum pixel value: ", im[index])
examples/XrayImage/FirstLook.ipynb
enoordeh/StatisticalMethods
gpl-2.0
A full adder has three single-bit inputs, and returns the sum and the carry. The sum is the exclusive or of the 3 bits; the carry is 1 if any two of the input bits are 1. Here is a schematic of a full adder circuit (from logisim). <img src="images/full_adder_logisim.png" width="500"/> We start by defining a magma combinational function.
@m.circuit.combinational
def full_adder(A: m.Bit, B: m.Bit, C: m.Bit) -> (m.Bit, m.Bit):
    return A ^ B ^ C, A & B | B & C | C & A  # sum, carry
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
We can test our combinational function to verify that our implementation behaves as expected using fault. We'll use the fault.PythonTester, which will simulate the circuit using magma's Python simulator.
import fault

tester = fault.PythonTester(full_adder)
assert tester(1, 0, 0) == (1, 0), "Failed"
assert tester(0, 1, 0) == (1, 0), "Failed"
assert tester(1, 1, 0) == (0, 1), "Failed"
assert tester(1, 0, 1) == (0, 1), "Failed"
assert tester(1, 1, 1) == (1, 1), "Failed"
print("Success!")
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
combinational functions are polymorphic over Python and magma types. If the function is called with magma values, it will produce a circuit instance, wire up the inputs, and return references to the outputs. Otherwise, it will invoke the function in Python. For example, we can use the Python function to verify the results of the simulator.
assert tester(1, 0, 0) == full_adder(1, 0, 0), "Failed"
assert tester(0, 1, 0) == full_adder(0, 1, 0), "Failed"
assert tester(1, 1, 0) == full_adder(1, 1, 0), "Failed"
assert tester(1, 0, 1) == full_adder(1, 0, 1), "Failed"
assert tester(1, 1, 1) == full_adder(1, 1, 1), "Failed"
print("Success!")
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
Circuits
Now that we have an implementation of full_adder as a combinational function, we'll use it to construct a magma Circuit. A Circuit in magma corresponds to a module in verilog. This example shows using the combinational function inside a circuit definition, as opposed to using the Python implementation shown above.
class FullAdder(m.Circuit):
    io = m.IO(I0=m.In(m.Bit), I1=m.In(m.Bit), CIN=m.In(m.Bit),
              O=m.Out(m.Bit), COUT=m.Out(m.Bit))
    O, COUT = full_adder(io.I0, io.I1, io.CIN)
    io.O @= O
    io.COUT @= COUT
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
First, notice that the FullAdder is a subclass of Circuit. All magma circuits are classes in python. Second, the function IO creates the interface to the circuit. The arguments to IO are keyword arguments. The key is the name of the argument in the circuit, and the value is its type. In this circuit, all the inputs and outputs are of type m.Bit.
print(repr(FullAdder))
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
We see that it has created an instance of the full_adder combinational function and wired up the interface. We can also inspect the contents of the full_adder circuit definition. Notice that it has lowered the Python operators into a structural representation of the primitive logic operations.
print(repr(full_adder.circuit_definition))
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
We can also inspect the code generated by the m.circuit.combinational decorator by looking in the .magma directory for a file named .magma/full_adder.py. When using m.circuit.combinational, magma will generate a file matching the name of the decorated function. You'll notice that the generated code introduces an extr...
with open(".magma/full_adder.py") as f: print(f.read())
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
In the code above, a mux is imported and named phi. If the combinational circuit contains any if-then-else constructs, they will be transformed into muxes. Note also the m.wire function. m.wire(O0, io.I0) is equivalent to io.O0 @= O0.
Staged testing with Fault
fault is a python package for testing magma circuits. By d...
import logging
logging.basicConfig(level=logging.INFO)

import fault
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
Earlier in the notebook, we showed an example using fault.PythonTester to simulate a circuit. This uses an interactive programming model where test actions are immediately dispatched to the underlying simulator (which is why we can perform assertions on the simulation values in Python). fault also provides a staged metaprogramming environment.
tester = fault.Tester(FullAdder)
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
An instance of a Tester has an attribute .circuit that enables the user to record test actions. For example, inputs to a circuit can be poked by setting the attribute corresponding to the input port name.
tester.circuit.I0 = 1
tester.circuit.I1 = 1
tester.circuit.CIN = 1
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
fault's default Tester provides the semantics of a cycle accurate simulator, so, unlike verilog, pokes do not create events that trigger computation. Instead, these poke values are staged, and the propagation of their effect occurs when the user calls the eval action.
tester.eval()
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
To assert that the output of the circuit is equal to a value, we use the expect method that is defined on the attributes corresponding to circuit output ports.
tester.circuit.O.expect(1)
tester.circuit.COUT.expect(1)
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
Because fault is a staged programming environment, the above actions are not executed until we have advanced to the next stage. In the first stage, the user records test actions (e.g. poke, eval, expect). In the second stage, the test is compiled and run using a target runtime. Here's an example of running the test using verilator.
# compile_and_run throws an exception if the test fails
tester.compile_and_run("verilator")
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
The tester also provides the same convenient __call__ interface we saw before.
O, COUT = tester(1, 0, 0)
tester.expect(O, 1)
tester.expect(COUT, 0)
tester.compile_and_run("verilator")
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
Generate Verilog
Magma's default compiler will generate verilog using CoreIR.
m.compile("build/FullAdder", FullAdder, inline=True) %cat build/FullAdder.v
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
Generate CoreIR
We can also inspect the intermediate CoreIR used in the generation process.
%cat build/FullAdder.json
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
Here's an example of running a CoreIR pass on the intermediate representation.
!coreir -i build/FullAdder.json -p instancecount
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
As the threshold increases, we make fewer FP errors and more FN errors, which is why one of the curves rises while the other falls. From such a plot you can pick an optimal threshold value at which precision and recall are both acceptable. If no such threshold can be found, a different algorithm needs to be trained. Note that acceptable...
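To make the threshold trade-off concrete, here is a small hedged illustration on synthetic labels and scores (not the assignment's actual/predicted vectors): raising the threshold T increases precision and decreases recall.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

actual = np.array([0, 0, 1, 1, 1])
scores = np.array([0.2, 0.6, 0.4, 0.7, 0.9])
for T in (0.3, 0.5, 0.8):
    pred = scores > T
    # precision rises and recall falls as T grows
    print(T, precision_score(actual, pred), recall_score(actual, pred))
```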
############### Programming assignment: problem 1 ###############
T = 0.65
for _actual, _predicted in zip([actual_1, actual_10, actual_11],
                               [predicted_1, predicted_10, predicted_11]):
    print('Precision: %s' % precision_score(_actual, _predicted > T))
    print('Recall: %s\n' % recall_score(_actual, _predicted > T))
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
The F1 metric in the two last cases, where one of the paired metrics equals 1, is significantly smaller than in the first, balanced case. <font color="green" size=5>Programming assignment: problem 2. </font> Precision and recall are affected both by the character of the probability vector and by the chosen threshold. For the same pairs (actual, predict...
############### Programming assignment: problem 2 ###############
ks = np.zeros(3)
idexes = np.empty(3)
for threshold in np.arange(11):
    T = threshold * 0.1
    for actual, predicted, idx in zip([actual_1, actual_10, actual_11],
                                      [predicted_1 > T, predicted_10 > T, predicted_11 ...
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
Like the previous metrics, log_loss distinguishes the ideal, the typical, and the bad case well. Note, however, that the magnitude is rather hard to interpret: the metric never reaches zero and has no upper bound. Therefore, even for an ideal algorithm, looking at a single log_loss value alone, it is impossible to ...
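A small hedged illustration of that interpretability point (toy vectors, not the assignment data): even near-perfect probabilities give a small positive log_loss, and there is no fixed scale to compare it against.

```python
from sklearn.metrics import log_loss

# near-perfect predictions: log_loss is small but not zero
print(log_loss([0, 1], [0.01, 0.99]))  # ~0.01
# uninformative predictions: log_loss is ~0.693 (= ln 2)
print(log_loss([0, 1], [0.5, 0.5]))
```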
############### Programming assignment: problem 3 ##############
ans = []

def modified_log(actual, predicted):
    return - np.sum(0.3 * actual * np.log(predicted) +
                    0.7 * (1 - actual) * np.log(1 - predicted)) / len(actual)

for _actual, _predicted in zip([actual_0, actual_1, actual_2, actual_0r, actual_1r, actual_10, ...
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
The more objects in the sample, the smoother the curve looks (although in reality it is still a step function). As expected, the curves of all the ideal algorithms pass through the upper-left corner. The first plot also shows a typical ROC curve (in practice they usually do not reach the "ideal" corner). AUC ...
############### Programming assignment: problem 4 ###############
ans = []
for actual, predicted in zip([actual_0, actual_1, actual_2, actual_0r, actual_1r, actual_10, actual_11],
                             [predicted_0, predicted_1, predicted_2, predicted_0r, predicted_1r, predicted_10, predicted_11]):
    fpr, tpr...
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
Exercise 1. a. $\Omega$ will be all the possible combinations we can have for 150 objects taking two different values, for example (0, 0, ..., 0), (1, 0, ..., 0), (0, 1, ..., 0), ..., (1, 1, ..., 0), ..., (1, 1, ..., 1). This sample space has size $2^{150}$. The random variable $X(\omega)$ will be the number of defective objects ...
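The binomial helper called in the next cell is not defined in this excerpt. A plausible sketch, assuming binomial(p, n, k) is meant to return the probability of exactly k successes in n independent trials with success probability p (here n = 23·22/2 pairs and k = 0 shared birthdays):

```python
from scipy.stats import binom

def binomial(p, n, k):
    # P(exactly k successes in n trials), success probability p
    return binom.pmf(k, n, p)
```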
p = 1. / 365
1 - np.sum(binomial(p, 23 * (22) / 2, 0))
UQ/assignment_3/Assignment 3.ipynb
LorenzoBi/courses
mit
Quick Start
# import data
data_fc, data_p_value = kinact.get_example_data()

# import prior knowledge
adj_matrix = kinact.get_kinase_targets()

print data_fc.head()
print
print data_p_value.head()

# Perform ksea using the Mean method
score, p_value = kinact.ksea.ksea_mean(data_fc=data_fc['5min'].dropna(), ...
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
1. Loading the data
In order to perform the described kinase enrichment analysis, we load the data into a Pandas DataFrame. Here, we use the data from <em>de Graaf et al., 2014</em> for demonstration of KSEA. The data is available as supplemental material to the article online under http://mcponline.org/content/13/9/24...
# Read data
data_raw = pd.read_csv('../kinact/data/deGraaf_2014_jurkat.csv', sep=',', header=0)

# Filter for those p-sites that were matched ambiguously
data_reduced = data_raw[~data_raw['Proteins'].str.contains(';')]

# Create identifier for each phosphorylation site, e.g. P06239_S59 for the Serine 59 in the protein ...
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
2. Import prior-knowledge kinase-substrate relationships from PhosphoSitePlus
In the following example, we use the data from the PhosphoSitePlus database, which can be downloaded here: http://www.phosphosite.org/staticDownloads.action. Note that the downloaded file contains a disclaimer at the top of the file, which has to be removed before loading.
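One hedged way to handle such a disclaimer when loading, assuming it occupies the first few lines (the exact count must be checked against the downloaded file):

```python
import pandas as pd

# skiprows=3 is an assumption; adjust to the actual length of the disclaimer
ks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\t', skiprows=3)
```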
# Read data
ks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\t')
# The data from the PhosphoSitePlus database is not provided as comma-separated value file (csv),
# but instead, a tab = \t delimits the individual cells

# Restrict the data on interactions in the organism of interest
ks_rel_human = ks_...
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
3. KSEA
3.1 Quick start for KSEA
Together with this tutorial, we provide an implementation of KSEA as custom Python functions. As an example, the use of the function for the dataset by de Graaf et al. could look like this.
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'],
                                        p_values=data_p_value['5min'],
                                        interactions=adj_matrix,
                                        )
print pd.DataFrame({'score': score, 'p_value': p_value}).head()

# Calculate the KSEA scores for all data with the kse...
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
In de Graaf et al., the Casein kinase II alpha (CSNK2A1) was associated (amongst others) with higher activity after prolonged stimulation with prostaglandin E2. Here, we plot the activity scores of CSNK2A1 for all three methods of KSEA, which are in good agreement.
kinase = 'CSNK2A1'

df_plot = pd.DataFrame({'mean': activity_mean.loc[kinase],
                        'delta': activity_delta.loc[kinase],
                        'mean_alt': activity_mean_alt.loc[kinase]})
df_plot['time [min]'] = [5, 10, 20, 30, 60]
df_plot = pd.melt(df_plot, id_vars='time [min]', var_name='method', val...
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
3.2. KSEA in detail
In the following, we show in detail the computations that are carried out inside the provided functions. Let us concentrate on a single condition (60 minutes after stimulation with prostaglandin E2) and a single kinase (CDK1).
data_condition = data_fc['60min'].copy()
p_values = data_p_value['60min']
kinase = 'CDK1'

substrates = adj_matrix[kinase].replace(0, np.nan).dropna().index
detected_p_sites = data_fc.index
intersect = list(set(substrates).intersection(detected_p_sites))
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
3.2.1. Mean method
mS = data_condition.loc[intersect].mean()
mP = data_fc.values.mean()
m = len(intersect)
delta = data_fc.values.std()
z_score = (mS - mP) * np.sqrt(m) * 1/delta

from scipy.stats import norm
p_value_mean = norm.sf(abs(z_score))
print mS, p_value_mean
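In equation form, the z-score computed above is

$$ z = \frac{(m_S - m_P)\,\sqrt{m}}{\delta}, $$

where $m_S$ is the mean fold change of the kinase's detected substrates, $m_P$ the mean over all phosphosites, $m$ the number of substrates in the intersection, and $\delta$ the standard deviation over all sites; the p-value is then the normal survival function of $|z|$.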
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
3.2.2. Alternative Mean method
cut_off = -np.log10(0.05)
set_alt = data_condition.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()
mS_alt = set_alt.mean()
z_score_alt = (mS_alt - mP) * np.sqrt(len(set_alt)) * 1/delta
p_value_mean_alt = norm.sf(abs(z_score_alt))
print mS_alt, p_value_mean_alt
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
3.2.3. Delta Method
cut_off = -np.log10(0.05)
score_delta = len(data_condition.loc[intersect].where((data_condition.loc[intersect] > 0) &
                                                      (p_values.loc[intersect] > cut_off)).dropna()) -\
              len(data_condition.loc[intersect].where((data_condition.loc[intersect] < 0) & ...
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
1. Basic Linear Model
LM = keras.models.Sequential([Dense(Num_Classes, input_shape=(784,))])
LM.compile(optimizer=SGD(lr=0.01), loss='mse')
# LM.compile(optimizer=RMSprop(lr=0.01), loss='mse')
FAI_old/lesson2/L2HW.ipynb
WNoxchi/Kaukasos
mit
2. 1-Layer Neural Network
3. Finetuned VGG16
import os, sys
sys.path.insert(1, os.path.join('..', 'utils'))
import Vgg16
FAI_old/lesson2/L2HW.ipynb
WNoxchi/Kaukasos
mit
Setting up the notebook's environment Install AI Platform Pipelines client library For AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.
AIP_CLIENT_WHEEL = "aiplatform_pipelines_client-0.1.0.caip20201123-py3-none-any.whl"
AIP_CLIENT_WHEEL_GCS_LOCATION = (
    f"gs://cloud-aiplatform-pipelines/releases/20201123/{AIP_CLIENT_WHEEL}"
)

!gsutil cp {AIP_CLIENT_WHEEL_GCS_LOCATION} {AIP_CLIENT_WHEEL}
%pip install {AIP_CLIENT_WHEEL}
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Import notebook dependencies
import logging

import tensorflow as tf
import tfx
from aiplatform.pipelines import client
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner

print("TFX Version: ", tfx.__version__)
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Configure GCP environment
If you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section, by running gcloud auth login in the Terminal window (which you can open via File > New in the menu). You only need to do this once per notebook instance. Set the following constants to the values reflecting your environment.
PROJECT_ID = "jk-mlops-dev" # <---CHANGE THIS PROJECT_NUMBER = "895222332033" # <---CHANGE THIS API_KEY = "AIzaSyBS_RiaK3liaVthTUD91XuPDKIbiwDFlV8" # <---CHANGE THIS USER = "user" # <---CHANGE THIS BUCKET_NAME = "jk-ann-staging" # <---CHANGE THIS VPC_NAME = "default" # <---CHANGE THIS IF USING A DIFFERENT VPC RE...
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Defining custom components
In this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML and ANN Service calls. The components are TFX Custom Python function components. Each component is created as a separate Python module. You also create a couple of helper modules that encapsul...
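As a rough, hedged sketch of what one of these Python-function components might look like (illustrative names and signature only, not the notebook's actual compute_pmi module; assumes the tfx.dsl.component.experimental decorator API):

```python
from tfx.dsl.component.experimental.annotations import Parameter
from tfx.dsl.component.experimental.decorators import component

@component
def log_dataset_name(project_id: Parameter[str], dataset: Parameter[str]):
    # a real component would issue BigQuery / BQML / ANN Service calls here
    print(f"would operate on {project_id}.{dataset}")
```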
component_folder = "bq_components" if tf.io.gfile.exists(component_folder): print("Removing older file") tf.io.gfile.rmtree(component_folder) print("Creating component folder") tf.io.gfile.mkdir(component_folder) %cd {component_folder}
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Creating a TFX pipeline
The pipeline automates the process of preparing item embeddings (in BigQuery), training a Matrix Factorization model (in BQML), and creating and deploying an ANN Service index. The pipeline has a simple sequential flow. The pipeline accepts a set of runtime parameters that define GCP environment...
import os

from compute_pmi import compute_pmi
from create_index import create_index
from deploy_index import deploy_index
from export_embeddings import export_embeddings
from extract_embeddings import extract_embeddings
from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner

# Only required for local run.
fro...
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Testing the pipeline locally
You will first run the pipeline locally using the Beam runner.
Clean the metadata and artifacts from the previous runs.
pipeline_root = f"/tmp/{PIPELINE_NAME}" local_mlmd_folder = "/tmp/mlmd" if tf.io.gfile.exists(pipeline_root): print("Removing previous artifacts...") tf.io.gfile.rmtree(pipeline_root) if tf.io.gfile.exists(local_mlmd_folder): print("Removing local mlmd SQLite...") tf.io.gfile.rmtree(local_mlmd_folder) ...
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Set pipeline parameters and create the pipeline
bq_dataset_name = "song_embeddings" index_display_name = "Song embeddings" deployed_index_id_prefix = "deployed_song_embeddings_" min_item_frequency = 15 max_group_size = 100 dimensions = 50 embeddings_gcs_location = f"gs://{BUCKET_NAME}/embeddings" metadata_connection_config = sqlite_metadata_connection_config( o...
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Inspect produced metadata During the execution of the pipeline, the inputs and outputs of each component have been tracked in ML Metadata.
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
    local_mlmd_folder, "metadata.sqlite"
)
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)
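With the store handle in place, a small hedged sketch of querying what was recorded (standard ml-metadata calls; the exact type and artifact names depend on the actual run):

```python
# list the artifact types and count the artifacts recorded by the local run
for artifact_type in store.get_artifact_types():
    print(artifact_type.name)
print(len(store.get_artifacts()), "artifacts recorded")
```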
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Set the parameters for AIPP execution and create the pipeline.
metadata_connection_config = None
pipeline_root = PIPELINE_ROOT

pipeline = ann_pipeline(
    pipeline_name=PIPELINE_NAME,
    pipeline_root=pipeline_root,
    metadata_connection_config=metadata_connection_config,
    project_id=PROJECT_ID,
    project_number=PROJECT_NUMBER,
    region=REGION,
    vpc_name=VPC_NAME,
    ...
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Compile the pipeline
config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(
    project_id=PROJECT_ID,
    display_name=PIPELINE_NAME,
    default_image="gcr.io/{}/caip-tfx-custom:{}".format(PROJECT_ID, USER),
)
runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(
    config=config, output_filename="pipeline.json"
)
runner.compile(pipe...
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Submit the pipeline run
aipp_client.create_run_from_job_spec("pipeline.json")
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Using the keyword class creates in memory an object with the name "ObjectCreator".
%%HTML <p style="color:red;font-size: 150%;">This object (the class) is itself capable of creating objects (the instances), and this is why it's a class.</p>
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
But still, it's an object, and therefore: you can assign it to a variable
object_creator_class = ObjectCreator
print(object_creator_class)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
you can copy it
from copy import copy

ObjectCreatorCopy = copy(ObjectCreator)
print(ObjectCreatorCopy)
print("copy ObjectCreatorCopy is not ObjectCreator: ", ObjectCreatorCopy is not ObjectCreator)
print("variable object_creator_class is ObjectCreator: ", object_creator_class is ObjectCreator)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
you can add attributes to it
print("ObjectCreator has an attribute 'new_attribute': ", hasattr(ObjectCreator, 'new_attribute')) ObjectCreator.new_attribute = 'foo' # you can add attributes to a class print("ObjectCreator has an attribute 'new_attribute': ", hasattr(ObjectCreator, 'new_attribute')) print("attribute 'new_attribute': ", ObjectCreat...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
you can pass it as a function parameter
def echo(o):
    print(o)

# you can pass a class as a parameter
print("return value of passing Object Creator to {}: ".format(echo), echo(ObjectCreator))

%%HTML
<p style="color:red;font-size: 150%;">Since classes are objects, you can create them on the fly, like any object.</p>

def get_class_by(name):
    class Fo...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
But it's not so dynamic, since you still have to write the whole class yourself. Since classes are objects, they must be generated by something. When you use the class keyword, Python creates this object automatically. But as with most things in Python, it gives you a way to do it manually. Remember the function type? ...
print(type(1)) print(type("1")) print(type(int)) print(type(ObjectCreator)) print(type(type))
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
Well, type has a completely different ability: it can also create classes on the fly. type can take the description of a class as parameters, and return a class.
classes = Foo, Bar = [type(name, (), {}) for name in ('Foo', 'Bar')]
for class_ in classes:
    pprint(class_)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
type accepts a dictionary to define the attributes of the class. So:
classes_with_attributes = Foo, Bar = [type(name, (), namespace)
                                      for name, namespace in zip(
                                          ('Foo', 'Bar'),
                                          (
                                              ...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
Eventually you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.
def an_added_function(self):
    return "I am an added function."

Foo.added = an_added_function
foo = Foo()
print(foo.added())
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
%%HTML <p style="color:red;font-size: 150%;">[Creating a class on the fly, dynamically] is what Python does when you use the keyword class, and it does so by using a metaclass.</p> %%HTML <p style="color:red;font-size: 150%;">Metaclasses are the 'stuff' that creates classes.</p>
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
You define classes in order to create objects, right? But we learned that Python classes are objects.
%%HTML <p style="color:red;font-size: 150%;">Well, metaclasses are what create these objects. They are the classes' classes.</p> %%HTML <p style="color:red;font-size: 150%;">Everything, and I mean everything, is an object in Python. That includes ints, strings, functions and classes. All of them are objects. And all...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
Changing to the blog post entitled Python 3 OOP Part 5—Metaclasses. "object, which inherits from nothing" reminds me of Eastern teachings of 'sunyata': emptiness, voidness, openness, nonexistence, thusness, etc.

```python
>>> a = 5
>>> type(a)
<class 'int'>
>>> a.__class__
<class 'int'>
>>> a.__class__.__bases__
(<class 'object'>,)
>>> object.__bases__
()
```
...
class MyType(type):
    pass

class MySpecialClass(metaclass=MyType):
    pass

msp = MySpecialClass()
type(msp)
type(MySpecialClass)
type(MyType)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
Metaclasses are a very advanced topic in Python, but they have many practical uses. For example, by means of a custom metaclass you may log any time a class is instantiated, which can be important for applications that need to keep memory usage low or have to monitor it.
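A minimal sketch of that logging idea (hypothetical class names, not from the post):

```python
import logging

class InstanceLogger(type):
    def __call__(cls, *args, **kwargs):
        # runs every time a class using this metaclass is instantiated
        logging.info("creating an instance of %s", cls.__name__)
        return super().__call__(*args, **kwargs)

class Tracked(metaclass=InstanceLogger):
    pass

t = Tracked()  # emits an INFO log record
```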
%%HTML <p style="color:red;font-size: 150%;">"Build a class"? This is a task for metaclasses. The following implementation comes from Python 3 Patterns, Recipes and Idioms.</p> class Singleton(type): instance = None def __call__(cls, *args, **kwargs): if not cls.instance: cls.instance = su...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
The constructor mechanism in Python is on the contrary very important, and it is implemented by two methods, instead of just one: __new__() and __init__().
%%HTML <p style="color:red;font-size: 150%;">The tasks of the two methods are very clear and distinct: __new__() shall perform actions needed when creating a new instance while __init__ deals with object initialization.</p> class MyClass: def __new__(cls, *args, **kwargs): obj = super().__new__(cls, *args...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
Subclassing int
Object creation is behaviour. For most classes it is enough to provide a different __init__ method, but for immutable classes one often has to provide a different __new__ method. In this subsection, as preparation for enumerated integers, we will start to code a subclass of int that behaves like bool. ...
class MyBool(int):
    def __repr__(self):
        return 'MyBool.' + ['False', 'True'][self]

t = MyBool(1)
t
bool(2) == 1
MyBool(2) == 1

%%HTML
<p style="color:red;font-size: 150%;">In many classes we use __init__ to mutate the newly constructed object, typically by storing or otherwise using the arguments to __i...
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
The solution to the problem is to use __new__. Here we will show that it works; exactly what happens will be explained elsewhere.
bool.__doc__

class NewBool(int):
    def __new__(cls, value):
        # coerce the value to bool before constructing the int
        return int.__new__(cls, bool(value))

y = NewBool(56)
y == 1
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
<b>Question 4: Do multiple languages influence the reviews of apps?</b>
multi_language = app.loc[app['multiple languages'] == 'Y']
sin_language = app.loc[app['multiple languages'] == 'N']

multi_language['overall rating'].plot(kind="density")
sin_language['overall rating'].plot(kind="density")
plt.xlabel('Overall Rating')
plt.legend(labels=['multiple languages', 'single language'], loc...
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
<p>First, the data set is split into two parts: apps with multiple languages and apps with a single language. Then density plots for the two subsets are made, and from the plots we can see that the overall rating of apps with multiple languages is generally higher than the overall rating of apps with a single language...
import scipy.stats

multi_language = list(multi_language['overall rating'])
sin_language = list(sin_language['overall rating'])
multiple = []
single = []
for each in multi_language:
    if each > 0:
        multiple.append(each)
for each in sin_language:
    if each > 0:
        single.append(each)
print(np.mean(mult...
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
<p>I perform a t-test here. We have two samples: apps with multiple languages and apps with a single language. So I want to test whether the mean overall ratings of these two samples are different.</p> <p>The null hypothesis is that the mean overall rating for apps with multiple languages and the mean overall ra...
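The t-test call itself is not shown in this excerpt; with scipy it could look like the sketch below (Welch's unequal-variance variant is an assumption):

```python
# two-sample t-test on the filtered rating lists
t_stat, p_val = scipy.stats.ttest_ind(multiple, single, equal_var=False)
print(t_stat, p_val)
```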
scipy.stats.f_oneway(multiple, single)
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
<p>I also perform a one-way ANOVA test here.</p> <p>The null hypothesis is that the mean overall rating for apps with multiple languages and the mean overall rating for apps with a single language are the same, and the alternative hypothesis is that the mean overall ratings of these two samples are not the same.</p> <p>From the result...
scipy.stats.kruskal(multiple, single)
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
<span> Let's parse </span>
from hit.process.processor import ATTMatrixHitProcessor
from hit.process.processor import ATTPlainHitProcessor

plainProcessor = ATTPlainHitProcessor()
matProcessor = ATTMatrixHitProcessor()
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> Parse a Hit with Plain Processor </span>
plainHit = plainProcessor.parse_hit("hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}")
print plainHit
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> Compute diffs: </span>
plainDiffs = plainProcessor.hit_diffs(plainHit["sensor_timings"])
print plainDiffs
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> Parse a Hit with Matrix Processor </span>
matHit = matProcessor.parse_hit("hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}")
print matHit
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> Compute diffs: </span>
matDiffs = matProcessor.hit_diffs(matHit["sensor_timings"])
print matDiffs
matDiffs
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
Tensor multiplication with transpose in numpy and einsum
w = np.arange(6).reshape(2,3).astype(np.float32)
x = np.ones((1,3), dtype=np.float32)
print("w:\n", w)
print("x:\n", x)

y = np.matmul(w, np.transpose(x))
print("y:\n", y)
y = einsum('ij,kj->ik', w, x)
print("y:\n", y)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
Properties of square matrices in numpy and einsum
We demonstrate the diagonal.
w = np.arange(9).reshape(3,3).astype(np.float32)
d = np.diag(w)
print("w:\n", w)
print("d:\n", d)
d = einsum('ii->i', w)
print("d:\n", d)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
Trace.
t = np.trace(w)
print("t:\n", t)
t = einsum('ii->', w)
print("t:\n", t)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
Sum along an axis.
s = np.sum(w, axis=0)
print("s:\n", s)
s = einsum('ij->j', w)
print("s:\n", s)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
Let us demonstrate tensor transpose. We can also use w.T to transpose w in numpy.
t = np.transpose(w)
print("t:\n", t)
t = einsum("ij->ji", w)
print("t:\n", t)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
Dot, inner and outer products in numpy and einsum.
a = np.ones((3,), dtype=np.float32)
b = np.ones((3,), dtype=np.float32) * 2
print("a:\n", a)
print("b:\n", b)

d = np.dot(a,b)
print("d:\n", d)
d = einsum("i,i->", a, b)
print("d:\n", d)

i = np.inner(a, b)
print("i:\n", i)
i = einsum("i,i->", a, b)
print("i:\n", i)

o = np.outer(a,b)
print("o:\n", o)
o = einsum("i,j-...
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
Inheritance
Inheritance is an OOP practice where a certain class (called the subclass or child class) inherits the properties, namely the data and behaviour, of another class (called the superclass or parent class). Let us see through an example.
# BITSian class
class BITSian():
    def __init__(self, name, id_no, hostel):
        self.name = name
        self.id_no = id_no
        self.hostel = hostel

    def get_name(self):
        return self.name

    def get_id(self):
        return self.id_no

    def get_hostel(self):
        return self.hos...
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
While writing code you must always make sure that you keep it as concise as possible and avoid any sort of repetition. Now, we can clearly see the commonalities between the BITSian and IITian classes. It would be natural to assume that every college student, whether from BITS or IIT or pretty much any other institution in ...
class CollegeStudent():
    def __init__(self, name, id_no):
        self.name = name
        self.id_no = id_no

    def get_name(self):
        return self.name

    def get_id(self):
        return self.id_no

# BITSian class
class BITSian(CollegeStudent):
    def __init__(self, name, id_no, hostel):
        ...
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
So, the class definition is as such: class SubClassName(SuperClassName):
Using super()
The main usage of super() in Python is to refer to parent classes without naming them explicitly. This becomes really useful in multiple inheritance where you won't have to worry about the parent class name.
class Student():
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return self.name

class CollegeStudent(Student):
    def __init__(self, name, id_no):
        super().__init__(name)
        self.id_no = id_no

    def get_id(self):
        return self.id_no

# BIT...
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
You may come across the following constructor call for a superclass on the net: super(self.__class__, self).__init__(). Please do not do this. It can lead to infinite recursion. Go through this link for more clarification: Understanding Python Super with init methods.
Method Overriding
This is a phenomenon where a sub...
class Student():
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return "Student : " + self.name

class CollegeStudent(Student):
    def __init__(self, name, id_no):
        super().__init__(name)
        self.id_no = id_no

    def get_id(self):
        return self.i...
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
In my experience it's more convenient to build the model with a log-softmax output using nn.LogSoftmax or F.log_softmax (documentation). Then you can get the actual probabilities by taking the exponential torch.exp(output). With a log-softmax output, you want to use the negative log likelihood loss, nn.NLLLoss (document...
## Solution

# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

# Define the loss
criterion = nn.NLLLoss()
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, autograd, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operation...
x = torch.randn(2,2, requires_grad=True)
print(x)

y = x**2
print(y)
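Continuing that example, a short sketch of actually pulling gradients out of autograd (backward needs a scalar, so we reduce y first):

```python
z = y.mean()
z.backward()
# d(mean(x**2))/dx = 2*x / 4 = x/2
print(x.grad)
print(x / 2)  # should match x.grad
```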
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients, we can use them to update the weights.
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
...
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's optim package. For example, we can use stochastic gradient descent with optim.SGD. You can see how to define an optimizer below.
from torch import optim

# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
- Make a forward pass through the network
- Use the network output to calculate the loss
- Perform a backward pass through ...
print('Initial weights - ', model[0].weight)

images, labels = next(iter(trainloader))
images.resize_(64, 784)

# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()

# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an epoch. So here we're going to loop through trainloader to get our training batches. For each batch, we'll do a training pass where we calculate the loss, ...
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

epoch...
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
With the network trained, we can check out its predictions.
%matplotlib inline
import helper

images, labels = next(iter(trainloader))

img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
    logps = model(img)

# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_cl...
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Build LSI Model
model_tfidf = models.TfidfModel(corpus)
corpus_tfidf = model_tfidf[corpus]
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Parameters of LsiModel:
- num_topics=200: the number of dimensions to keep after the SVD decomposition
- id2word: the dictionary of the corpus, used to map ids back to words
- chunksize=20000: how many documents are processed in memory at once; larger values use more memory but are faster
- decay=1.0: because the data is processed chunk by chunk, it is split into old and new parts; when a new chunk arrives, decay is the weight given to the old chunks, and a value below 1.0 makes old data gradually "forgotten"
- distributed=False: whether to enable distributed computation, in which each core processes one chunk
- onepass=True: set to False to force the multi-pass stochastic algorithm
- po...
model_lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=200)
corpus_lsi = model_lsi[corpus_tfidf]

# computing V, which can serve as the document vectors
docvec_lsi = gensim.matutils.corpus2dense(corpus_lsi, len(model_lsi.projection.s)).T / model_lsi.projection.s
# the word vectors are simply the column vectors of U
wordsim_lsi = similarities.Matri...
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Build Word2Vec Model
Parameters of Word2Vec:
- sentences: the list of lists of words used for training; not required, since you can build the model first and then feed it data gradually
- size=100: dimensionality of the vectors
- alpha=0.025: the initial learning rate
- window=5: size of the context window
- min_count=5: words that occur fewer than min_count times are simply ignored
- max_vocab_size: limits the vocabulary size; if there are too many words, the rarest are dropped; unlimited by default
- sample=0.001: subsampling, randomly dropping words whose probability is below 0.001, which also effectively widens the context win...
all_text = [doc.split() for doc in documents]

model_w2v = models.Word2Vec(size=200, sg=1)
%timeit model_w2v.build_vocab(all_text)
%timeit model_w2v.train(all_text)

model_w2v.most_similar_cosmul(['deep','learning'])
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Build Doc2Vec Model
Parameters of Doc2Vec:
- documents=None: the documents used for training; can be a list of TaggedDocument or a TaggedDocument generator
- size=300: dimensionality of the vectors
- alpha=0.025: the initial learning rate
- window=8: size of the context window
- min_count=5: words that occur fewer than min_count times are simply ignored
- max_vocab_size=None: limits the vocabulary size; if there are too many words, the rarest are dropped; unlimited by default
- sample=0: subsampling, randomly dropping words whose probability is below sample, ...
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

class PatentDocGenerator(object):
    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        f = codecs.open(self.filename, 'r', 'UTF-8')
        for line in f:
            text, appnum = docs_out(line)
            ...
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
Build Doc2Vec Model from 2013 USPTO Patents
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

class PatentDocGenerator(object):
    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        f = codecs.open(self.filename, 'r', 'UTF-8')
        for line in f:
            text, appnum = docs_out(line)
            ...
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
To start we'll need some basic libraries. First numpy will be needed for basic array manipulation. Since we will be visualising the results we will need matplotlib and seaborn. Finally we will need umap for doing the dimension reduction itself.
!pip install numpy matplotlib seaborn umap-learn
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
To start let's load everything we'll need
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import animation
from IPython.display import HTML
import seaborn as sns
import itertools

sns.set(style='white', rc={'figure.figsize':(14, 12), 'animation.html': 'html5'})

# Igno...
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
To try this out we'll need a reasonably small dataset (so embedding runs don't take too long, since we'll be doing a lot of them). For ease of reproducibility for everyone else I'll use the digits dataset from sklearn. If you want to try other datasets just drop them in here -- COIL20 might be interesting, or you might...
from sklearn.datasets import load_digits

digits = load_digits()
data = digits.data
data
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
We need to move the points in between the embeddings given by different parameter values. There are potentially fancy ways to do this (something using rotation and reflection to get an initial alignment might be interesting), but we'll use straightforward linear interpolation between the two embeddings. To do this we'll...
def tween(e1, e2, n_frames=20):
    for i in range(5):
        yield e1
    for i in range(n_frames):
        alpha = i / float(n_frames - 1)
        yield (1 - alpha) * e1 + alpha * e2
    for i in range(5):
        yield e2
    return
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
Now that we can fill in intermediate frames we just need to generate all the embeddings. We'll create a function that can take an argument name and a set of parameter values and then generate all the embeddings, including the in-between frames.
def generate_frame_data(data, arg_name='n_neighbors', arg_list=[]):
    result = []
    es = []
    for arg in arg_list:
        kwargs = {arg_name: arg}
        if len(es) > 0:
            es.append(UMAP(init=es[-1], negative_sample_rate=3, **kwargs).fit_transform(data))
        else:
            es.append(UMAP(negativ...
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
Next we just need to create a function to actually generate the animation given a list of embeddings (one for each frame). This is really just a matter of working through the details of how matplotlib generates animations -- I would refer you again to Jake's tutorial if you are interested in the detailed mechanics of t...
def create_animation(frame_data, arg_name='n_neighbors', arg_list=[]):
    fig, ax = plt.subplots()
    all_data = np.vstack(frame_data)
    frame_bounds = (all_data[:, 0].min() * 1.1,
                    all_data[:, 0].max() * 1.1,
                    all_data[:, 1].min() * 1.1,
                    all_data[:, 1].ma...
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause