| markdown (string, lengths 0–37k) | code (string, lengths 1–33.3k) | path (string, lengths 8–215) | repo_name (string, lengths 6–77) | license (string, 15 classes) |
|---|---|---|---|---|
Make predictions for a genetic sequence | model = Enformer(model_path)
fasta_extractor = FastaStringExtractor(fasta_file)
# @title Make predictions for a genomic example interval
target_interval = kipoiseq.Interval('chr11', 35_082_742, 35_197_430) # @param
sequence_one_hot = one_hot_encode(fasta_extractor.extract(target_interval.resize(SEQUENCE_LENGTH)))
... | enformer/enformer-usage.ipynb | deepmind/deepmind-research | apache-2.0 |
Contribution scores example | # @title Compute contribution scores
target_interval = kipoiseq.Interval('chr12', 54_223_589, 54_338_277) # @param
sequence_one_hot = one_hot_encode(fasta_extractor.extract(target_interval.resize(SEQUENCE_LENGTH)))
predictions = model.predict_on_batch(sequence_one_hot[np.newaxis])['human'][0]
target_mask = np.zeros_... | enformer/enformer-usage.ipynb | deepmind/deepmind-research | apache-2.0 |
Variant scoring example | # @title Score the variant
variant = kipoiseq.Variant('chr16', 57025062, 'C', 'T', id='rs11644125') # @param
# Center the interval at the variant
interval = kipoiseq.Interval(variant.chrom, variant.start, variant.start).resize(SEQUENCE_LENGTH)
seq_extractor = kipoiseq.extractors.VariantSeqExtractor(reference_sequence... | enformer/enformer-usage.ipynb | deepmind/deepmind-research | apache-2.0 |
Score variants in a VCF file
Report top 20 PCs | enformer_score_variants = EnformerScoreVariantsPCANormalized(model_path, transform_path, num_top_features=20)
# Score the first 5 variants from ClinVar
# Lower-dimensional scores (20 PCs)
it = variant_centered_sequences(clinvar_vcf, sequence_length=SEQUENCE_LENGTH,
gzipped=True, chr_pre... | enformer/enformer-usage.ipynb | deepmind/deepmind-research | apache-2.0 |
Report all 5,313 features (z-score normalized) | enformer_score_variants_all = EnformerScoreVariantsNormalized(model_path, transform_path)
# Score the first 5 variants from ClinVar
# All Scores
it = variant_centered_sequences(clinvar_vcf, sequence_length=SEQUENCE_LENGTH,
gzipped=True, chr_prefix='chr')
example_list = []
for i, example... | enformer/enformer-usage.ipynb | deepmind/deepmind-research | apache-2.0 |
https://vincentarelbundock.github.io/Rdatasets/doc/cluster/plantTraits.html
Usage
data(plantTraits)
Format
A data frame with 136 observations on the following 31 variables.
pdias
Diaspore mass (mg)
longindex
Seed bank longevity
durflow
Flowering duration
height
Plant height, an ordered factor with levels 1 < 2 ... | clusdf = clusdf.drop("Unnamed: 0", axis=1)
clusdf.head()
clusdf.info()
#missing values
clusdf.apply(lambda x: sum(x.isnull().values), axis = 0)
clusdf.head(20)
clusdf=clusdf.fillna(clusdf.mean()) | Clustering.ipynb | poethacker/hello | apache-2.0 |
To measure the quality of clustering results, there are two kinds of validity indices: external indices and internal indices.
An external index is a measure of agreement between two partitions where the first partition is the a priori known clustering structure, and the second results from the clustering procedure (Dud... | from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
clusdf_scale = scale(clusdf)
n_samples, n_features = clusdf_scale.shape
n_samples, n_features
reduced_data = PCA(n_components=2).fit_transform(clusdf_scale)
#assuming height to be Y variable to be predicted
#n_digits = len(np.unique(clusd... | Clustering.ipynb | poethacker/hello | apache-2.0 |
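The external-index idea described above can be made concrete with scikit-learn's adjusted Rand index, which scores the agreement between a known partition and a clustering result. A minimal sketch; the two label arrays are illustrative placeholders, not variables from the notebook above:

```python
# Hedged sketch: external validity index (adjusted Rand index) comparing
# an a-priori partition with a clustering result.
from sklearn.metrics import adjusted_rand_score

labels_true = [0, 0, 1, 1, 2, 2]   # hypothetical ground-truth partition
labels_pred = [0, 0, 1, 2, 2, 2]   # hypothetical clustering assignment
print(adjusted_rand_score(labels_true, labels_pred))  # 1.0 would mean perfect agreement
```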
Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred.
Drawbacks
Contrary to inertia, MI-based measures require the knowledge of the ground truth classes, which is almost never available in practice or requires manual assignment by hum... | clustering = AgglomerativeClustering(n_clusters=4).fit(reduced_data)
clustering
clustering.labels_
np.unique(clustering.labels_, return_counts=True)
from scipy.cluster.hierarchy import dendrogram, linkage
Z = linkage(reduced_data)
dendrogram(Z)
#dn1 = hierarchy.dendrogram(Z, ax=axes[0], above_threshold_color='... | Clustering.ipynb | poethacker/hello | apache-2.0 |
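Since ground-truth labels are almost never available (as the note above points out), an internal index such as the silhouette coefficient can score the clustering from the data alone. A minimal sketch, assuming the `reduced_data` and `clustering` objects from the cell above:

```python
# Hedged sketch: internal validity index that needs no ground truth.
from sklearn.metrics import silhouette_score

# Ranges from -1 (poor) to +1 (dense, well-separated clusters); higher is better.
print(silhouette_score(reduced_data, clustering.labels_))
```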
DBSCAN
The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to the DBSCAN is the concept of core samples, whic... | db = DBSCAN().fit(reduced_data)
db
db.labels_
clusdf.shape
reduced_data.shape
reduced_data[:10,:2]
for i in range(0, reduced_data.shape[0]):
if db.labels_[i] == 0:
c1 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='r',marker='+')
elif db.labels_[i] == 1:
c2 = plt.scatter(reduced_data[... | Clustering.ipynb | poethacker/hello | apache-2.0 |
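The core-sample behaviour of DBSCAN described above is controlled by the `eps` and `min_samples` parameters, which the cell leaves at their defaults. A minimal sketch of setting them explicitly and counting noise points (labelled -1), assuming the same `reduced_data`:

```python
# Hedged sketch: explicit DBSCAN parameters; points labelled -1 are treated as noise.
import numpy as np
from sklearn.cluster import DBSCAN

db = DBSCAN(eps=0.5, min_samples=5).fit(reduced_data)
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
n_noise = int(np.sum(db.labels_ == -1))
print(f"clusters: {n_clusters}, noise points: {n_noise}")
```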
Gaussian mixture models
a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture di... | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
clusdf.head()
reduced_data
# Plot the data with K Means Labels
from sklearn.cluster import KMeans
kmeans = KMeans(4, random_state=0)
labels = kmeans.fit(reduced_data).predict(reduced_data)
plt.scatter(reduced_data[... | Clustering.ipynb | poethacker/hello | apache-2.0 |
mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data |
%matplotlib inline
n_components = np.arange(1, 21)
models = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon)
for n in n_components]
plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC')
plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC')
plt.legend(loc='best')
plt.xl... | Clustering.ipynb | poethacker/hello | apache-2.0 |
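The `GMM` class used above comes from an older scikit-learn release; in current versions the same AIC/BIC sweep is written with `GaussianMixture`. A minimal sketch, assuming `Xmoon` is the same 2-D array used above:

```python
# Hedged sketch: AIC/BIC model selection with the modern scikit-learn API.
import numpy as np
from sklearn.mixture import GaussianMixture

n_components = np.arange(1, 21)
models = [GaussianMixture(n, covariance_type='full', random_state=0).fit(Xmoon)
          for n in n_components]
bics = [m.bic(Xmoon) for m in models]
print("best n_components by BIC:", n_components[int(np.argmin(bics))])
```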
The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. Here it is 8.
BIRCH
The Birch (Balanced Iterative Reducing and Clustering using Hierarchies ) builds a tree called the Characteristic Feature Tree (CFT) for the given data. The data is essentially... | from sklearn.cluster import Birch
X = reduced_data
brc = Birch(branching_factor=50, n_clusters=None, threshold=0.5,compute_labels=True)
brc.fit(X)
brc.predict(X)
labels = brc.predict(X)
plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis');
plt.show() | Clustering.ipynb | poethacker/hello | apache-2.0 |
# Mini Batch K-Means
The MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically re... | from sklearn.cluster import MiniBatchKMeans
import numpy as np
X = reduced_data
# manually fit on batches
kmeans = MiniBatchKMeans(n_clusters=2,random_state=0,batch_size=6)
kmeans = kmeans.partial_fit(X[0:6,:])
kmeans = kmeans.partial_fit(X[6:12,:])
kmeans.cluster_centers_
kmeans.predict(X)
# fit on the whole data
... | Clustering.ipynb | poethacker/hello | apache-2.0 |
Mean Shift
MeanShift clustering aims to discover blobs in a smooth density of samples. It is a centroid based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form... | print(__doc__)
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets.samples_generator import make_blobs
# #############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X = reduced_data
# #######... | Clustering.ipynb | poethacker/hello | apache-2.0 |
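A minimal sketch of how MeanShift is typically run with a bandwidth estimated from the data, assuming the same `X = reduced_data` as above:

```python
# Hedged sketch: MeanShift with a bandwidth estimated from the data itself.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)
print("estimated number of clusters:", len(np.unique(ms.labels_)))
```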
knowledge of the ground truth class assignments labels_true and
our clustering algorithm assignments of the same samples labels_pred
https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation
- adjusted Rand index is a function that measures the similarity of the two assignments
the Mut... | from sklearn import metrics
from sklearn.metrics import pairwise_distances
from sklearn import datasets
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
import numpy as np
from sklearn.cluster import KMeans
kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
labels = kmeans_model.labels_
la... | Clustering.ipynb | poethacker/hello | apache-2.0 |
Question: I want to know how similar 2 styles are. I really like Apricot Blondes, and I want to see what other styles Apricot would go in. Perhaps it would be good in a German Pils.
How to get there: The dataset shows the percentage of votes that said a style-addition combo would likely taste good. So, we can compare th... | import math
# Square the difference of each row, and then return the mean of the column.
# This is the average difference between the two.
# It will be higher if they are different, and lower if they are similar
def similarity(styleA, styleB):
diff = np.square(wtb[styleA] - wtb[styleB])
return diff.mean()
res... | notebooks/Style Similarity.ipynb | jamesnw/wtb-data | mit |
Top 10 most similar styles | df.sort_values("similarity").head(10) | notebooks/Style Similarity.ipynb | jamesnw/wtb-data | mit |
10 Least Similar styles | df.sort_values("similarity", ascending=False).head(10) | notebooks/Style Similarity.ipynb | jamesnw/wtb-data | mit |
Similarity of a specific combo | def comboSimilarity(styleA, styleB):
# styleA needs to be before styleB alphabetically
if styleA > styleB:
addition_temp = styleA
styleA = styleB
styleB = addition_temp
return df.loc[df['styleA'] == styleA].loc[df['styleB'] == styleB]
comboSimilarity('Blonde Ale', 'German Pils') | notebooks/Style Similarity.ipynb | jamesnw/wtb-data | mit |
We can see that Blonde Ales and German Pils are right between the mean and 50th percentile, so it's not a bad idea, but it's not a good idea either.
We can also take a look at this visually to confirm. | %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
n, bins, patches = plt.hist(df['similarity'], bins=50)
similarity = float(comboSimilarity('Blonde Ale', 'German Pils')['similarity'])
# Find the histogram bin that holds the similarity between the two
target = np.argmax(bins>similarity)
patches[ta... | notebooks/Style Similarity.ipynb | jamesnw/wtb-data | mit |
Is it working? Let's see!
TODO 1.a: Run the decode_img function and plot it to see a happy looking daisy. | img = tf.io.read_file(
"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg"
)
# Uncomment to see the image string.
# print(img)
img = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])
plt.imshow(img.numpy()); | notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Note: It may take 4-5 minutes to see the results of different batches.
MobileNetV2
These flower photos are much larger than the handwriting recognition images in MNIST. They have about 10 times as many pixels per axis, and there are three color channels, making the information here over 200 times larger!
How do our current te... | eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv"
nclasses = len(CLASS_NAMES)
hidden_layer_1_neurons = 400
hidden_layer_2_neurons = 100
dropout_rate = 0.25
num_filters_1 = 64
kernel_size_1 = 3
pooling_size_1 = 2
num_filters_2 = 32
kernel_size_2 = 3
pooling_size_2 = 2
layers = [
Conv2D(
num_fi... | notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
If your model is like mine, it learns a little bit, slightly better than random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we nee... | module_selection = "mobilenet_v2_100_224"
module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(
module_selection
)
transfer_model = tf.keras.Sequential(
[
hub.KerasLayer(module_handle, trainable=False),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(
... | notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
We need to define a model and a cost function (a sketch of the training loop follows the next code cell) | # Perceptron model (or Linear regression)
Y_ = X*W + B
def distance(y, y_):
return tf.abs(y-y_)
# cost = distance(Y_, tf.sin(X))
cost = tf.reduce_mean(distance(Y_, Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost) | Training a network with TensorFlow/.ipynb_checkpoints/Sine wave predictor-checkpoint.ipynb | aliasvishnu/TensorFlow-Creative-Applications | gpl-3.0 |
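The cell above only builds the graph; under the TensorFlow 1.x API the training op still has to be executed inside a session. A minimal sketch; the step count is an illustrative assumption, and if `X`/`Y` are placeholders a `feed_dict` would be needed:

```python
# Hedged sketch: graph-mode training loop (TensorFlow 1.x style).
import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):                 # number of steps is an assumption
        _, c = sess.run([optimizer, cost])   # add feed_dict={...} if X/Y are placeholders
        if step % 100 == 0:
            print("step", step, "cost", c)
```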
We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking: high or low).
We will first make a two-level factor for ADHD and add it as a grouping variable, using the cut() function in pandas (for a true median split, see the qcut() sketch below): | df["adhdF"] = pd.cut(df["adhd"],bins=2,labels=["Low","High"]) | public/tutorials/python/3_descriptives/lesson.ipynb | monicathieu/cu-psych-r-tutorial | mit |
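One caveat on the split above: `pd.cut` with `bins=2` divides the ADHD range into two equal-width halves, which is not the same as a median split. If equal-sized groups cut at the 50th percentile are wanted, `pd.qcut` does that. A minimal sketch, assuming the same `df` (the new column name is illustrative):

```python
# Hedged sketch: a true median split (equal-sized groups) with qcut.
import pandas as pd

df["adhdF_median"] = pd.qcut(df["adhd"], q=2, labels=["Low", "High"])
print(df["adhdF_median"].value_counts())
```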
Generic, untyped memoization
It is surprisingly short! | def memo(f):
memoire = {} # dictionnaire vide, {} ou dict()
def memo_f(n): # nouvelle fonction
if n not in memoire: # verification
memoire[n] = f(n) # stockage
return memoire[n] # lecture
return memo_f # ==> f memoisée ! | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Tests | memo_f1 = memo(f1)
print("3 secondes...")
print(memo_f1(10)) # 13, 3 secondes après
print("0 secondes !")
print(memo_f1(10)) # instantanné !
# différent de ces deux lignes !
print("3 secondes...")
print(memo(f1)(10))
print("3 secondes...")
print(memo(f1)(10)) # 3 secondes aussi !
%timeit memo_f1(10) # instantan... | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
And: | memo_f2 = memo(f2)
print("4 secondes...")
print(memo_f2(10)) # 100, 4 secondes après
print("0 secondes !")
print(memo_f2(10)) # instantanné !
%timeit memo_f2(10) # instantanné ! | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Generic, typed memoization
Typing the memoization is not much more complicated. | def memo_avec_type(f):
memoire = {} # dictionnaire vide, {} ou dict()
def memo_f_avec_type(n):
if (type(n), n) not in memoire:
memoire[(type(n), n)] = f(n)
return memoire[(type(n), n)]
return memo_f_avec_type | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
As an advantage, we get a more consistent result "in terms of reproducibility of the results", for example: | def fonction_sur_entiers_ou_flottants(n):
if isinstance(n, int):
return 'Int'
elif isinstance(n, float):
return 'Float'
else:
return '?'
test0 = fonction_sur_entiers_ou_flottants
print(test0(1))
print(test0(1.0)) # résultat correct !
print(test0("1"))
test1 = memo(fonction_sur_ent... | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Bonus: we can use Python's decorator syntax | def fibo(n):
if n <= 1: return 1
else: return fibo(n-1) + fibo(n-2)
print("Test de fibo() non mémoisée :")
for n in range(10):
print("F_{} = {}".format(n, fibo(n))) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
This recursive function is terribly slow! | %timeit fibo(35)
# version plus rapide !
@memo
def fibo2(n):
if n <= 1: return 1
else: return fibo2(n-1) + fibo2(n-2)
print("Test de fibo() mémoisée (plus rapide) :")
for n in range(10):
print("F_{} = {}".format(n, fibo2(n)))
%timeit fibo2(35) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Another example, where the time saving is less significant. | def factorielle(n):
if n <= 0: return 0
elif n == 1: return 1
else: return n * factorielle(n-1)
print("Test de factorielle() non mémoisée :")
for n in range(10):
print("{}! = {}".format(n, factorielle(n)))
%timeit factorielle(30)
@memo
def factorielle2(n):
if n <= 0: return 0
elif n == 1: ret... | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Conclusion
In Python it is easy, with generic dictionaries and a convenient decorator syntax.
Bonus: this decorator is in the standard library, in the functools module! | from functools import lru_cache # lru = least recently used
@lru_cache(maxsize=None)
def fibo3(n):
if n <= 1: return 1
else: return fibo3(n-1) + fibo3(n-2)
print("Test de fibo() mémoisée avec functools.lru_cache (plus rapide) :")
for n in range(10):
print("F_{} = {}".format(n, fibo3(n)))
%timeit fibo... | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
(We obtain almost the same performance as our manual implementation)
In OCaml
I go through exactly the same examples.
I am experimenting with using two different Jupyter kernels to display code examples written in two languages in the same notebook... It is not very clean, but it works... | let print = Format.printf;;
let sprintf = Format.sprintf;;
let time = Unix.time;;
let sleep n = Sys.command (sprintf "sleep %i" n);;
let timeit (repet : int) (f : 'a -> 'a) (x : 'a) () : float =
let time0 = time () in
for _ = 1 to repet do
ignore (f x);
done;
let time1 = time () in
(time1 -... | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Examples of functions to memoize | let f1 n =
ignore (sleep 3);
n + 2
;;
let _ = f1 10;; (* 13, après 3 secondes *)
timeit 3 f1 10 ();; (* 3 secondes *) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
And another similar example: | let f2 n =
ignore (sleep 4);
n * n
;;
let _ = f2 10;; (* 100, après 3 secondes *)
timeit 3 f2 10 ();; (* 4 secondes *) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Memoization for functions of one argument
We use the Hashtbl module from the standard library. | let memo f =
let memoire = Hashtbl.create 128 in (* taille 128 par defaut *)
let memo_f n =
if Hashtbl.mem memoire n then (* lecture *)
Hashtbl.find memoire n
else begin
let res = f n in (* calcul *)
Hashtbl.add memoire n res; (* stockage *)
... | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Tests
Two examples: | let memo_f1 = memo f1 ;;
let _ = memo_f1 10 ;; (* 3 secondes *)
let _ = memo_f1 10 ;; (* instantanné *)
timeit 100 memo_f1 20 ();; (* 0.03 secondes *)
let memo_f2 = memo f2 ;;
let _ = memo_f2 10 ;; (* 4 secondes *)
let _ = memo_f2 10 ;; (* instantanné *)
timeit 100 memo_f2 20 ();; (* 0.04 secondes *) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
My timeit function performs a parametrized number of repetitions on non-random inputs, so the observed average time depends on the number of repetitions! | timeit 10000 memo_f2 50 ();; (* 0.04 secondes *) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Example with the Fibonacci sequence | let rec fibo = function
| 0 | 1 -> 1
| n -> (fibo (n - 1)) + (fibo (n - 2))
;;
fibo 40;;
timeit 10 fibo 40 ();; (* 4.2 secondes ! *) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
And with automatic memoization: | let memo_fibo = memo fibo;;
memo_fibo 40;;
timeit 10 memo_fibo 41 ();; (* 0.7 secondes ! *) | agreg/Mémoisation_en_Python_et_OCaml.ipynb | Naereen/notebooks | mit |
Given the variables:
planet = "Earth"
diameter = 12742
Use .format() to print the following string:
The diameter of Earth is 12742 kilometers. | planet = "Earth"
diameter = 12742
'The diameter of {} is {} kilometers.'.format(planet,diameter) | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky | d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3] | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
What is the main difference between a tuple and a list? | # Tuple is immutable, list items can be changed | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Create a function that grabs the email website domain from a string in the form:
user@domain.com
So for example, passing "user@domain.com" would return: domain.com | def domainGet(inp):
return inp.split('@')[1]
domainGet('user@domain.com') | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization. | def findDog(inp):
return 'dog' in inp.lower().split()
findDog('Is there a dog here?') | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases. | def countDog(inp):
dog = 0
for x in inp.lower().split():
if x == 'dog':
dog += 1
return dog
countDog('This dog runs faster than the other dog dude!') | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:
seq = ['soup','dog','salad','cat','great']
should be filtered down to:
['soup','salad'] | seq = ['soup','dog','salad','cat','great']
list(filter(lambda item:item[0]=='s',seq)) | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If s... | def caught_speeding(speed, is_birthday):
if is_birthday:
speed = speed - 5
if speed > 80:
return 'Big Ticket'
elif speed > 60:
return 'Small Ticket'
else:
return 'No Ticket'
caught_speeding(81,True)
caught_speeding(81,False) | Python-Crash-Course/Python Crash Course Exercises .ipynb | iannesbitt/ml_bootcamp | mit |
Simple 3D Visualizations of a neuron
Create 3D visualizations | # Specify param.image size to work with our model's input, must be a multiple of 400.
param_f = lambda: param.image(120, h=120, channels=3)
# std_transforms = [
# pad(2, mode="constant", constant_value=.5),
# jitter(2)]
# transforms = std_transforms + [crop_or_pad_to(*model.image_shape[:2])]
transforms = []
# ... | lucid_work/notebooks/feature_visualization.ipynb | davidparks21/qso_lya_detection_pipeline | mit |
Simple 1D visualizations | # Specify param.image size
param_f = lambda: param.image(400, h=1, channels=1)
transforms = []
# Specify the objective
# neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n)
# obj = neuron(0)
channel = lambda n: objectives.channel(LAYERS['pool1'][0], n)
obj = channel(0)
# Specify the number of optimzation st... | lucid_work/notebooks/feature_visualization.ipynb | davidparks21/qso_lya_detection_pipeline | mit |
Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes. | plt.plot(t,W)
plt.xlabel("$t$")
plt.ylabel("$W(t)$")
assert True # this is for grading | assignments/assignment03/NumpyEx03.ipynb | aschaffn/phys202-2015-work | mit |
Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences. | dW = np.diff(W)
dW.mean(), dW.std()
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float) | assignments/assignment03/NumpyEx03.ipynb | aschaffn/phys202-2015-work | mit |
Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function. | def geo_brownian(t, W, X0, mu, sigma):
"""Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
exponent = (mu - 0.5 * sigma**2) * t + sigma * W  # drift-corrected exponent matching the formula above
return X0 * np.exp(exponent)
assert True # leave this for grading | assignments/assignment03/NumpyEx03.ipynb | aschaffn/phys202-2015-work | mit |
Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. | plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3))
plt.xlabel("$t$")
plt.ylabel("$X(t)$")
assert True # leave this for grading | assignments/assignment03/NumpyEx03.ipynb | aschaffn/phys202-2015-work | mit |
<div class="alert alert-info">
**Note** Typecasting
`int(response)` converts the string `response` to an integer. If the user enters anything other than an integer, `ValueError` is raised
</div>
if-else statement
Usage:
python
if condition:
statement_1
statement_2
...
statement_n
e... | response = input("Enter an integer : ")
num = int(response)
if num % 2 == 0:
print("{} is an even number".format(num))
else:
print("{} is an odd number".format(num)) | doc/Langauge/04-Control Structures.ipynb | OpenWeavers/openanalysis | gpl-3.0 |
Single Line if-else
This serves as a replacement for the ternary operator available in C
Usage:
C ternary
c
result = (condition) ? value_true : value_false
Python Single Line if else
python
result = value_true if condition else value_false
Example: | response = input("Enter an integer : ")
num = int(response)
result = "even" if num % 2 == 0 else "odd"
print("{} is {} number".format(num,result)) | doc/Langauge/04-Control Structures.ipynb | OpenWeavers/openanalysis | gpl-3.0 |
if-else ladder
Usage:
python
if condition_1:
statements_1
elif condition_2:
statements_2
elif condition_3:
statements_3
...
...
...
elif condition_n:
statements_n
else:
statements_last
<div class="alert alert-info">
**Note**
`Python` uses `eli... | response = input("Enter an integer (+ve or -ve) : ")
num = int(response)
if num > 0:
print("{} is +ve".format(num))
elif num == 0:
print("Zero")
else:
print("{} is -ve".format(num)) | doc/Langauge/04-Control Structures.ipynb | OpenWeavers/openanalysis | gpl-3.0 |
<div class="alert alert-info">
**Note**: No `switch-case`
There is no `switch-case` structure in Python. It can be realized with an `if-else` ladder or other constructs such as dictionary dispatch (see the sketch after this cell)
</div>
while loop
Usage:
python
while condition:
statement_1
statement_2
...
statement_n
Example: | response = input("Enter an integer : ")
num = int(response)
prev,current = 0,1
i = 0
while i < num:
prev,current = current,prev + current
print('Fib[{}] = {}'.format(i,current),end=',')
i += 1 | doc/Langauge/04-Control Structures.ipynb | OpenWeavers/openanalysis | gpl-3.0 |
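As noted above, Python has no switch-case statement; besides an if-elif ladder, a common substitute is dictionary dispatch. A minimal sketch (the handler functions are illustrative):

```python
# Hedged sketch: emulating switch-case with a dictionary of handlers.
def on_add(a, b): return a + b
def on_sub(a, b): return a - b

handlers = {"add": on_add, "sub": on_sub}

op = "add"
result = handlers.get(op, lambda a, b: None)(2, 3)  # .get() supplies the default branch
print(result)  # 5
```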
<div class="alert alert-info">
**Note**
- Multiple assignments in a single statement can be done
- `Python` doesn't support `++` and `--` operators as in `C`
- There is no `do-while` loop in Python
</div>
for loop
Usage:
python
for object in collection:
do_something_with_object
<div class="alert alert-inf... | for i in range(10):
print(i, end=',')
for i in range(2,10,3):
print(i, end=',')
response = input("Enter an integer : ")
num = int(response)
prev,current = 0,1
for i in range(num):
prev,current = current,prev + current
print('Fib[{}] = {}'.format(i,current),end=',') | doc/Langauge/04-Control Structures.ipynb | OpenWeavers/openanalysis | gpl-3.0 |
SVC Parameter Settings | # default parameters for SVC
# ==========================
default_svc_params = {}
default_svc_params['C'] = 1.0 # penalty
default_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C
# set to 'auto' for unbalanced clas... | svm.scikit/svm_poly_pca.scikit_benchmark.ipynb | grfiv/MNIST | mit |
Learning Curves
see http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html
The score is the model accuracy
The red line shows how well the model fits the data it was trained on:
a high score indicates low bias ... the model does fit the training data
it's not unusual for the red line ... | t0 = time.time()
from sklearn.learning_curve import learning_curve
from sklearn.cross_validation import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and trainin... | svm.scikit/svm_poly_pca.scikit_benchmark.ipynb | grfiv/MNIST | mit |
Custom training and batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" ... | import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {U... | notebooks/official/custom/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, g... | import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE... | notebooks/official/custom/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also sa... | BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP | notebooks/official/custom/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- prediction_format: The format of th... | MIN_NODES = 1
MAX_NODES = 1
# The name of the job
BATCH_PREDICTION_JOB_NAME = "cifar10_batch-" + TIMESTAMP
# Folder in the bucket to write results to
DESTINATION_FOLDER = "batch_prediction_results"
# The Cloud Storage bucket to upload results to
BATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + "/" + DESTINATION_FOLD... | notebooks/official/custom/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Evaluate results
You can then run a quick evaluation on the prediction results:
np.argmax: Convert each list of confidence levels to a label
Compare the predicted labels to the actual labels
Calculate accuracy as correct/total
To improve the accuracy, try training for a higher number of epochs. | y_predicted = [np.argmax(result["prediction"]) for result in results]
correct = sum(y_predicted == np.array(y_test))
accuracy = len(y_predicted)
print(
f"Correct predictions = {correct}, Total predictions = {accuracy}, Accuracy = {correct/accuracy}"
) | notebooks/official/custom/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Job
Model
Cloud Storage Bucket | delete_training_job = True
delete_model = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete the training job
job.delete()
# Delete the model
model.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME | notebooks/official/custom/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Basics
Set up a simple run with a constant linear bed. We will first define the bed:
Glacier bed | # This is the bed rock, linearly decreasing from 3400m altitude to 1400m, in 200 steps
nx = 200
bed_h = np.linspace(3400, 1400, nx)
# At the beginning, there is no glacier so our glacier surface is at the bed altitude
surface_h = bed_h
# Let's set the model grid spacing to 100m (needed later)
map_dx = 100
# plot this
... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Now we have to decide how wide our glacier is, and what is the shape of its bed. For a start, we will use a "u-shaped" bed (see the documentation), with a constant width of 300m: | # The units of widths are "grid points", i.e. 3 grid points = 300 m in our case
widths = np.zeros(nx) + 3.
# Define our bed
init_flowline = VerticalWallFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
The init_flowline variable now contains all geometrical information needed by the model. It can give access to some attributes, which are quite useless for a non-existing glacier: | print('Glacier length:', init_flowline.length_m)
print('Glacier area:', init_flowline.area_km2)
print('Glacier volume:', init_flowline.volume_km3) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Mass balance
Then we will need a mass balance model. In our case this will be a simple linear mass-balance, defined by the equilibrium line altitude and an altitude gradient (in [mm m$^{-1}$]): | # ELA at 3000m a.s.l., gradient 4 mm m-1
mb_model = LinearMassBalanceModel(3000, grad=4) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
The mass-balance model gives you the mass-balance for any altitude you want, in units [m s$^{-1}$]. Let us compute the annual mass-balance along the glacier profile: | annual_mb = mb_model.get_mb(surface_h) * SEC_IN_YEAR
# Plot it
plt.plot(annual_mb, bed_h, color='C2', label='Mass-balance')
plt.xlabel('Annual mass-balance (m yr-1)')
plt.ylabel('Altitude (m)')
plt.legend(loc='best'); | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Model run
Now that we have all the ingredients to run the model, we just have to initialize it: | # The model requires the initial glacier bed, a mass-balance model, and an initial time (the year y0)
model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0.) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
We can now run the model for 150 years and see what the output looks like: | model.run_until(150)
# Plot the initial conditions first:
plt.plot(init_flowline.bed_h, color='k', label='Bedrock')
plt.plot(init_flowline.surface_h, label='Initial glacier')
# The get the modelled flowline (model.fls[-1]) and plot it's new surface
plt.plot(model.fls[-1].surface_h, label='Glacier after {} years'.format... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Let's print out a few infos about our glacier: | print('Year:', model.yr)
print('Glacier length (m):', model.length_m)
print('Glacier area (km2):', model.area_km2)
print('Glacier volume (km3):', model.volume_km3) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Note that the model time is now 150. Running the model with the same input will do nothing: | model.run_until(150)
print('Year:', model.yr)
print('Glacier length (m):', model.length_m) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
If we want to compute longer, we have to set the desired date: | model.run_until(500)
# Plot the initial conditions first:
plt.plot(init_flowline.bed_h, color='k', label='Bedrock')
plt.plot(init_flowline.surface_h, label='Initial glacier')
# The get the modelled flowline (model.fls[-1]) and plot it's new surface
plt.plot(model.fls[-1].surface_h, label='Glacier after {} years'.format... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Note that in order to store some intermediate steps of the evolution of the glacier, it might be useful to make a loop: | # Reinitialize the model
model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0.)
# Year 0 to 600 in 5-year steps
yrs = np.arange(0, 600, 5)
# Array to fill with data
nsteps = len(yrs)
length = np.zeros(nsteps)
vol = np.zeros(nsteps)
# Loop
for i, yr in enumerate(yrs):
model.run_until(yr)
length[i] = mode... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
We can now plot the evolution of the glacier length and volume with time: | f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
ax1.plot(yrs, length);
ax1.set_xlabel('Years')
ax1.set_ylabel('Length (m)');
ax2.plot(yrs, vol);
ax2.set_xlabel('Years')
ax2.set_ylabel('Volume (km3)'); | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
A first experiment
Ok, now we have seen the basics. We will now define a simple experiment, in which we make the glacier wider at the top (in the accumulation area). This is a common situation for valley glaciers. | # We define the widths as before:
widths = np.zeros(nx) + 3.
# But we now make our glacier 600 m wide for the first 15 grid points:
widths[0:15] = 6
# Define our new bed
wider_flowline = VerticalWallFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
We will now run our model with the new initial conditions, and store the output in a new variable for comparison: | # Reinitialize the model with the new input
model = FlowlineModel(wider_flowline, mb_model=mb_model, y0=0.)
# Array to fill with data
nsteps = len(yrs)
length_w = np.zeros(nsteps)
vol_w = np.zeros(nsteps)
# Loop
for i, yr in enumerate(yrs):
model.run_until(yr)
length_w[i] = model.length_m
vol_w[i] = model.v... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Compare the results: | # Plot the initial conditions first:
plt.plot(init_flowline.bed_h, color='k', label='Bedrock')
# Then the final result
plt.plot(simple_glacier_h, label='Simple glacier')
plt.plot(wider_glacier_h, label='Wider glacier')
plt.xlabel('Grid points')
plt.ylabel('Altitude (m)')
plt.legend(loc='best');
f, (ax1, ax2) = plt.sub... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
Ice flow parameters
The ice flow parameters are going to have a strong influence on the behavior of the glacier. The default in OGGM is to set Glen's creep parameter A to the "standard value" defined by Cuffey and Patterson: | # Default in OGGM
print(A) | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
We can change this and see what happens: | # Reinitialize the model with the new parameter
model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A / 10)
# Array to fill with data
nsteps = len(yrs)
length_s1 = np.zeros(nsteps)
vol_s1 = np.zeros(nsteps)
# Loop
for i, yr in enumerate(yrs):
model.run_until(yr)
length_s1[i] = model.length_m
... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
In his seminal paper, Oerlemans also uses a so-called "sliding parameter", representing basal sliding. In OGGM this parameter is set to 0 by default, but it can be modified at will: | # Change sliding to use Oerlemans value:
model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A, fs=5.7e-20)
# Array to fill with data
nsteps = len(yrs)
length_s3 = np.zeros(nsteps)
vol_s3 = np.zeros(nsteps)
# Loop
for i, yr in enumerate(yrs):
model.run_until(yr)
length_s3[i] = model.length_m
... | docs/notebooks/flowline_model.ipynb | jlandmann/oggm | gpl-3.0 |
purpose
upload sketches to S3
build stimulus dictionary and write to database
upload sketches to s3 | upload_dir = './sketch'
import boto
runThis = 0
if runThis:
conn = boto.connect_s3()
b = conn.create_bucket('sketchpad_basic_pilot2_sketches')
all_files = [i for i in os.listdir(upload_dir) if i != '.DS_Store']
for a in all_files:
print a
k = b.new_key(a)
k.set_contents_from_fil... | experiments/recog/preprocess_sketches.ipynb | judithfan/graphcomm | mit |
build stimulus dictionary | ## read in experimental metadata file
path_to_metadata = '../../analysis/sketchpad_basic_pilot2_group_data.csv'
meta = pd.read_csv(path_to_metadata)
## clean up and add filename column
meta2 = meta.drop(['svg','png','Unnamed: 0'],axis=1)
filename = []
games = []
for i,row in meta2.iterrows():
filename.append('game... | experiments/recog/preprocess_sketches.ipynb | judithfan/graphcomm | mit |
upload stim dictionary to mongo (db = 'stimuli', collection='sketchpad_basic_recog') | # set vars
auth = pd.read_csv('auth.txt', header = None) # this auth.txt file contains the password for the sketchloop user
pswd = auth.values[0][0]
user = 'sketchloop'
host = 'rxdhawkins.me' ## cocolab ip address
# have to fix this to be able to analyze from local
import pymongo as pm
conn = pm.MongoClient('mongodb:... | experiments/recog/preprocess_sketches.ipynb | judithfan/graphcomm | mit |
crop 3d objects | import os
from PIL import Image
def RGBA2RGB(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
Simpler, faster version than the solutions above.
Source: http://stackoverflow.com/a/9459208/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- ... | experiments/recog/preprocess_sketches.ipynb | judithfan/graphcomm | mit |
← Back to Index
Audio Representation
In performance, musicians convert sheet music representations into sound which is transmitted through the air as air pressure oscillations. In essence, sound is simply air vibrating (Wikipedia). Sound vibrates through the air as longitudinal waves, i.e. the oscillations are pa... | x, sr = librosa.load('audio/c_strum.wav')
ipd.Audio(x, rate=sr) | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
(If you get an error using librosa.load, you may need to install ffmpeg.)
The change in air pressure at a certain time is graphically represented by a pressure-time plot, or simply waveform.
To plot a waveform, use librosa.display.waveplot: | plt.figure(figsize=(15, 5))
librosa.display.waveplot(x, sr, alpha=0.8) | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
Digital computers can only capture this data at discrete moments in time. The rate at which a computer captures audio data is called the sampling frequency (often abbreviated fs) or sampling rate (often abbreviated sr). For this workshop, we will mostly work with a sampling frequency of 44100 Hz, the sampling rate of C... | ipd.Image("https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/ADSR_parameter.svg/640px-ADSR_parameter.svg.png") | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
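Note that `librosa.load` resamples to 22050 Hz by default; to keep a file's native rate or to work at the 44100 Hz rate mentioned above, the target rate can be passed explicitly. A minimal sketch reusing the same example file:

```python
# Hedged sketch: controlling the sampling rate when loading audio with librosa.
import librosa

x_native, sr_native = librosa.load('audio/c_strum.wav', sr=None)   # keep the file's own rate
x_44k, sr_44k = librosa.load('audio/c_strum.wav', sr=44100)        # resample to 44.1 kHz
print(sr_native, sr_44k, x_44k.shape)
```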
Timbre: Spectral Indicators
Another property used to characterize timbre is the existence of partials and their relative strengths. Partials are the dominant frequencies in a musical tone with the lowest partial being the fundamental frequency.
The partials of a sound are visualized with a spectrogram. A spectrogram sh... | T = 2.0 # seconds
f0 = 1047.0
sr = 22050
t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable
x = 0.1*numpy.sin(2*numpy.pi*f0*t)
ipd.Audio(x, rate=sr) | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
Display the spectrum of the pure tone: | X = scipy.fft(x[:4096])
X_mag = numpy.absolute(X) # spectral magnitude
f = numpy.linspace(0, sr, 4096) # frequency variable
plt.figure(figsize=(14, 5))
plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum
plt.xlabel('Frequency (Hz)') | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
Oboe
Let's listen to an oboe playing a C6: | x, sr = librosa.load('audio/oboe_c6.wav')
ipd.Audio(x, rate=sr)
print(x.shape) | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
Display the spectrum of the oboe: | X = scipy.fft(x[10000:14096])
X_mag = numpy.absolute(X)
plt.figure(figsize=(14, 5))
plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum
plt.xlabel('Frequency (Hz)') | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
Clarinet
Let's listen to a clarinet playing a concert C6: | x, sr = librosa.load('audio/clarinet_c6.wav')
ipd.Audio(x, rate=sr)
print(x.shape)
X = scipy.fft(x[10000:14096])
X_mag = numpy.absolute(X)
plt.figure(figsize=(14, 5))
plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum
plt.xlabel('Frequency (Hz)') | audio_representation.ipynb | stevetjoa/stanford-mir | mit |
Dependence on how many states are added
Here we see whether the distribution of synaptic influences depends on the state of the vector.
First we start with only two | n_dim = 400
nn = Hopfield(n_dim=n_dim, T=T, prng=prng)
list_of_patterns = nn.generate_random_patterns(n_dim)
nn.train(list_of_patterns, normalize=normalize) | notebooks/2016-12-11(Study of connectivity distribution).ipynb | h-mayorquin/hopfield_sequences | mit |