markdown | code | path | repo_name | license
---|---|---|---|---
Computing sums
In statistics one often has to compute the sample mean: given a sample of values $x_k$, $k=1..N$, one must compute
$$\bar x=\frac1N\sum_{k=1}^N x_k.$$
From a mathematical point of view it does not matter how this sum is computed, since the result of the addition is always the same.
However, when comput... | base=10 # parameter; may take any integer value > 1
def exact_sum(K):
    """Exact value of the sum of all elements."""
    return 1.
def samples(K):
    """Sample elements."""
    # create K blocks of base^k identical values
    parts=[np.full((base**k,), float(base)**(-k)/K) for k in range(0, K)]
... | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Let us create a sample of values differing by 6 orders of magnitude and sum its elements. | K=7 # number of terms
x=samples(K) # store the sample in an array
print("Number of elements:", len(x))
print("Smallest and largest values:", np.min(x), np.max(x))
exact_sum_for_x=exact_sum(K) # value of the sum with close-to-machine accuracy
direct_sum_for_x=direct_sum(x) # sum of all elements in order
def rela... | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Now let us try summing the elements in ascending order. | sorted_x=x[np.argsort(x)]
sorted_sum_for_x=direct_sum(sorted_x)
print("Error of ascending-order summation:", relative_error(exact_sum_for_x, sorted_sum_for_x)) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Let us try summing in descending order. | sorted_x=x[np.argsort(x)[::-1]]
sorted_sum_for_x=direct_sum(sorted_x)
print("Error of descending-order summation:", relative_error(exact_sum_for_x, sorted_sum_for_x)) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
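The order dependence is easy to reproduce in isolation. The sketch below uses toy `float32` data (not the notebook's `samples()`) so that the rounding is visible within a few lines:

```python
import numpy as np

# One large value plus many tiny ones, in single precision so rounding shows.
x = np.array([1.0] + [1e-8] * 100_000, dtype=np.float32)

ascending = np.float32(0.0)
for v in np.sort(x):            # tiny values first: they accumulate to ~1e-3
    ascending += v              # before the large term is added

descending = np.float32(0.0)
for v in np.sort(x)[::-1]:      # large value first: each tiny addend is
    descending += v             # entirely rounded away against it

exact = 1.0 + 1e-8 * 100_000    # 1.001 in exact arithmetic
print(abs(ascending - exact), abs(descending - exact))
```

Summing in ascending order keeps addends of comparable magnitude together, which is why it comes out more accurate here.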
Thus the error of the result depends on the order of summation.
How can this effect be explained?
In practice it is preferable to carry out summation not naively but via compensated summation (see Kahan's algorithm). | def Kahan_sum(x):
    s=0.0 # partial sum
    c=0.0 # running sum of the rounding errors
    for i in x:
        y=i-c # initially y equals the next element of the sequence
        t=s+y # the sum s may be large, so the low-order bits of y will be lost
        c=(t-s)-y # (t-s) discards the high-order bits; subtract... | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
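For reference, a complete compensated-summation routine in the spirit of the truncated cell above (a sketch; the notebook's own `Kahan_sum` may differ in details):

```python
def kahan_sum(x):
    s = 0.0  # partial sum
    c = 0.0  # running compensation for lost low-order bits
    for i in x:
        y = i - c        # correct the next term by the previous step's error
        t = s + y        # s may be large, so low-order bits of y are lost here
        c = (t - s) - y  # (t - s) recovers what was actually added; minus y
                         # gives the (negated) rounding error to carry forward
        s = t
    return s

vals = [0.1] * 10
print(sum(vals), kahan_sum(vals))  # the naive sum drifts below 1.0
```

Python's `math.fsum` computes an exactly rounded sum and is a useful reference point when checking such routines.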
Exercises
Explain the difference in errors for the different orders of summation.
Why does Kahan's algorithm achieve significantly better accuracy than sequential summation?
Will we obtain the same error values if we sum a sequence whose terms have mixed signs? Check this on the following sequen... | # sample parameters
mean=1e6 # mean
delta=1e-5 # magnitude of the deviation from the mean
def samples(N_over_two):
    """Generate a sample of 2*N_over_two values with the given mean and standard
    deviation."""
    x=np.full((2*N_over_two,), mean, dtype=np.double)
    x[:N_over_two]+=delta
    x[N_over_two:]... | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
As we can see, summation by the first formula gives the most accurate result, summation by the second formula is less accurate, and the one-pass formula is the least accurate.
Exercises
Explain why the variance-estimation formulas have different errors, even though applying them requires performing the same operations, only in a different ord... | def exp_taylor(x, N=None):
    """N-th partial sum of the Taylor series for the exponential."""
    acc = 1 # k-th partial sum. Start with k=0.
    xk = 1 # Powers x^k.
    inv_fact = 1 # 1/k!.
    for k in range(1, N+1):
        xk = xk*x
        inv_fact /= k
        acc += xk*inv_fact
    return acc
def exp_horner(x, N... | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Clearly, 4 terms are too few to approximate the series well. Let us try taking more. | make_exp_test([exp_taylor, exp_horner], args={"N": 15}, xmin=-0.001, xmax=0.001)
make_exp_test([exp_taylor, exp_horner], args={"N": 15}, xmin=-1, xmax=1)
make_exp_test([exp_taylor, exp_horner], args={"N": 15}, xmin=-10, xmax=10) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
The accuracy of the approximation grows with the number of terms; however, even for moderately large arguments not a single correct digit is obtained in the answer. Let us look at how the error varies with the number of terms. | def cum_exp_taylor(x, N=None):
    """Compute all partial sums of the Taylor series for the exponential, up to and including the N-th."""
    acc = np.empty(N+1, dtype=float)
    acc[0] = 1 # k-th partial sum. Start with k=0.
    xk = 1 # Powers x^k.
    inv_fact = 1 # 1/k!.
    for k in range(1, N+1):
        xk = xk*x
        in... | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, an... | len(reviews)
reviews[0]
labels[0] | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Lesson: Develop a Predictive Theory<a id='lesson_2'></a> | print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as w... | from collections import Counter
import numpy as np | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words. | # Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter() | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these ... | # TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
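One possible way to fill in this TODO, sketched on a hypothetical two-review corpus so it runs standalone (`reviews`, `labels`, and the three counters here stand in for the notebook's):

```python
from collections import Counter

# Hypothetical mini-corpus standing in for the notebook's full data.
reviews = ["great movie great fun", "terrible boring movie"]
labels = ["POSITIVE", "NEGATIVE"]

positive_counts, negative_counts, total_counts = Counter(), Counter(), Counter()

for review, label in zip(reviews, labels):
    for word in review.split(" "):
        if label == "POSITIVE":
            positive_counts[word] += 1
        else:
            negative_counts[word] += 1
        total_counts[word] += 1

print(positive_counts["great"], total_counts["movie"])  # → 2 2
```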
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. | # Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common() | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to ... | # Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
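A sketch of one way to compute the ratios, on hypothetical counts so it runs standalone; the `+1` in the denominator is a common guard against division by zero, not necessarily what the solution video uses:

```python
from collections import Counter

# Hypothetical counts standing in for the counters built earlier.
positive_counts = Counter({"amazing": 300, "the": 5000, "terrible": 20})
negative_counts = Counter({"amazing": 30, "the": 4800, "terrible": 400})
total_counts = positive_counts + negative_counts

pos_neg_ratios = Counter()
for word, count in total_counts.items():
    if count >= 100:  # only consider reasonably common words
        # +1 in the denominator guards against division by zero for words
        # that never occur in negative reviews
        pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)

print(pos_neg_ratios["amazing"] > 1, pos_neg_ratios["terrible"] < 1)  # → True True
```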
Examine the ratios you've calculated for a few words: | print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to... | # TODO: Convert ratios to logs | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
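A sketch of the log conversion on hypothetical raw ratios (the notebook's own solution may treat ratios below 1 slightly differently):

```python
import numpy as np
from collections import Counter

# Hypothetical raw ratios; in the notebook they come from the previous step.
pos_neg_ratios = Counter({"amazing": 4.2, "the": 1.06, "terrible": 0.05})

# Taking the log re-centers the scale: neutral words land near 0, and a word
# that is N times more positive gets the same magnitude (opposite sign) as a
# word that is N times more negative.
for word, ratio in pos_neg_ratios.items():
    pos_neg_ratios[word] = np.log(ratio)

print(round(pos_neg_ratios["the"], 2))  # close to zero
```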
Examine the new ratios you've calculated for the same words from before: | print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terri... | # words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid con... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything. | from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png') | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary. | # TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = None | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
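A sketch of one way to build the set, shown on a hypothetical two-review corpus so it runs standalone:

```python
# Stand-in corpus (hypothetical); the notebook builds vocab from all reviews.
reviews = ["great movie", "terrible movie"]

vocab = set(word for review in reviews for word in review.split(" "))
print(len(vocab))  # → 3
```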
Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 | vocab_size = len(vocab)
print(vocab_size) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. | from IPython.display import Image
Image(filename='sentiment_network_2.png') | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. | # TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = None | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
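A minimal sketch of that cell, assuming the 74074-word vocabulary size reported above:

```python
import numpy as np

vocab_size = 74074  # the vocabulary size the notebook expects
layer_0 = np.zeros((1, vocab_size))  # note the 2-D shape: 1 row, vocab_size columns
print(layer_0.shape)  # → (1, 74074)
```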
Run the following cell. It should display (1, 74074) | layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png') | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. | # Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0. | def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0. | update_input_layer(reviews[0])
layer_0 | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Complete the implementation of get_target_for_label. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively. | def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
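A minimal sketch of one possible implementation:

```python
def get_target_for_label(label):
    """One possible solution: map "POSITIVE" to 1 and anything else to 0."""
    return 1 if label == "POSITIVE" else 0

print(get_target_for_label("POSITIVE"), get_target_for_label("NEGATIVE"))  # → 1 0
```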
Run the following two cells. They should print out 'POSITIVE' and 1, respectively. | labels[0]
get_target_for_label(labels[0]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following two cells. They should print out 'NEGATIVE' and 0, respectively. | labels[1]
get_target_for_label(labels[1]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- ... | import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for t... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. | mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understa... | from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how... | # TODO: -Copy the SentimentNetwork class from Projet 3 lesson
# -Modify it to reduce noise, like in the video | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included i... | Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentimen... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous p... | # TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to recreate the network and train it once again. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a> | Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from b... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
... | # TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to train your network with a small polarity cutoff. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
And run the following cell to test its performance. It should be | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to train your network with a much larger polarity cutoff. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
And run the following cell to test its performance. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a> | mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.... | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
The plate lies in the $xy$-plane with the surface at $z = 0$. The atoms lie in the $xz$-plane with $z>0$.
We can set the angle between the interatomic axis and the z-axis theta and the center of mass distance from the surface distance_surface. distance_atom defines the interatomic distances for which the pair potential... | theta = np.pi/2 # rad
distance_atoms = 10 # µm
distance_surface = np.linspace(distance_atoms*np.abs(np.cos(theta))/2, 2*distance_atoms,30) # µm | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
Next we define the state that we are interested in using pairinteraction's StateOne class. As shown in Figures 4 and 5 of Phys. Rev. A 96, 062509 (2017) we expect changes of about 50% for the $C_6$ coefficient of the $|69s_{1/2},m_j=1/2;72s_{1/2},m_j=1/2\rangle$ pair state of Rubidium, so this provides a good example.... | state_one1 = pi.StateOne("Rb", 69, 0, 0.5, 0.5)
state_one2 = pi.StateOne("Rb", 72, 0, 0.5, 0.5)
# Set up one-atom system
system_one = pi.SystemOne(state_one1.getSpecies(), cache)
system_one.restrictEnergy(min(state_one1.getEnergy(),state_one2.getEnergy()) - 30, \
max(state_one1.getEnergy(),st... | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
The pair state state_two is created from the one atom states state_one1 and state_one2 using the StateTwo class.
From the previously set up system_one we define system_two using the SystemTwo class. This class also provides set... methods to set the angle, distance, and surface distance, and enableGreenTensor in order to implement a... | # Set up pair state
state_two = pi.StateTwo(state_one1, state_one2)
# Set up two-atom system
system_two = pi.SystemTwo(system_one, system_one, cache)
system_two.restrictEnergy(state_two.getEnergy() - 3, state_two.getEnergy() + 3)
system_two.setAngle(theta)
system_two.setDistance(distance_atoms)
system_two.set... | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
We calculate the $C_6$ coefficients. The energyshift is given by the difference between the interaction energy at a given surface distance and the unperturbed energy of the two atom state state_two.getEnergy(). The $C_6$ coefficient is then given by the product of energyshift and distance_atoms**6.
idx is the index of th... | # Calculate C6 coefficients
C6 = []
for d in distance_surface:
system_two.setSurfaceDistance(d)
system_two.diagonalize()
idx = np.argmax(system_two.getOverlap(state_two, 0, -theta, 0))
energyshift = system_two.getHamiltonian().diagonal()[idx]-state_two.getEnergy()
C6.append(energyshift*distance_atom... | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
Simple Sounding
Use MetPy as straightforward as possible to make a Skew-T LogP plot. | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, SkewT
from metpy.units import units
# Change default to be better for skew-T
plt.rcParams['figure.figsize'] = (9, 9)
# Upper air data can be... | v0.9/_downloads/ef4bfbf049be071a6c648d7918a50105/Simple_Sounding.ipynb | metpy/MetPy | bsd-3-clause |
We will pull the data out of the example dataset into individual variables and
assign units. | p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
skew = SkewT()
# Plot the data using normal plotti... | v0.9/_downloads/ef4bfbf049be071a6c648d7918a50105/Simple_Sounding.ipynb | metpy/MetPy | bsd-3-clause |
Init SparkContext | from bigdl.dllib.nncontext import init_spark_on_local, init_spark_on_yarn
import numpy as np
import os
hadoop_conf_dir = os.environ.get('HADOOP_CONF_DIR')
if hadoop_conf_dir:
sc = init_spark_on_yarn(
hadoop_conf=hadoop_conf_dir,
conda_name=os.environ.get("ZOO_CONDA_NAME", "zoo"), # The name of the created ... | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
A simple parameter server can be implemented as a Python class in a few lines of code.
EXERCISE: Make the ParameterServer class an actor. | dim = 10
@ray.remote
class ParameterServer(object):
def __init__(self, dim):
self.parameters = np.zeros(dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
ps = ParameterServer.remote(dim)
| apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
A worker can be implemented as a simple Python function that repeatedly gets the latest parameters, computes an update to the parameters, and sends the update to the parameter server. | @ray.remote
def worker(ps, dim, num_iters):
for _ in range(num_iters):
# Get the latest parameters.
parameters = ray.get(ps.get_parameters.remote())
# Compute an update.
update = 1e-3 * parameters + np.ones(dim)
# Update the parameters.
ps.update_parameters.remote(upd... | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
As the worker tasks are executing, you can query the parameter server from the driver and see the parameters changing in the background. | print(ray.get(ps.get_parameters.remote())) | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
Sharding a Parameter Server
As the number of workers increases, the volume of updates being sent to the parameter server will increase. At some point, the network bandwidth into the parameter server machine or the computation down by the parameter server may be a bottleneck.
Suppose you have $N$ workers and $1$ paramet... | @ray.remote
class ParameterServerShard(object):
def __init__(self, sharded_dim):
self.parameters = np.zeros(sharded_dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
total_dim = (10 ** 8) // 8 # This work... | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
The code below implements a worker that does the following.
1. Gets the latest parameters from all of the parameter server shards.
2. Concatenates the parameters together to form the full parameter vector.
3. Computes an update to the parameters.
4. Partitions the update into one piece for each parameter server.
5. App... | @ray.remote
def worker_task(total_dim, num_iters, *ps_shards):
# Note that ps_shards are passed in using Python's variable number
# of arguments feature. We do this because currently actor handles
# cannot be passed to tasks inside of lists or other objects.
for _ in range(num_iters):
# Get the ... | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
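Steps 2 and 4 above are just array surgery; the sketch below illustrates them without Ray, using hypothetical shard boundaries:

```python
import numpy as np

# Ray-free illustration of steps 2 and 4: stitching shard vectors into the
# full parameter vector and cutting the update back along the same boundaries.
total_dim = 10
boundaries = [4, 8]  # shard sizes 4, 4, 2 (hypothetical layout)

shard_params = np.split(np.zeros(total_dim), boundaries)

# Step 2: concatenate shard parameters into one full vector.
parameters = np.concatenate(shard_params)

# Step 3: compute an update on the full vector (same rule as the worker above).
update = 1e-3 * parameters + np.ones(total_dim)

# Step 4: partition the update so each piece matches its shard.
update_shards = np.split(update, boundaries)
print([len(u) for u in update_shards])  # → [4, 4, 2]
```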
EXERCISE: Experiment by changing the number of parameter server shards, the number of workers, and the size of the data.
NOTE: Because these processes are all running on the same machine, network bandwidth will not be a limitation and sharding the parameter server will not help. To see the difference, you would need to... | num_workers = 4
# Start some workers. Try changing various quantities and see how the
# duration changes.
start = time.time()
ray.get([worker_task.remote(total_dim, 5, *ps_shards) for _ in range(num_workers)])
print('This took {} seconds.'.format(time.time() - start)) | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
Authentication
In order to run this tutorial successfully, we need to be authenticated first.
Depending on where we are running this notebook, the authentication steps may vary:
| Runner | Authentication Steps |
| ----------- | ----------- |
| Local Computer | Use a service account, or run the following comm... | try:
from google.colab import auth
print("Authenticating in Colab")
auth.authenticate_user()
print("Authenticated")
except: # noqa
print("This notebook is not running on Colab.")
print("Please make sure to follow the authentication steps.") | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Configurations
Let's make sure we enter the name of our GCP project in the next cell. | # ENTER THE GCP PROJECT HERE
gcp_project = "YOUR-GCP-PROJECT"
print(f"gcp_project is set to {gcp_project}")
def helper_function():
"""
Add a description about what this function does.
"""
return None | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Data Preparation
Query the Data | query = """
SELECT
created_date, category, complaint_type, neighborhood, latitude, longitude
FROM
`bigquery-public-data.san_francisco_311.311_service_requests`
LIMIT 1000;
"""
bqclient = bigquery.Client(project=gcp_project)
dataframe = bqclient.query(query).result().to_dataframe() | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Check the Dataframe | print(dataframe.shape)
dataframe.head() | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Process the Dataframe | # Convert the datetime to date
dataframe['created_date'] = dataframe['created_date'].apply(datetime.date) | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
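The same conversion can also be done with pandas' vectorized `.dt` accessor, which avoids a Python-level `apply` over every row. A minimal sketch on synthetic data (the real BigQuery result isn't available here):

```python
import datetime

import pandas as pd

# Synthetic stand-in for the BigQuery result (hypothetical data).
df = pd.DataFrame({"created_date": pd.to_datetime(
    ["2021-01-01 12:30:00", "2021-01-02 08:00:00"])})

# Vectorized datetime -> date conversion.
df["created_date"] = df["created_date"].dt.date
```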
2.1 Remove Dups:
Write code to remove duplicates from an unsorted linked list.
FOLLOW UP
How would you solve this problem if a temporary buffer is not allowed? |
List = Node(1, Node(2, Node(3, Node(4, Node(4, Node(4, Node(3, Node(2, Node(1)))))))))
def remove_dups(List):
marks = {}
cur = List
prev = None
while cur != None:
if marks.get(cur.value, 0) == 0: # not duplicated
marks[cur.value] = 1
else: # duplicated
p... | Issues/algorithms/Linked Lists.ipynb | stereoboy/Study | mit |
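For the follow-up (no temporary buffer), the standard answer trades time for space: keep a second "runner" pointer that deletes every later duplicate of the current node, giving O(n²) time but O(1) extra space. A sketch, with a minimal `Node` redefined so the snippet is self-contained:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def remove_dups_no_buffer(head):
    # O(n^2) time, O(1) space: no hash table, just a second pointer.
    current = head
    while current is not None:
        runner = current
        while runner.next is not None:
            if runner.next.value == current.value:
                runner.next = runner.next.next  # unlink the duplicate
            else:
                runner = runner.next
        current = current.next
    return head

def to_list(head):
    # Helper: collect node values for inspection.
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```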
2.2 Return Kth to Last:
Implement an algorithm to find the kth to last element of a singly linked list. | List = Node(1, Node(2, Node(3, Node(4, Node(4, Node(4, Node(3, Node(2, Node(1, Node(3, Node(2)))))))))))
def kth_to_last(List, k):
cur = List
size = 0
while cur != None:
size += 1
cur = cur.next
if size < k:
return None
cur = List
for _ in range(size - k):
c... | Issues/algorithms/Linked Lists.ipynb | stereoboy/Study | mit |
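An alternative that avoids the explicit length-counting pass is the two-pointer ("runner") technique: advance one pointer k nodes ahead, then move both together until the lead pointer runs off the end. A sketch, again with a minimal `Node` for self-containment:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def kth_to_last_runner(head, k):
    lead = head
    for _ in range(k):          # put `lead` k nodes ahead of `trail`
        if lead is None:
            return None         # the list has fewer than k nodes
        lead = lead.next
    trail = head
    while lead is not None:     # when `lead` falls off, `trail` is kth from last
        lead = lead.next
        trail = trail.next
    return trail
```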
Generate a model
First we will generate a simple galaxy model using KinMS itself, whose parameters we can attempt to determine later. If you have your own observed galaxy to fit then of course this step can be skipped!
The make_model function below creates a simple exponential disc:
$
\begin{align}
\large \Sigma_... | def make_model(param,obspars,rad,filename=None,plot=False):
'''
This function takes in the `param` array (along with obspars; the observational setup,
and a radius vector `rad`) and uses it to create a KinMS model.
'''
total_flux=param[0]
posAng=param[1]
inc=param[2]
v_flat=param[3]... | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
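The radial ingredients of such a model — an exponential surface-brightness profile and a rotation curve that rises to `v_flat` over a turnover radius — can be written as plain NumPy expressions. The arctangent form below is a common choice for this kind of toy model; treat the exact functional forms here as an assumption rather than a copy of `make_model`:

```python
import numpy as np

def exp_disc(rad, scalerad):
    # Exponential surface-brightness profile: Sigma(r) ~ exp(-r / r_scale).
    return np.exp(-rad / scalerad)

def arctan_curve(rad, v_flat, r_turn):
    # Rotation curve rising over ~r_turn to an asymptotic velocity v_flat.
    return (2.0 * v_flat / np.pi) * np.arctan(rad / r_turn)

rad = np.arange(0.0, 100.0, 0.3)
sb = exp_disc(rad, scalerad=5.0)
vel = arctan_curve(rad, v_flat=200.0, r_turn=2.0)
```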
Note that we have set fixSeed=True in the KinMS call - this is crucial if you are fitting with KinMS. It ensures if you generate two models with the same input parameters you will get an identical output model!
Now we have our model function, let's use it to generate a model which we will later fit. The first thing we ... | ### Setup cube parameters ###
obspars={}
obspars['xsize']=64.0 # arcseconds
obspars['ysize']=64.0 # arcseconds
obspars['vsize']=500.0 # km/s
obspars['cellsize']=1.0 # arcseconds/pixel
obspars['dv']=20.0 # km/s/channel
obspars['beamsize']=np.array([4.0,4.0,0]) # [bmaj,bmin,bpa] in (arcsec, arcsec, degrees) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
We also need to create a radius vector; you ideally want this to oversample your pixel grid somewhat to avoid interpolation errors! | rad=np.arange(0,100,0.3) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Now we have all the ingredients we can create our data to fit. Here we will also output the model to disc, so we can demonstrate how to read in the header keywords from real ALMA/VLA etc data. | '''
True values for the flux, posang, inc etc, as defined in the model function
'''
guesses=np.array([30.,270.,45.,200.,2.,5.])
'''
RMS of data. Here we are making our own model so this is arbitrary.
When fitting real data this should be the observational RMS
'''
error=np.array(1e-3)
fdata=make_model(guesses,obspa... | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Read in the data
In this example we already have our data in memory. But if you are fitting a real datacube this won't be the case! Here we read in the model we just created from a FITS file to make it clear how to do this.
hdulist = fits.open('Test_simcube.fits',ignore_blank=True)
fdata = hdulist[0].data.T
### Setup cube parameters ###
obspars={}
obspars['cellsize']=np.abs(hdulist[0].header['cdelt1']*3600.) # arcseconds/pixel
obspars['dv']=np.abs(hdulist[0].header['cdelt3']/1e3) # km/s/channel
o... | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Fit the model
Now we have our 'observational' data read into memory, and a model function defined, we can fit one to the other! As our fake model is currently noiseless, let's add some Gaussian noise (obviously don't do this if your data is from a real telescope!): | fdata+=(np.random.normal(size=fdata.shape)*error) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Below we will proceed using the MCMC code GAStimator, which was specifically designed to work with KinMS; however, any minimiser should work in principle. For full details of how this code works, and a tutorial, see https://github.com/TimothyADavis/GAStimator. | from gastimator import gastimator,corner_plot
mcmc = gastimator(make_model,obspars,rad)
mcmc.labels=np.array(['Flux','posAng',"Inc","VFlat","R_turn","scalerad"])
mcmc.min=np.array([30.,1.,10,50,0.1,0.1])
mcmc.max=np.array([30.,360.,80,400,20,10])
mcmc.fixed=np.array([True,False,False,False,False,False])
mcmc.precisio... | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Setting good priors on the flux of your source is crucial to ensure the model outputs are physical. Luckily the integrated flux of your source should be easy to measure from your datacube! If you have a good measurement of this, then I would recommend forcing the total flux to that value by fixing it in the model (set ... | model=make_model(mcmc.guesses,obspars,rad) # make a model from your guesses
KinMS_plotter(fdata, obspars['xsize'], obspars['ysize'], obspars['vsize'], obspars['cellsize'],\
obspars['dv'], obspars['beamsize'], posang=guesses[1],overcube=model,rms=error).makeplots() | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
As you can see, the black contours of the model aren't a perfect match to the moment zero, spectrum and position-velocity diagram extracted from our "observed" datacube. One could tweak by hand, but as these are already close we can go on to do a fit!
If you are experimenting then running until convergence should be go... | outputvalue, outputll= mcmc.run(fdata,error,3000,plot=False) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
As you can see, the final parameters (listed in the output with their 1$\sigma$ errors) are pretty close to those we input! One could use the corner_plot routine shipped with GAStimator to visualize our results, but with only 3000 steps (and a $\approx$30% acceptance rate) these won't be very pretty. If you need good error... | bestmodel=make_model(np.median(outputvalue,1),obspars,rad) # make a model from your guesses
KinMS_plotter(fdata, obspars['xsize'], obspars['ysize'], obspars['vsize'], obspars['cellsize'],\
obspars['dv'], obspars['beamsize'], posang=guesses[1],overcube=bestmodel,rms=error).makeplots() | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Tiny error problem
I have found that fitting whole datacubes with kinematic modelling tools such as KinMS can yield unphysically small uncertainties, for instance constraining inclination to $\pm\approx0.1^{\circ}$ in the fit example performed above. This is essentially a form of model mismatch - you are finding the ver... | error*=((2.0*fdata.size)**(0.25))
outputvalue, outputll= mcmc.run(fdata,error,3000,plot=False) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
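For concreteness, the scale of this correction for the cube generated above (64 × 64 pixels and 500/20 = 25 velocity channels) is easy to work out:

```python
# Number of (pixel, channel) elements in the cube defined by obspars above.
n_elements = 64 * 64 * 25          # 64x64 spatial, 500 km/s / 20 km/s = 25 channels
inflation = (2.0 * n_elements) ** 0.25
# The observational RMS is inflated by roughly a factor of 21 before fitting.
```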
What is Monte Carlo (MC) Integration?
Let us say that we want to approximate the area between the curve defined by $f(x) = x^2 + 3x + \ln{x}$ and the x-axis for $x\in [1,5]$. | def f(x):
return x**2 + 3*x + np.log(x)
step= 0.001
x = np.arange(1,5+step*0.1,step)
y = f(x)
print x.min(), x.max()
print y.min(), y.max()
plt.plot(x, y, lw=2., color="r")
plt.fill_between(x, 0, y, color="r", alpha=0.5)
plt.axhline(y=0, lw=1., color="k", linestyle="--")
plt.axhline(y=y.max(), lw=1., color="k", li... | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
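The exact value quoted for comparison follows from the antiderivative $F(x) = x^3/3 + 3x^2/2 + x\ln x - x$, evaluated at the endpoints (stdlib-only check):

```python
import math

def F(x):
    # Antiderivative of f(x) = x**2 + 3*x + log(x).
    return x**3 / 3.0 + 1.5 * x**2 + x * math.log(x) - x

true_area = F(5.0) - F(1.0)  # approximately 81.381
```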
Concretely, we are interested in knowing the area of the red-shaded region in the above figure. Furthermore, I have also provided a rectangular bounding box for the range of values of $x$ and $y$. The true value of the area under the curve is $\sim{81.381}$ using its analytic integral formula (see http://www.wolframalp... | @jit
def get_MC_area(x, y, f, N=10**5, plot=False):
x_rands = x.min() + np.random.rand(N) * (x.max() - x.min())
y_rands = np.random.rand(N) * y.max()
y_true = f(x_rands)
integral_idx = (y_rands <= y_true)
if plot:
plt.plot(x_rands[integral_idx], y_rands[integral_idx],
alpha=... | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
As we can observe, the number of points which fall inside the region of interest is proportional to the area of the region. The estimate, however, is only marginally close to the true area of $81.38$. Let us also try with a higher value of $N=10^7$ | area = get_MC_area(x, y, f, N=10**7, plot=True)
print "Area is: %.3f" % area | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
The above figure shows that for $N=10^7$, the region covered by the sampled points is almost as smooth as the shaded region. Furthermore, the area is closer to the true value of $81.38$.
Now let us also analyze how the value of the calculated area changes with the number of sampled points. | for i in xrange(2,8):
area = get_MC_area(x, y, f, N=10**i, plot=False)
print i, area | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
Clearly, as the number of points increases, the area becomes closer to the true value.
Let us further examine this change by starting with $10^3$ points and then going all the way up to $10^6$ points. | %%time
N_vals = 1000 + np.arange(1000)*1000
areas = np.zeros_like(N_vals, dtype="float")
for i, N in enumerate(N_vals):
area = get_MC_area(x, y, f, N=N, plot=False)
areas[i] = area
print "Mean area of last 100 points: %.3f" % np.mean(areas[-100:])
print "Areas of last 10 points: ", areas[-10:]
plt.plot(N_vals... | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
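The slow convergence visible above is fundamental to hit-or-miss Monte Carlo: the standard error of the estimate scales as $1/\sqrt{N}$, so every additional decimal digit of accuracy costs roughly 100× more samples. A quick self-contained sanity check of that rate, estimating $\pi$ rather than the integral above purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pi(n):
    # Hit-or-miss estimate of pi from the unit quarter disc.
    pts = rng.random((n, 2))
    hits = (pts ** 2).sum(axis=1) <= 1.0
    return 4.0 * hits.mean()

# Error should shrink roughly tenfold for every 100x increase in samples.
errors = {n: abs(mc_pi(n) - np.pi) for n in (10**2, 10**4, 10**6)}
```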
3. Enter CM360 Segmentology Recipe Parameters
Wait for BigQuery->->->Census_Join to be created.
Join the StarThinker Assets Group to access the following assets
Copy CM360 Segmentology Sample. Leave the Data Source as is, you will change it in the next step.
Click Edit Connection, and change to BigQuery->->->Census_Jo... | FIELDS = {
'account':'',
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Authorization used for writing data.
'recipe_name':'', # Name of report, not needed if ID used.
'date_range':'LAST_365_DAYS', # Timeframe to run report for.
'recipe_slug':'', # Name of Google Big... | colabs/cm360_segmentology.ipynb | google/starthinker | apache-2.0 |
4. Execute CM360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play. | from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'description':'Create a dataset for bigquery tables.',
'hour':[
4
],
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'ser... | colabs/cm360_segmentology.ipynb | google/starthinker | apache-2.0 |
Add your own dictionary | # Dict objects can also be used to check words against a custom list of correctly-spelled words
# known as a Personal Word List. This is simply a file listing the words to be considered, one word per line.
# The following example creates a Dict object for the personal word list stored in “mywords.txt”:
pwl = enchant.... | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
check entire phrase | from enchant.checker import SpellChecker
chkr = SpellChecker("it_IT")
chkr.set_text("questo è un picclo esmpio per dire cm funziona")
for err in chkr:
print(err.word)
print(chkr.suggest(err.word))
print(chkr.word, chkr.wordpos)
chkr.replace('pippo')
chkr.get_text() | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
tokenization
As explained above, the module enchant.tokenize provides the ability to split text into its component words. The current implementation is based only on the rules for the English language, and so might not be completely suitable for your language of choice. Fortunately, it is straightforward to extend the ... | from enchant.tokenize import get_tokenizer
tknzr = get_tokenizer("en_US") # no tokenizer available for it_IT up to now
[w for w in tknzr("this is some simple text")]
from enchant.tokenize import get_tokenizer, HTMLChunker
tknzr = get_tokenizer("en_US")
[w for w in tknzr("this is <span class='important'>really important</span> text")... | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
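If no tokenizer exists for your language, a rough substitute that mimics enchant's `(word, offset)` output can be built from the standard library. This is a sketch only — it ignores the language-specific rules (e.g. apostrophe handling) that enchant's English tokenizer implements:

```python
import re

def simple_tokenizer(text):
    # Yield (word, offset) pairs, like enchant's tokenizers do.
    for match in re.finditer(r"\w+", text):
        yield match.group(0), match.start()

tokens = list(simple_tokenizer("questo è un piccolo esempio"))
```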
Other modules:
- CmdLineChecker
The module enchant.checker.CmdLineChecker provides the class CmdLineChecker which can be used to interactively check the spelling of some text. It uses standard input and standard output to interact with the user through a command-line interface. The code below shows how to create and us... | import gensim, logging
from gensim.models import Word2Vec
model = gensim.models.KeyedVectors.load_word2vec_format(
'../Data_nlp/GoogleNews-vectors-negative300.bin.gz', binary=True)
model.doesnt_match("breakfast brian dinner lunch".split())
# give text with w1 w2 your_distance to check if model and w1-w2 have gi... | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
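Under the hood, distance and similarity queries on word vectors reduce to cosine similarity. Computed by hand on made-up 3-d vectors (illustrative values only — real word2vec vectors here have 300 dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical tiny embeddings for illustration.
v_breakfast = np.array([0.9, 0.1, 0.0])
v_lunch = np.array([0.8, 0.2, 0.1])
v_brian = np.array([0.0, 0.1, 0.9])

sim_meals = cosine_similarity(v_breakfast, v_lunch)  # near 1: similar contexts
sim_odd = cosine_similarity(v_breakfast, v_brian)    # near 0: unrelated
```

This is also the idea behind `doesnt_match`: the word whose vector is least similar to the mean of the group is the odd one out.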
conclusion:
- distance works well
- the order of the words is not taken into account
Translate using google translate
https://github.com/ssut/py-googletrans
should be free and unlimited; an internet connection is required
pip install googletrans | from googletrans import Translator
o = open("../AliceNelPaeseDelleMeraviglie.txt")
all = ''
for l in o: all += l
translator = Translator()
for i in range(42, 43, 1):
print(all[i * 1000:i * 1000 + 1000], end='\n\n')
print(translator.translate(all[i * 1000:i * 1000 + 1000], dest='en').text)
## if language is ... | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
TreeTagger usage to tag an Italian (or other language) sentence
How To install:
- nltk needs to be already installed and working
- follow the instructions from http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/
- run TreeTagger in a terminal (echo 'Ciao Giulia come stai?' | tree-tagger-italian) to see if everything... | from treetagger import TreeTagger
tt = TreeTagger(language='english')
tt.tag('What is the airspeed of an unladen swallow?')
tt = TreeTagger(language='italian')
tt.tag('Proviamo a vedere un pò se funziona bene questo tagger') | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
This analysis was done by DataKind DC on behalf of the Consumer Product Safety Commission. This serves as a preliminary study of the NEISS dataset. We have been in contact with the CPSC to figure out which important questions we can offer insight into. The questions that were analyzed were:
Are there produc... | data.data['product'].value_counts()[0:9] | reports/neiss.ipynb | minh5/cpsc | mit |
Looking further, I examine which hospitals report these products the most, so we can focus on the hospitals with the highest incident counts. | data.get_hospitals_by_product('product_1842')
data.get_hospitals_by_product('product_1807') | reports/neiss.ipynb | minh5/cpsc | mit |
We can also view these as plots and compare the incident rates of these products across different hospitals. | data.plot_product('product_1842')
data.plot_product('product_1807') | reports/neiss.ipynb | minh5/cpsc | mit |