1b. Book prices Calculate book prices for the following scenarios: Suppose the price of a book is 24.95 EUR, but if the book is bought by a bookstore, they get a 30 percent discount (as opposed to customers buying from an online store). Shipping costs 3 EUR for the first copy and 75 cents for each additional copy. Shi...
# complete the code below
n_books =
customer_is_bookstore =
# your book price calculations here
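One possible solution sketch for the scenario above. The order size is a hypothetical example, and it assumes bookstores pay the same shipping rule as other customers (the truncated prompt may specify otherwise):

```python
book_price = 24.95          # EUR, list price per copy
n_books = 60                # hypothetical order size
customer_is_bookstore = True

# bookstores get a 30 percent discount on the list price
if customer_is_bookstore:
    unit_price = book_price * 0.70
else:
    unit_price = book_price

# shipping: 3 EUR for the first copy, 75 cents for each additional copy
shipping = 3.00 + 0.75 * (n_books - 1)

total = n_books * unit_price + shipping
print('Total price: %.2f EUR' % total)
```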
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
1c. The modulus operator There is one operator (like the ones for multiplication and subtraction) that we did not discuss yet, namely the modulus operator %. Could you figure out by yourself what it does when you place it between two numbers (e.g. 113 % 9)? (PS: Try to figure it out by yourself first, by trying multipl...
# try out the modulus operator!
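A few trial expressions of the kind the exercise suggests (spoiler in the comments: % returns the remainder of integer division):

```python
print(113 % 9)   # 5, because 113 = 12 * 9 + 5
print(10 % 3)    # 1
print(10 % 2)    # 0, i.e. 10 is divisible by 2
```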
Help the cashier Can you use the modulus operator you just learned about to solve the following task? Imagine you want to help cashiers to return the change in a convenient way. This means you do not want to return hands full of small coins, but rather use bills and as few coins as possible. Write code that classifie...
# cashier code
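One way to sketch the idea: use // to count how many of the largest denomination fit, and % to keep the remainder. The amount and the denomination set are illustrative, not taken from the (truncated) prompt:

```python
amount = 1289  # change to return, in cents (hypothetical example)

# work from the largest denomination down (values in cents)
denominations = [1000, 500, 200, 100, 50, 20, 10, 5, 2, 1]
for d in denominations:
    count = amount // d      # how many of this bill/coin fit
    amount = amount % d      # what is left to return
    if count:
        print('%d x %d cents' % (count, d))
```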
Exercise 2: Printing and user input

2a. Difference between "," and "+" What is the difference between using + and , in a print statement? Illustrate by using both in each of the following:
- calling the print() function with multiple strings
- printing combinations of strings and integers
- concatenating multiple strings and...
name = input("Hello there! What is your name? ") # finish this code
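A quick illustration of the difference: , lets print() join any values with spaces, while + is real string concatenation and only works between strings (the sample name and age are arbitrary):

```python
name = 'Alice'
age = 30

print('Name:', name, 'Age:', age)   # comma: print() adds spaces, mixed types OK
print('Name: ' + name)              # plus: builds one concatenated string
print('Age: ' + str(age))           # integers must be converted first
# print('Age: ' + age)              # would raise TypeError: can only concatenate str
```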
Exercise 3: String Art

3a. Drawing figures We start with some repetition of the theory about strings:

| Topic | Explanation |
|-----------|--------|
| quotes | A string is delimited by single quotes ('...') or double quotes ("...") |
| special characters | Certain special characters can be used, such as "\n" (for...
print('hello\n')
print('To print a newline use \\n')
print('She said: \'hello\'')
print('\tThis is indented')
print('This is a very, very, very, very, very, very \
long print statement')
print(''' This is a multi-line print statement
First line
Second line
''')
Now write a Python script that prints the following figure using only one line of code! (so don't use triple quotes)

| |
@ @
 u
|"""|
# your code here
3b. Colors We start again with some repetition of the theory:

| Topic | Explanation |
|-----------|--------|
| a = b + c | if b and c are strings: concatenate b and c to form a new string a |
| a = b * c | if b is an integer and c is a string: c is repeated b times to form a new string a |
| a[0] | the first charac...
b = 'the'
c = 'cat'
d = ' is on the mat'
a = b + ' ' + c + d
print(a)
a = b * 5
print(a)
print('The first character of', c, 'is', c[0])
print('The word c has', len(c), 'characters')
Now write a program that asks users for their favorite color. Create the following output (assuming "red" is the chosen color). Use "+" and "*". It should work with any color name though.

red red red red red red red red red red red red red red red re...
color = input('what is your favorite color? ')
print(color)
print(color)
print(color)
print(color)
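One possible sketch using only + and *; the row and column counts are guesses, since the expected output above is truncated:

```python
color = 'red'   # in the notebook you would use: color = input('what is your favorite color? ')

row = (color + ' ') * 5        # one row: the color repeated 5 times
print((row + '\n') * 3)        # hypothetical: 3 such rows
```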
Exercise 4: String methods Remember that you can see all methods of the class str by using dir(). You can ignore all methods that start with one or two underscores.
dir(str)
To see the explanation for a method of this class, you can use help(str.method). For example:
help(str.upper)
4a. Counting vowels Count how many of each vowel (a,e,i,o,u) there are in the text string in the next cell. Print the count for each vowel with a single formatted string. Remember that vowels can be both lower and uppercase.
text = """But I must explain to you how all this mistaken idea of denouncing pleasure and praising pain was born and I will give you a complete account of the system, and expound the actual teachings of the great explorer of the truth, the master-builder of human happiness. No one rejects, dislikes, or avoids pleasur...
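A sketch of one approach using str.lower() and str.count(); a short stand-in string is used here in place of the full text variable defined above:

```python
text = "But I must explain to you how all this mistaken idea was born"  # stand-in

lowered = text.lower()  # so uppercase vowels are counted too
counts = {v: lowered.count(v) for v in 'aeiou'}
print('a: {a}, e: {e}, i: {i}, o: {o}, u: {u}'.format(**counts))
```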
4b. Printing the lexicon Have a good look at the internal representation of the string below. Use a combination of string methods (you will need at least 3 different ones and some will have to be used multiple times) in the correct order to remove punctuation and redundant whitespaces, and print each word in lowercase ...
text = """ The quick, brown fox jumps over a lazy dog.\tDJs flock by when MTV ax quiz prog.
Junk MTV quiz graced by fox whelps.\tBawds jog, flick quartz, vex nymphs.
Waltz, bad nymph, for quick jigs vex!\tFox nymphs grab quick-jived waltz.
Brick quiz whangs jumpy veldt fox. """

print(text)
print()
p...
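A sketch of the kind of method chain the exercise intends, combining lower(), replace(), strip(), and split(); the exact punctuation set to remove is a guess, and a single-line stand-in is used for the text:

```python
text = "The quick, brown fox jumps over a lazy dog.\tDJs flock by when MTV ax quiz prog."

cleaned = text.lower()
for mark in ',.!':                      # strip the punctuation marks we expect
    cleaned = cleaned.replace(mark, '')
for word in cleaned.split():            # split() also collapses tabs and runs of spaces
    print(word.strip())
```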
4c. Passwords Write a program that asks a user for a password and checks some simple requirements of a password. If necessary, print out the following warnings (use if-statements): Your password should contain at least 6 characters. Your password should contain no more than 12 characters. Your password only contains a...
# your code here
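One possible set of if-statements. The password value is fixed here instead of using input(), and the third (truncated) requirement is assumed to mean warning when the password is alphabetic only:

```python
password = 'secret'   # in the notebook: password = input('Choose a password: ')

if len(password) < 6:
    print('Your password should contain at least 6 characters.')
if len(password) > 12:
    print('Your password should contain no more than 12 characters.')
if password.isalpha():
    # assumption about the truncated requirement
    print('Your password only contains alphabetic characters!')
```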
Exercise 5: Boolean Logic and Conditions 5a. Speeding Write code to solve the following scenario: You are driving a little too fast, and a police officer stops you. Write code to compute and print the result, encoded as a string: 'no ticket', 'small ticket', 'big ticket'. If speed is 60 or less, the result is 'no ticke...
# your code here
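A sketch of the conditional logic. Only the 60 km/h threshold is visible in the prompt; the upper bound of 80 for a small ticket is a hypothetical cutoff standing in for the truncated part:

```python
speed = 75  # example value

if speed <= 60:
    result = 'no ticket'
elif speed <= 80:               # hypothetical upper bound for a small ticket
    result = 'small ticket'
else:
    result = 'big ticket'
print(result)
```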
5b. Alarm clock Write code to set your alarm clock! Given the day of the week and information about whether you are currently on vacation or not, your code should print the time you want to be woken up following these constraints: Weekdays, the alarm should be "7:00" and on the weekend it should be "10:00". Unless we a...
# your code here
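A sketch of one encoding: days as 0-6 with 0 = Monday (an assumption), and a guess at the truncated vacation rule (weekdays shift to "10:00", weekends to "off"):

```python
day = 6            # 0 = Monday ... 6 = Sunday (hypothetical encoding)
vacation = False

weekend = day >= 5
if vacation:
    # assumption about the truncated vacation constraint
    alarm = 'off' if weekend else '10:00'
else:
    alarm = '10:00' if weekend else '7:00'
print(alarm)
```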
5c. Parcel delivery The required postage for an international parcel delivery service is calculated based on item weight and country of destination:

| Tariff zone | 0 - 2 kg | 2 - 5 kg | 5 - 10 kg | 10 - 20 kg | 20 - 30 kg |
|-------------|----------|----------|-----------|------------|------------|
| EUR 1 | € 13.00 |...
# your code here
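A sketch of a table lookup. Only the first cell of the tariff table is visible above (€ 13.00 for 0-2 kg in zone EUR 1), so every other rate below is a placeholder, not the real tariff:

```python
# weight brackets (upper bounds, kg); only the first EUR 1 rate (13.00)
# comes from the table above -- the remaining rates are placeholders
brackets = [2, 5, 10, 20, 30]
rates = {'EUR 1': [13.00, 19.50, 25.00, 35.00, 45.00]}  # placeholder values

def postage(zone, weight):
    for limit, rate in zip(brackets, rates[zone]):
        if weight <= limit:
            return rate
    raise ValueError('parcel heavier than 30 kg')

print(postage('EUR 1', 1.2))
```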
As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
%matplotlib inline

import phoebe
from phoebe import u  # units
import numpy as np
import matplotlib.pyplot as plt

logger = phoebe.logger()

b = phoebe.default_binary()
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Spots Let's add one spot to each of our stars in the binary. A spot is a feature, and needs to be attached directly to a component upon creation. Providing a tag for 'feature' is entirely optional - if one is not provided it will be created automatically.
b.add_feature('spot', component='primary', feature='spot01')
As a shortcut, we can also call add_spot directly.
b.add_spot(component='secondary', feature='spot02')
Relevant Parameters A spot is defined by the colatitude (where 0 is defined as the North (spin) Pole) and longitude (where 0 is defined as pointing towards the other star for a binary, or to the observer for a single star) of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsi...
print(b['spot01'])

b.set_value(qualifier='relteff', feature='spot01', value=0.9)
b.set_value(qualifier='radius', feature='spot01', value=30)
b.set_value(qualifier='colat', feature='spot01', value=45)
b.set_value(qualifier='long', feature='spot01', value=90)
To see the spot, add a mesh dataset and plot it.
b.add_dataset('mesh', times=[0, 0.25, 0.5, 0.75, 1.0], columns=['teffs'])

b.run_compute()

afig, mplfig = b.filter(component='primary', time=0.75).plot(fc='teffs', show=True)
Spot Corotation The positions (colat, long) of a spot are defined at t0 (note: t0@system, not necessarily t0_perpass or t0_supconj). If the stars are not synchronous, then the spots will corotate with the star. To illustrate this, let's set the syncpar > 1 and plot the mesh at three different phases from above.
b.set_value('syncpar@primary', 1.5)

b.run_compute(irrad_method='none')
At time=t0=0, we can see that the spot is where we defined it: 45 degrees south of the north pole and at 90 degrees longitude (where longitude of 0 is defined as pointing towards the companion star at t0).
print("t0 = {}".format(b.get_value('t0', context='system')))

afig, mplfig = b.plot(time=0, y='ws', fc='teffs', ec='None', show=True)
At a later time, the spot is still technically at the same coordinates, but longitude of 0 no longer corresponds to pointing to the companion star. The coordinate system has rotated along with the asynchronous rotation of the star.
afig, mplfig = b.plot(time=0.25, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
afig, mplfig = b.plot(time=0.5, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
afig, mplfig = b.plot(time=0.75, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
Since the syncpar was set to 1.5, one full orbit later the star (and the spot) has made an extra half-rotation.
afig, mplfig = b.plot(time=1.0, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
Find Peaks. Use the method from P. Du, W. A. Kibbe, S. M. Lin, Bioinformatics 2006, 22, 2059, the same one used in scipy.signal.find_peaks_cwt() and baselineWavelet. Wavelet transform
import os
import numpy as np
import matplotlib.pyplot as plt

data1 = np.genfromtxt(os.path.join('..', 'tests', 'data', 'raman-785nm.txt'))
x = data1[:, 0]
y = data1[:, 1]
plt.plot(x, y)
notebooks/find_background.ipynb
rohanisaac/spectra
gpl-3.0
Find ridge lines
widths = np.arange(1, 71)
cwtmat = signal.cwt(y, signal.ricker, widths)
plt.imshow(cwtmat, aspect='auto', cmap='PRGn')

# Find local maxima
# make a binary array containing local maximum of transform, with same shape
lmax = np.zeros(cwtmat.shape)
for i in range(cwtmat.shape[0]):
    lmax[i, signal.argrelextrema(cwtmat[i...
For now use scipy.signal.find_peaks_cwt(), compare with my own implementation
fig, ax = plt.subplots(24, figsize=(10, 10))
for w in range(3):
    for l in range(2, 10):
        a = ax[w*8 + (l-2)]
        peaks = peak_pos[np.all(((peak_width > w), (peak_len > l)), axis=0)]
        a.plot(x, y)
        a.plot(x[peaks], y[peaks], 'rx', label='w%i, l%i' % (w, l))
        #a.legend()

# find peaks usin...
Estimate Peak widths. Procedure from Zhang et al.:
1. Perform CWT with a Haar wavelet using the same scales as in peak finding. The result is an M x N matrix.
2. Take the absolute value of all entries.
3. For each peak from peak detection there are two parameters: index and scale.
   a. The row corresponding to the scale is taken out.
   b. Search for local minima to three times...
# analyze the ricker wavelet to help build the ricker wavelet
points = 100
for a in range(2, 11, 2):
    wave = signal.ricker(points, a)
    plt.plot(wave)
# note, all integrate to 0

# make a haar mother wavelet
def haar2(points, a):
    """ Returns a haar wavelet mother wavelet
    1 if 0 <= t...
Search for local minima in the row corresponding to the peak's scale, within 3x the peak's scale of the peak index
for p in peak_pos:
    print(p)
Open questions/issues:
- Should we be recording other observing meta-data? How about SFR, M*, etc.?

DEIMOS Targeting:
- Pull mask target info from Mask files :: parse_deimos_mask_file
- Pull other target info from SExtractor output
  - Requires yaml file describing target criteria
  - And the SExtractor output file
- Sample output o...
#### Sample of target file
from astropy.table import Table
import numpy as np

fil = '/Users/xavier/CASBAH/Galaxies/PG1407+265/PG1407+265_targets.fits'
targ = Table.read(fil)
#
mt = np.where(targ['MASK_NAME'] != 'N/A')[0]
targ[mt[0:5]]
xastropy/casbah/CASBAH_galaxy_database.ipynb
profxj/xastropy
bsd-3-clause
Testing
fil = '/Users/xavier/CASBAH/Galaxies/PG1407+265/PG1407+265_targets.fits'
tmp = Table.read(fil, fill_values=[('N/A', '0', 'MASK_NAME')], format='fits')
Loading text data with tf.data
import tensorflow as tf
import tensorflow_datasets as tfds
import os
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
The three translations are by:
- William Cowper (text)
- Edward, Earl of Derby (text)
- Samuel Butler (text)

The text files used in this tutorial have had some typical preprocessing applied, mainly removing headers and footers, line numbers, and chapter titles. Download these lightly preprocessed files.
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

for name in FILE_NAMES:
    text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL + name)

parent_dir = os.path.dirname(text_dir)

parent_dir
Load text into datasets. Iterate through the files, loading each file into its own dataset. Each example needs to be labeled individually, so use tf.data.Dataset.map to apply a labeler function to each one. This iterates over every example in the dataset, returning (example, label) pairs.
def labeler(example, index):
    return example, tf.cast(index, tf.int64)

labeled_data_sets = []

for i, file_name in enumerate(FILE_NAMES):
    lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name))
    labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
    labeled_data_sets.append(labeled_dataset)
Combine these labeled datasets into a single dataset, then shuffle it.
BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000

all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
    all_labeled_data = all_labeled_data.concatenate(labeled_dataset)

all_labeled_data = all_labeled_data.shuffle(
    BUFFER_SIZE, reshuffle_each_iteration=False)
You can use tf.data.Dataset.take together with print to see what the (example, label) pairs look like. The numpy property shows each Tensor's value.
for ex in all_labeled_data.take(5):
    print(ex)
Encode text lines as numbers. Machine learning works on numbers, not text, so the strings need to be converted into lists of numbers. To do that, build a mapping from text to integers.

Build the vocabulary. First, build a vocabulary by tokenizing the text into a collection of individual words. There are several ways to do this in both TensorFlow and Python. For this tutorial:
1. Iterate over each example's numpy value.
2. Use tfds.features.text.Tokenizer to split it into tokens.
3. Collect these tokens into a Python set, to remove duplicates.
4. Get the size of the vocabulary for later use.
tokenizer = tfds.features.text.Tokenizer()

vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
    some_tokens = tokenizer.tokenize(text_tensor.numpy())
    vocabulary_set.update(some_tokens)

vocab_size = len(vocabulary_set)
vocab_size
Encode examples. Create an encoder by passing the vocabulary_set to tfds.features.text.TokenTextEncoder. The encoder's encode method takes in a line of text and returns a list of integers.
encoder = tfds.features.text.TokenTextEncoder(vocabulary_set)
You can try running this line of code to see what the output looks like.
example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)

encoded_example = encoder.encode(example_text)
print(encoded_example)
Now run the encoder on the dataset by wrapping it in tf.py_function and passing that to the dataset's map method.
def encode(text_tensor, label):
    encoded_text = encoder.encode(text_tensor.numpy())
    return encoded_text, label

def encode_map_fn(text, label):
    # py_func doesn't set the shape of the returned tensors.
    encoded_text, label = tf.py_function(encode,
                                         inp=[text, label],
                                         ...
Split the dataset into train and test batches. Use tf.data.Dataset.take and tf.data.Dataset.skip to create a small test dataset and a larger training set.

Before being passed into the model, the datasets need to be batched. Typically, the examples inside a batch need to be the same size and shape, but the examples in these datasets are not all the same size (each line of text had a different number of words). So use tf.data.Dataset.padded_batch (instead of batch) to pad the examples to the same size.
train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)

test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE)
Now, test_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays.
sample_text, sample_labels = next(iter(test_data))

sample_text[0], sample_labels[0]
Since we have introduced a new token encoding (the zero used for padding), the vocabulary size has increased by one.
vocab_size += 1
Build the model
model = tf.keras.Sequential()
The first layer converts integer representations to dense vector embeddings. See the Word Embeddings tutorial for more details.
model.add(tf.keras.layers.Embedding(vocab_size, 64))
The next layer is an LSTM layer, which lets the model interpret words in the context of the words around them. A bidirectional wrapper on the LSTM helps the model understand each data point's relationship to the data points before and after it.
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))
Finally we have one or more densely connected layers, the last of which is the output layer. The output layer produces a probability for each label; the label with the highest probability is the model's final prediction.
# One or more dense layers.
# Edit the list in the `for` line to experiment with layer sizes.
for units in [64, 64]:
    model.add(tf.keras.layers.Dense(units, activation='relu'))

# Output layer. The first argument is the number of labels.
model.add(tf.keras.layers.Dense(3, activation='softmax'))
Finally, compile the model. For a softmax classification model, sparse_categorical_crossentropy is the usual loss function. You can try other optimizers, but adam is the most common.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Train the model. The model trained on this data produces decent accuracy (around 83%).
model.fit(train_data, epochs=3, validation_data=test_data)

eval_loss, eval_acc = model.evaluate(test_data)

print('\nEval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
Getting the data The planet dataset isn't available on the fastai dataset page due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the Kaggle API as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on. First,...
# ! {sys.executable} -m pip install kaggle --upgrade
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
Then you need to upload your credentials from Kaggle on your instance. Log in to Kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'. Upload this ...
# ! mkdir -p ~/.kaggle/
# ! mv kaggle.json ~/.kaggle/

# For Windows, uncomment these two commands
# ! mkdir %userprofile%\.kaggle
# ! move kaggle.json %userprofile%\.kaggle
You're all set to download the data from planet competition. You first need to go to its main page and accept its rules, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a 403 forbidden error it means you haven't accepted the competition rules yet (you have to go to ...
path = Config.data_path()/'planet'
path.mkdir(parents=True, exist_ok=True)
path

# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
# ! unzip -q -n {path...
To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run sudo apt install p7zip-full in your terminal).
# ! conda install --yes --prefix {sys.prefix} -c haasad eidl7zip
And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
# ! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}
Multiclassification Contrary to the pets dataset studied in the last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here) we see that each 'image_name' is associated with several tags separated by spaces.
df = pd.read_csv(path/'train_v2.csv')
df.head()
To put this in a DataBunch while using the data block API, we then need to use ImageList (and not ImageDataBunch). This will make sure the model created has the proper loss function to deal with the multiple classes.
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
We use parentheses around the data block pipeline below, so that we can use a multiline statement without needing to add '\'.
np.random.seed(42)
src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)
       .label_from_df(label_delim=' '))

data = (src.transform(tfms, size=128)
        .databunch().normalize(imagenet_stats))
show_batch still works, and shows us the different labels separated by ;.
data.show_batch(rows=3, figsize=(12,9))
To create a Learner we use the same function as in lesson 1. Our base architecture is resnet50 again, but the metrics are a little bit different: we use accuracy_thresh instead of accuracy. In lesson 1, we determined the prediction for a given class by picking the final activation that was the biggest, but here, each...
arch = models.resnet50

acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = cnn_learner(data, arch, metrics=[acc_02, f_score])
We use the LR Finder to pick a good learning rate.
learn.lr_find()
learn.recorder.plot()
Then we can fit the head of our network.
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
...And fine-tune the whole model:
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')

data = (src.transform(tfms, size=256)
        .databunch().normalize(imagenet_stats))
learn.data = data
data.train_ds[0][0].shape

learn.freeze()
learn.lr_find()
learn.recorder.plot()
lr=1e...
You won't really know how you're going until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of 0.930.
learn.export()
fin (This section will be covered in part 2 - please don't ask about it just yet! :) )
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path}
#! 7za -bd -y -so x {path}/...
The reactGS folder "mimics" the actual react-groundstation GitHub repository: it copies only the file directory structure, not the full source code (which is a lot). I wanted to keep these scripts/notebooks/files built on top of that GitHub repository separate from the actual working...
import os

wherepodCommandsis = os.getcwd() + '/reactGS/server/udp/'
print(wherepodCommandsis)
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
Node.js (JavaScript) to JSON. Make a copy of server/udp/podCommands.js. In this copy, comment out var chalk = require('chalk') (this is the only thing you have to do manually). Run this in the directory containing your copy of podCommands.js: node traverse_podCommands.js This ...
import json

f_podCmds_json = open(wherepodCommandsis + 'podCmds_lst.json', 'rb')
rawjson_podCmds = f_podCmds_json.read()
f_podCmds_json.close()
print(type(rawjson_podCmds))

podCmds_lst = json.loads(rawjson_podCmds)
print(type(podCmds_lst))
print(len(podCmds_lst))  # there are 104 available commands for the pod!

for cmd in...
Dirty parsing of podCommands.js and the flight control parameters
f_podCmds = open(wherepodCommandsis + 'podCommands.js', 'rb')
raw_podCmds = f_podCmds.read()
f_podCmds.close()
print(type(raw_podCmds))
print(len(raw_podCmds))

# get the name of the functions
cmdnameslst = [func[:func.find("(")].strip() for func in raw_podCmds.split("function ")]
funcparamslst = [func[func.find("(")+1:f...
So the structure of our result is as follows: Python tuples (each of size 2):

(
  (Name of pod command as a string,
   None if there are no function parameters, or a Python list of function arguments),
  Python list [ Subsystem name as a string, parameter1 as a hex value, parameter2 as a hex ...
podCommandparams[:10]

try:
    import cPickle as pickle
except ImportError:
    import pickle

podCommandparamsfile = open("podCommandparams.pkl", 'wb')
pickle.dump(podCommandparams, podCommandparamsfile)
podCommandparamsfile.close()

# open up a pickle file like so:
podCommandparamsfile_recover = open("podCommandpa...
Going to .csv @nuttwerx and @ernestyalumni decided upon separating the multiple entries in a field by the semicolon ";":
tocsv = []
for cmd in podCommandparams_recover:
    name = cmd[0][0]
    funcparam = cmd[0][1]
    if funcparam is None:
        fparam = None
    else:
        fparam = ";".join(funcparam)
    udpparam = cmd[1]
    if udpparam is None:
        uname = None
        uparam = None
    else:
        uname = udpparam[0]
        ...
Add the headers in manually: 1 = Command name; 2 = Function args; 3 = Pod Node; 4 = Command Args
header = ["Command name", "Function args", "Pod Node", "Command Args"]
tocsv.insert(0, header)
The csv fields format is as follows: (function name), (function arguments, or None if there are none), (UDP transmit name, or None if there is no UDP transmit command), (UDP transmit parameters, 4 of them, separated by semicolons, or None if there is no UDP transmit command)
import csv

f_podCommands_tocsv = open("podCommands.csv", 'w')
tocsv_writer = csv.writer(f_podCommands_tocsv)
tocsv_writer.writerows(tocsv)
f_podCommands_tocsv.close()

# tocsv.insert(0, header)  # no need
# tocsv[:10]  # no need
Compute source power spectral density (PSD) of VectorView and OPM data. Here we compute the resting-state PSD from raw data recorded using a Neuromag VectorView system and a custom OPM system. The pipeline is meant to mostly follow the Brainstorm [1] OMEGA resting tutorial pipeline. The steps we use a...
# Authors: Denis Engemann <denis.engemann@gmail.com>
#          Luke Bloy <luke.bloy@gmail.com>
#          Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)

import os.path as op

from mne.filter import next_fast_len

import mne

print(__doc__)

data_path = mne.datasets.opm.data_path()
subject = 'OPM_s...
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load data, resample. We will store the raw objects in dicts with entries "vv" and "opm" to simplify housekeeping and looping later.
raws = dict()
raw_erms = dict()
new_sfreq = 90.  # Nyquist frequency (45 Hz) < line noise freq (50 Hz)
raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error')  # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fnam...
Do some minimal artifact rejection just for VectorView data
titles = dict(vv='VectorView', opm='OPM')
ssp_ecg, _ = mne.preprocessing.compute_proj_ecg(
    raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)
raws['vv'].add_proj(ssp_ecg, remove_existing=True)
# due to how compute_proj_eog works, it keeps the old projectors, so
# the output contains both projector types (and also ...
Explore data
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
    fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
    fig.suptitle(titles[kind])
    fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
Alignment and forward
# Here we use a reduced size source space (oct5) just for speed
src = mne.setup_source_space(
    subject, 'oct5', add_dist=False, subjects_dir=subjects_dir)

# This line removes source-to-source distances that we will not need.
# We only do it here to save a bit of memory, in general this is not required.
del src[0]['d...
Compute and apply inverse to PSD estimated using multitaper + Welch. Group into frequency bands, then normalize each source point and sensor independently. This makes the value of each sensor point and source location in each frequency band the percentage of the PSD accounted for by that band.
freq_bands = dict(
    delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())

snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
    noise_cov = mne.compute_raw_covariance(raw_erms[kind])
    inverse_operator = mne.minimum_norm....
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
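The band-normalization step described above can be sketched in plain NumPy; the PSD array here is synthetic, and normalization is shown per sensor as a fraction of total in-band power:

```python
import numpy as np

# Synthetic per-sensor PSD: (n_sensors, n_freqs) over 1..45 Hz
rng = np.random.default_rng(0)
freqs = np.arange(1, 46)
psd = rng.random((5, freqs.size))

freq_bands = dict(delta=(2, 4), theta=(5, 7), alpha=(8, 12),
                  beta=(15, 29), gamma=(30, 45))

# Sum PSD within each band, then divide by the total in-band power so each
# sensor's band values are the fraction of power that band accounts for
band_power = {name: psd[:, (freqs >= lo) & (freqs <= hi)].sum(axis=1)
              for name, (lo, hi) in freq_bands.items()}
total = sum(band_power.values())
band_frac = {name: bp / total for name, bp in band_power.items()}
print({name: bf.round(2) for name, bf in band_frac.items()})
```

By construction, the five band fractions sum to 1 for every sensor, which is the "percentage of the PSD accounted for by that band" idea on toy data.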
Now we can make some plots of each frequency band. Note that the OPM head coverage is only over right motor cortex, so only localization of beta is likely to be worthwhile.
Theta
def plot_band(kind, band): """Plot activity within a frequency band on the subject's brain.""" title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band]) topos[kind][band].plot_topomap( times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno', time_format=title) brai...
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Alpha
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Beta
Here we also show OPM data, which shows a profile similar to the VectorView data beneath the sensors.
fig_beta, brain_beta = plot_band('vv', 'beta')
fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Gamma
fig_gamma, brain_gamma = plot_band('vv', 'gamma')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The experiment gives participants two tests: a congruent task and an incongruent task. In the congruent task, the word and its font color agree, while in the incongruent task the word and its font color differ. Both tasks require the participants to say aloud the word being displayed, and press the 'Finish' butto...
df.describe()
p1-statistics/project.ipynb
napjon/ds-nd
mit
The measure of central tendency used here is the mean, and the measure of variability is the standard deviation.
df.plot.scatter(x='Congruent',y='Incongruent');
p1-statistics/project.ipynb
napjon/ds-nd
mit
The plot shows a moderately weak correlation between the congruent and incongruent variables.
(df.Incongruent - df.Congruent).plot.hist();
p1-statistics/project.ipynb
napjon/ds-nd
mit
We can see that the difference has a right-skewed distribution. This makes sense: since the congruent task is easier, there shouldn't be any participants who solve the incongruent task faster than the congruent task. And the longer the time participants took at solving the incongruent task, the less should be for ...
%%R
n = 24
mu = 7.964792
s = 4.864827
CL = 0.95
# z = round(qnorm((1-CL)/2, lower.tail=F), digits=2)
SE = s/sqrt(n)
t = mu/SE
t_crit = round(qt((1-CL)/2, df=n-1), digits=3)
c(t, c(-t_crit, t_crit))
p1-statistics/project.ipynb
napjon/ds-nd
mit
Since our t-statistic, 8.02, is higher than the t critical value, we can conclude that the data provide convincing evidence that the time participants took for the incongruent task is significantly different from the time they took for the congruent task.
Confidence Interval
%%R
# Use the critical value (not the t-statistic) for the margin of error
ME = abs(t_crit) * SE
c(mu - ME, mu + ME)
p1-statistics/project.ipynb
napjon/ds-nd
mit
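For readers following along in Python rather than R, the same t-statistic and confidence interval can be reproduced with SciPy (the numbers are copied from the cells above):

```python
import math
from scipy import stats

# Sample size, mean and sd of the paired differences, confidence level
n, mu, s, CL = 24, 7.964792, 4.864827, 0.95
se = s / math.sqrt(n)
t_stat = mu / se                                   # one-sample t-statistic
t_crit = stats.t.ppf(1 - (1 - CL) / 2, df=n - 1)   # two-sided critical value
me = t_crit * se                                   # margin of error
ci = (mu - me, mu + me)
print(round(t_stat, 2), tuple(round(v, 2) for v in ci))
```

This gives a t-statistic of about 8.02 and a 95% confidence interval of roughly (5.91, 10.02) seconds for the extra time the incongruent task takes.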
Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
from cs231n.features import color_histogram_hsv, hog_feature def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask...
assignment1/features.ipynb
zlpure/CS231n
mit
Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of...
from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract...
assignment1/features.ipynb
zlpure/CS231n
mit
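The extract_features pattern, concatenating the output of several feature functions into one vector per image, can be sketched with toy stand-in features (the real notebook uses hog_feature and color_histogram_hsv, which are not reproduced here):

```python
import numpy as np

def extract_features(imgs, feature_fns):
    """Concatenate each feature function's output into one vector per image."""
    return np.vstack([np.concatenate([fn(img) for fn in feature_fns])
                      for img in imgs])

def mean_color(img):
    # Stand-in feature 1: mean value per channel, shape (3,)
    return img.reshape(-1, img.shape[-1]).mean(axis=0)

def first_channel_hist(img, nbin=10):
    # Stand-in feature 2: normalized histogram of channel 0, shape (nbin,)
    h, _ = np.histogram(img[..., 0], bins=nbin, range=(0.0, 1.0))
    return h / h.sum()

rng = np.random.default_rng(0)
imgs = rng.random((4, 8, 8, 3))  # 4 tiny synthetic "images"
feats = extract_features(imgs, [mean_color, first_channel_hist])
print(feats.shape)  # 4 images x (3 + 10) features
```

The HOG + color-histogram setup in the notebook follows the same shape: one row per image, columns from the concatenated feature functions.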
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
# Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [1e5, 1e6, 1e7] results = {} best_val = -1 best_svm = None pass ###################################################...
assignment1/features.ipynb
zlpure/CS231n
mit
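The shape of the tuning loop the TODO asks for looks like this; train_and_score is a hypothetical stand-in for "train a LinearSVM at these hyperparameters and return its validation accuracy":

```python
import numpy as np

def train_and_score(lr, reg):
    # Hypothetical scorer peaking at lr=1e-8, reg=1e6, just so the loop runs
    return 1.0 / (1.0 + abs(np.log10(lr) + 8) + abs(np.log10(reg) - 6))

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val, best_params = -1.0, None
for lr in learning_rates:
    for reg in regularization_strengths:
        val_acc = train_and_score(lr, reg)
        results[(lr, reg)] = val_acc
        if val_acc > best_val:
            best_val, best_params = val_acc, (lr, reg)
print(best_params, round(best_val, 3))
```

In the assignment, the body of the inner loop would train a LinearSVM on X_train_feats and evaluate it on X_val_feats, keeping the best model in best_svm.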
Inline question 1: Describe the misclassification results that you see. Do they make sense?
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have ...
print X_train_feats.shape from cs231n.classifiers.neural_net import TwoLayerNet input_dim = X_train_feats.shape[1] hidden_dim = 500 num_classes = 10 net = TwoLayerNet(input_dim, hidden_dim, num_classes) best_net = None ################################################################################ # TODO: Train a ...
assignment1/features.ipynb
zlpure/CS231n
mit
By default, autofig uses the z dimension just to assign z-order (so that positive z appears "on top")
autofig.reset()
autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z')
mplfig = autofig.draw()
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
To instead plot using a projected 3d axes, simply pass projection='3d'
autofig.reset()
autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z', projection='3d')
mplfig = autofig.draw()
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
If the projection is set to 3d, you can also set the elevation ('elev') and azimuth ('azim') of the viewing angle. These are provided in degrees and can be either a float (fixed) or a list (changes as a function of the current value of i).
autofig.reset()
autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z',
             projection='3d', elev=0, azim=0)
mplfig = autofig.draw()
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
When provided as an array, the set viewing angle is determined as follows:
- if no i is passed, the median values of 'elev' and 'azim' are used
- if i is passed, then linear interpolation is used across the i dimension of all calls attached to that axes

Therefore, passing an array (or list or tuple) with two items will s...
autofig.reset()
autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z',
             projection='3d', elev=0, azim=[0, 180])
mplfig = autofig.draw(i=3)
anim = autofig.animate(i=t, save='3d_azim_2.gif',
                       save_kwargs={'writer': 'imagemagick'})
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
We can then achieve an "accelerating" rotation by passing finer detail on the azimuth as a function of 'i'.
autofig.reset()
autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z',
             projection='3d', elev=0, azim=[0, 20, 30, 50, 150, 180])
anim = autofig.animate(i=t, save='3d_azim_6.gif',
                       save_kwargs={'writer': 'imagemagick'})
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
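A rough sketch of the keyframe interpolation described above, with assumed names (autofig's internal implementation may differ): the azim values are spread evenly across the range of i, and the current i is mapped onto them by linear interpolation.

```python
import numpy as np

i = np.linspace(0, 10, 100)            # the i dimension of the call
azim_keys = [0, 20, 30, 50, 150, 180]  # azimuth keyframes spread across i

def azim_at(ival):
    # Keyframes sit at evenly spaced positions along i; interpolate between them
    key_pos = np.linspace(i.min(), i.max(), len(azim_keys))
    return float(np.interp(ival, key_pos, azim_keys))

print(azim_at(0.0), azim_at(5.0), azim_at(10.0))
```

Packing the keyframes closer together near one end of i is what produces the "accelerating" rotation effect.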
Place your images in a folder such as dirname = '/Users/Someone/Desktop/ImagesFromTheInternet'. We'll then use the os package to load them and crop/resize them to a standard size of 100 x 100 pixels. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# You need to find 100 images from the web/create them yourself # or find a dataset that interests you (e.g. I used celeb faces # in the course lecture...) # then store them all in a single directory. # With all the images in a single directory, you can then # perform the following steps to create a 4-d array of: # N x...
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
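A minimal sketch of the crop/resize pipeline described above, using synthetic arrays instead of files and a naive nearest-neighbor resize (a real notebook would read files with os.listdir and resize with PIL or similar):

```python
import numpy as np

def crop_square(img):
    # Center-crop to the largest square that fits
    s = min(img.shape[:2])
    r0 = (img.shape[0] - s) // 2
    c0 = (img.shape[1] - s) // 2
    return img[r0:r0 + s, c0:c0 + s]

def resize_naive(img, size=100):
    # Nearest-neighbor resize of a square image via index sampling
    idx = np.linspace(0, img.shape[0] - 1, size).astype(int)
    return img[idx][:, idx]

rng = np.random.default_rng(0)
raw = [rng.random((int(rng.integers(120, 200)), int(rng.integers(120, 200)), 3))
       for _ in range(5)]  # stand-ins for variously sized loaded images
imgs = np.stack([resize_naive(crop_square(im)) for im in raw])
print(imgs.shape)  # N x H x W x C
```

Stacking the per-image results gives the N x 100 x 100 x 3 array the later parts of the assignment expect.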
<a name="part-two---compute-the-mean"></a> Part Two - Compute the Mean <a name="instructions-1"></a> Instructions First use Tensorflow to define a session. Then use Tensorflow to create an operation which takes your 4-d array and calculates the mean color image (100 x 100 x 3) using the function tf.reduce_mean. Have ...
# First create a tensorflow session sess = ... # Now create an operation that will calculate the mean of your images mean_img_op = ... # And then run that operation using your session mean_img = sess.run(mean_img_op) # Then plot the resulting mean image: # Make sure the mean image is the right size! assert(mean_img....
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
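The same reduction can be checked in NumPy on stand-in data before wiring it into a TensorFlow session: tf.reduce_mean over axis 0 of an N x H x W x C array is just the mean over the batch axis.

```python
import numpy as np

rng = np.random.default_rng(0)
imgs = rng.random((100, 100, 100, 3))  # stand-in for the loaded dataset
mean_img = imgs.mean(axis=0)           # NumPy analogue of tf.reduce_mean(imgs, 0)
print(mean_img.shape)
```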
Once you have seen the mean image of your dataset, how does it relate to your own expectations of the dataset? Did you expect something different? Was there something more "regular" or "predictable" about your dataset that the mean image did or did not reveal? If your mean image looks a lot like something recognizab...
# Create a tensorflow operation to give you the standard deviation # First compute the difference of every image with a # 4 dimensional mean image shaped 1 x H x W x C mean_img_4d = ... subtraction = imgs - mean_img_4d # Now compute the standard deviation by calculating the # square root of the expected squared diff...
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
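The two steps spelled out in the comments above, subtracting a 1 x H x W x C mean image and taking the square root of the mean squared difference, look like this in NumPy on stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
imgs = rng.random((100, 100, 100, 3))            # stand-in dataset
mean_img_4d = imgs.mean(axis=0, keepdims=True)   # 1 x H x W x C
subtraction = imgs - mean_img_4d
std_img = np.sqrt(np.mean(subtraction ** 2, axis=0))
print(std_img.shape)
```

This is exactly the per-pixel standard deviation, so it agrees with imgs.std(axis=0).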
Once you have plotted your dataset's standard deviation per pixel, what does it reveal about your dataset? Like with the mean image, you should consider what is predictable and not predictable about this image. <a name="part-four---normalize-the-dataset"></a> Part Four - Normalize the Dataset <a name="instructions-3">...
norm_imgs_op = ... norm_imgs = sess.run(norm_imgs_op) print(np.min(norm_imgs), np.max(norm_imgs)) print(imgs.dtype) # Then plot the resulting normalized dataset montage: # Make sure we have a 100 x 100 x 100 x 3 dimension array assert(norm_imgs.shape == (100, 100, 100, 3)) plt.figure(figsize=(10, 10)) plt.imshow(util...
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
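The normalization itself, sketched in NumPy on stand-in data: subtract the mean image and divide by the standard-deviation image, per pixel and per channel.

```python
import numpy as np

rng = np.random.default_rng(0)
imgs = rng.random((100, 100, 100, 3))  # stand-in dataset
norm_imgs = (imgs - imgs.mean(axis=0)) / imgs.std(axis=0)
print(norm_imgs.shape)
```

After this step each pixel/channel position has zero mean across the dataset, which is why the printed min/max in the notebook cell are no longer confined to [0, 1].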