CelebA The CelebFaces Attributes (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first few examples by changing show_n_images.
show_n_images = 25

""" DON'T MODIFY ANYTHING IN THIS CELL """
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
mnist_images.shape
udacity-dl/GAN/dlnd_face_generation.ipynb
arasdar/DL
unlicense
Preprocess the Data Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resize...
""" DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__) print('TensorFlow V...
Input Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Real input images placeholder with rank 4 using image_width, image_height, and image_channels. - Z input placeholder with rank 2 using z_dim. - Learning rate placeholder with rank ...
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width --> 28
    :param image_height: The input image height --> 28
    :param image_channels: The number of image channels --> 3 RGB
    :pa...
Discriminator Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (te...
def conv2d_xavier(inputs, filters, kernel_size, strides, padding):  #, trainable, reuse
    out_conv = tf.layers.conv2d(inputs, filters, kernel_size, strides, padding,
                                data_format='channels_last',
                                #strides=(1, 1), padding='valid', dilation_rate=(1, 1), a...
Generator Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO:...
Loss Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented: - discriminator(images, reuse=False) - generator(z, out_channel_dim, is_train=True)
import numpy as np

def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discrimin...
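For intuition (separate from the notebook's TensorFlow code), the sigmoid-cross-entropy losses that model_loss is meant to compute can be sketched in plain NumPy. Here stable_bce is a hypothetical helper mirroring the formula behind tf.nn.sigmoid_cross_entropy_with_logits:

import numpy as np

def stable_bce(logits, labels):
    # numerically stable sigmoid cross-entropy:
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

def gan_losses(d_logits_real, d_logits_fake):
    # discriminator: push real logits toward label 1, fake logits toward label 0
    d_loss = np.mean(stable_bce(d_logits_real, 1.0)) + np.mean(stable_bce(d_logits_fake, 0.0))
    # generator: wants the fakes classified as real (label 1)
    g_loss = np.mean(stable_bce(d_logits_fake, 1.0))
    return d_loss, g_loss

When the discriminator confidently separates real from fake (large positive real logits, large negative fake logits), d_loss is near zero while g_loss is large, which is the dynamic the training loop below tries to balance.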
Optimization Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generato...
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :ret...
Train Implement train to build and train the GANs. Use the following functions you implemented: - model_inputs(image_width, image_height, image_channels, z_dim) - model_loss(input_real, input_z, out_channel_dim) - model_opt(d_loss, g_loss, learning_rate, beta1) Use the show_generator_output to show generator output whi...
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay ra...
Setting up and fine-tuning the hyperparameters for both datasets
# Hyperparameters for GAN training, validation, and testing on both datasets
batch_size = 128
z_dim = 100
learning_rate = 0.0002  # 2/128 = 1/64
beta1 = 0.5

# Hyperparameters recommended in the DCGAN SVHN implementation:
# real_size = (32,32,3) -> (28, 28, 3) in this case for both mnist and celebA datasets
# z_size = 100
# learning_rate = 0.0...
MNIST Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ epochs = 2 mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg'))) with tf.Graph().as_default(): train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches, mnist_dataset.shape, mnist_dat...
CelebA Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ epochs = 1 celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))) with tf.Graph().as_default(): train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches, celeba_dataset.s...
JSON format · Basic MySQL database operations · Command-line usage
# Connect to the database (-u: username; -p: prompt for a password)
mysql -u root -p

# List the databases
show databases;

# Select a database
use database_name;

# List the tables in the current database
show tables;

# Show a table's structure
desc table_name;

# Show the rows in a table
select * from table_name;

# Show rows, limiting the number returned
select * from table_name limit 10;
0809下午python第三课课件.ipynb
superliaoyong/plist-forsource
apache-2.0
Database management tool: Sequel Pro (http://www.sequelpro.com/). How MySQL differs from Excel:

| Name | Gender | Age | Class | Exam | Chinese | Math | English | Physics | Chemistry | Biology |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 高海 | M | 18 | Senior 3, Class 1 | First mock exam | 90 | 126 | 119 | 75 | 59 | 89 |
| 高海 | M | 18 | Senior 3, Class 1 | Second mock exam | 80 | 120 | 123 | 85 | 78 | 87 |
| 秦佳艺...
3 converts to binary as 11 (integer part). 0.4 converts to binary as 0.5*0 + 0.25*1 + 0.125*1 + ... (digits: 0 1 1 1 1 0 1)
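The doubling method behind this fractional conversion can be sketched in Python; frac_to_binary is an illustrative helper, not part of the lesson's code:

# The integer part of 3.4 is 3, which is 0b11 in binary.

def frac_to_binary(x, digits=8):
    """Doubling method: each doubling's integer part is the next binary digit."""
    bits = []
    for _ in range(digits):
        x *= 2
        bit = int(x)
        bits.append(bit)
        x -= bit
    return bits

print(frac_to_binary(0.4))  # 0.4 ~ 0.01100110... in binary (the pattern 0110 repeats)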
MySQL data types: http://www.runoob.com/mysql/mysql-data-types.html. Inserting data
insert into `class`(`id`, `name`) values(1, '高一三班');
Updating data
update `class` set `name` = '高一五班' where `name` = '高一三班';
Deleting data
delete from `class` where `id` = 6;
Using Python to operate on the database. Installing third-party Python packages: 1. pip, e.g. pip install pymysql; 2. conda, e.g. conda install pymysql
import MySQLdb

DATABASE = {
    'host': '127.0.0.1',  # for a remote database, use the remote server's IP address
    'database': 'Examination',
    'user': 'root',
    'password': 'wangwei',
    'charset': 'utf8mb4'
}

db = MySQLdb.connect(host='localhost', user='root', password='wangwei', db='Examination')
# equivalent to db = MySQLdb.connect('localhost', 'root...
Cursors
cursor = db.cursor()
Queries
sql = "select * from student where id <= 20 limit 4" cursor.execute(sql) results = cursor.fetchall() for row in results: print(row)
Insert operation
sql = "insert into `class`(`name`) values('高一五班');" cursor = db.cursor() cursor.execute(sql) cursor.execute(sql) db.commit()
Delete operation
sql = "delete from `class` where `name`='高一五班'" cursor = db.cursor() cursor.execute(sql) db.commit()
Update operation
sql = "update `class` set `name`='高一十四班' where `id`=4;" cursor = db.cursor() cursor.execute(sql) db.commit()
Catching exceptions
a = 10
b = a + 'hello'   # raises an uncaught TypeError

try:
    a = 10
    b = a + 'hello'
except TypeError as e:
    print(e)
Open issue: the database rollback failed
try: sql = "insert into `class`(`name`) values('高一十六班')" cursor = db.cursor() cursor.execute(sql) error = 10 + 'sdfsdf' db.commit() except Exception as e: print(e) db.rollback()
Web scraping. Python libraries: 1. requests, to fetch page content; 2. BeautifulSoup, to parse it. Installation: pip install requests; pip install bs4
import time
import MySQLdb
import requests
from bs4 import BeautifulSoup

# Database configuration; everyone's setup differs, so fill in your own values
DATABASE = {
    'host': '127.0.0.1',  # for a remote database, use the remote server's IP address
    'database': '',
    'user': '',
    'password': '',
    'charset': 'utf8mb4'
}

# Fetch the page at url and return a soup object
def get_page(url):
    responce = reques...
Performance data from a web server. The data for this exercise were collected from a web server hosting a website. The observations are per-minute averages of the variables: - Duracao_media_ms: average processing time of an HTTP request (in milliseconds); - Perc_medio_CPU: average percent occupancy of the C...
df = pd.read_csv('servidor.csv')
df.head()
df.info()
df.describe()

results = smf.ols('Duracao_media_ms ~ Perc_medio_CPU + Load_avg_minute + Requests_média', data=df).fit()
results.summary()

X = df.drop('Duracao_media_ms', axis=1)
Xe = sm.add_constant(X, prepend=True)
vif = [variance_inflation_factor(Xe.values, i) for i...
book/capt10/server_load.ipynb
cleuton/datascience
apache-2.0
The paper suggests two kernels for modelling the covariance of the Kepler data (Eqs. 55 & 56). The authors fit Eq. 56; here we are going to fit Eq. 55. We can do this using a combination of kernels from the george library. Exponential Squared Kernel: $$ k_1(x_i,x_j)=h_1 \exp(-\frac{(x_i-x_j)...
# h = 1.0; sigma = 1.0; Gamma = 2.0/1.0^2; P = 3.8
k1 = 1.0**2 * kernels.ExpSquaredKernel(1.0**2) * kernels.ExpSine2Kernel(2.0 / 1.0**2, 1.0)
k2 = kernels.WhiteKernel(0.01)
kernel = k1 + k2

# first we feed our combined kernel to the George library:
gp = george.GP(kernel, mean=0.0)
# then ...
CDT-KickOff/TUTORIAL/KeplerLightCurve.ipynb
as595/AllOfYourBases
gpl-3.0
Now we have the tools to solve the email problem.
# Original character string
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'

# Remove <, >, and " from string and overwrite and print the result

# Create a new variable called string_formatted with the commas replaced by the new li...
winter2017/econ129/python/Econ129_Class_03.ipynb
letsgoexploring/teaching
mit
A related problem might be to extract only the email addresses from the original string. To do this, we can use the replace() method to remove the '<', '>', and ',' characters. Then we use the split() method to break the string apart at the spaces. Then we loop over the resulting list of strings and take only the strings...
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'
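A sketch of the replace-then-split approach described above (the variable names other than string are illustrative):

# Original character string
string = '"Carl Friedrich Gauss" <approximatelynormal@email.com>, "Leonhard Euler" <e@email.com>, "Bernhard Riemann" <zeta@email.com>'

# Remove the angle brackets and commas
cleaned = string.replace('<', '').replace('>', '').replace(',', '')

# Split on spaces and keep only the tokens that contain an @
emails = [token for token in cleaned.split(' ') if '@' in token]
print(emails)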
Numpy NumPy is a powerful Python module for scientific computing. Among other things, NumPy defines an N-dimensional array object that is especially convenient to use for plotting functions and for simulating and storing time series data. NumPy also defines many useful mathematical functions like, for example, the sine...
# Create a variable called a1 equal to a numpy array containing the numbers 1 through 5

# Find the type of a1

# Find the shape of a1

# Use ndim to find the rank or number of dimensions of a1

# Create a variable called a2 equal to a 2-dimensional numpy array containing the numbers 1 through 4

# find the shape o...
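One possible solution to the prompts above (the variable names follow the comments):

import numpy as np

# a1: a numpy array containing the numbers 1 through 5
a1 = np.array([1, 2, 3, 4, 5])
print(type(a1))   # <class 'numpy.ndarray'>
print(a1.shape)   # (5,)
print(a1.ndim)    # rank 1

# a2: a 2-dimensional numpy array containing the numbers 1 through 4
a2 = np.array([[1, 2], [3, 4]])
print(a2.shape)   # (2, 2)
print(a2.ndim)    # rank 2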
Special functions for creating arrays Numpy has several built-in functions that can assist you in creating certain types of arrays: arange(), zeros(), and ones(). Of these, arange() is probably the most useful because it allows you to create an array of numbers by specifying the initial value in the array, the maximum ...
# Create a variable called b that is equal to a numpy array containing the numbers 1 through 5

# Create a variable called c that is equal to a numpy array containing the numbers 0 through 10
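A possible solution, plus a fractional step to show that arange() is not limited to integers:

import numpy as np

b = np.arange(1, 6)        # the numbers 1 through 5 (stop value is exclusive)
c = np.arange(0, 11)       # the numbers 0 through 10
d = np.arange(0, 1, 0.25)  # non-integer step: 0.0, 0.25, 0.5, 0.75
print(b, c, d)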
The zeros() and ones() functions take as an argument the desired shape of the array to be returned and fill that array with zeros or ones, respectively.
# Construct a 1x5 array of zeros

# Construct a 2x2 array of ones
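A possible solution; note that multi-dimensional shapes are passed as tuples:

import numpy as np

zero_row = np.zeros(5)      # 1x5 array of zeros
ones_2x2 = np.ones((2, 2))  # 2x2 array of ones
print(zero_row, ones_2x2)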
Math with NumPy arrays A nice aspect of NumPy arrays is that they are optimized for mathematical operations. The standard Python arithmetic operators +, -, *, /, and ** operate element-wise on NumPy arrays, as the following examples indicate.
# Define two 1-dimensional arrays
A = np.array([2,4,6])
B = np.array([3,2,1])
C = np.array([-1,3,2,-4])

# Multiply A by a constant

# Exponentiate A

# Add A and B together

# Exponentiate A with B

# Add A and C together
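Filled in, the cell above might look like the following; the last step deliberately triggers the shape-mismatch error discussed next:

import numpy as np

A = np.array([2, 4, 6])
B = np.array([3, 2, 1])
C = np.array([-1, 3, 2, -4])

doubled = 2 * A          # [4, 8, 12]
squared = A ** 2         # [4, 16, 36]
elementwise_sum = A + B  # [5, 6, 7]
powered = A ** B         # [8, 16, 6]

# Adding A and C raises a ValueError: shapes (3,) and (4,) don't match
try:
    A + C
except ValueError as e:
    shape_error = str(e)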
The error in the preceding example arises because addition is element-wise and A and C don't have the same shape.
# Compute the sine of the values in A
Iterating through Numpy arrays NumPy arrays are iterable objects just like lists, strings, tuples, and dictionaries which means that you can use for loops to iterate through the elements of them.
# Use a for loop with a NumPy array to print the numbers 0 through 4
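A minimal solution to the prompt above:

import numpy as np

# Iterating over a NumPy array works just like iterating over a list
collected = []
for number in np.arange(5):
    print(number)
    collected.append(int(number))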
Example: Basel problem One of my favorite math equations is: \begin{align} \sum_{n=1}^{\infty} \frac{1}{n^2} & = \frac{\pi^2}{6} \end{align} We can use an iteration through a NumPy array to approximate the lefthand-side and verify the validity of the expression.
# Set N equal to the number of terms to sum

# Initialize a variable called summation equal to 0

# loop over the numbers 1 through N

# Print the approximation and the exact solution
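One way to fill in the cell above (N = 100000 is an arbitrary choice; the partial sum's error shrinks roughly like 1/N):

import numpy as np

N = 100000       # number of terms to sum
summation = 0.0  # running total

for n in np.arange(1, N + 1):
    summation += 1 / n**2

exact = np.pi**2 / 6
print(summation, exact)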
Import neuroimaging data using nilearn. We recover a few MRI datapoints...
n_subjects = 4
dataset_files = datasets.fetch_oasis_vbm(n_subjects=n_subjects)
gm_imgs = np.array(dataset_files.gray_matter_maps)
docs/notebooks/Sinkhorn_Barycenters.ipynb
google-research/ott
apache-2.0
... and plot their gray matter densities.
for i in range(n_subjects):
    plotting.plot_epi(gm_imgs[i])
    plt.show()
Represent data as histograms We normalize those gray matter densities so that they sum to 1, and check their size.
a = jnp.array(get_data(gm_imgs)).transpose((3, 0, 1, 2))
grid_size = a.shape[1:4]
a = a.reshape((n_subjects, -1)) + 1e-2
a = a / np.sum(a, axis=1)[:, np.newaxis]
print('Grid size: ', grid_size)
Instantiate a grid geometry to compute $W_p^p$ We instantiate the grid geometry corresponding to these data points, living in a space of dimension $91 \times 109 \times 91$, for a total dimension $d=902629$. Rather than stretch these voxel histograms and put them in the $[0,1]^3$ hypercube, we use a simpler resca...
@jax.tree_util.register_pytree_node_class
class Custom(ott.geometry.costs.CostFn):
  """Custom function."""
  def pairwise(self, x, y):
    return jnp.sum(jnp.abs(x - y) ** 1.1)

# Instantiate Grid Geometry of suitable size, epsilon parameter and cost.
g_grid = grid.Grid(x=[jnp.arange(0, n)/100 for n in grid_size], ...
Compute their regularized $W_p^p$ iso-barycenter A small trick: if we jit and run the discrete_barycenter function with a small 𝜀 directly, it takes ages, because it is both solving a hard problem and jitting the function at the same time. It is slightly more efficient to jit it with an easy problem first, and then run the pro...
%%time
g_grid._epsilon.target = 1
barycenter = discrete_barycenter.discrete_barycenter(g_grid, a)

%%time
g_grid._epsilon.target = 1e-4
barycenter = discrete_barycenter.discrete_barycenter(g_grid, a)
Plot decrease of marginal error The computation of the barycenter of $N$ histograms involves [3] the resolution of $N$ OT problems pointing towards the same, but unknown, marginal. The convergence of that algorithm can be monitored by evaluating the distance between the marginals of these different transport matrices w...
plt.figure(figsize=(8,5))
errors = barycenter.errors[:-1]
plt.plot(np.arange(errors.size) * 10, errors, lw=3)
plt.title('Marginal error decrease in barycenter computation')
plt.yscale("log")
plt.xlabel('Iterations')
plt.ylabel('Marginal Error')
plt.show()
Plot the barycenter itself
def data_to_nii(x):
  return nilearn.image.new_img_like(
      gm_imgs[0], data=np.array(x.reshape(grid_size)))

plotting.plot_epi(data_to_nii(barycenter.histogram))
plt.show()
Euclidean barycenter, for reference
plotting.plot_epi(data_to_nii(np.mean(a,axis=0)))
Next we'll set up our data sources and acquire the data via OPeNDAP using xarray.
API_key = open('APIKEY').readlines()[0].strip()  # '<YOUR API KEY HERE>'
dataset_key = 'noaa_ndbc_swden_stations'
variables = 'spectral_wave_density,mean_wave_dir,principal_wave_dir,wave_spectrum_r1,wave_spectrum_r2'

# OpenDAP URLs for each product
now = datetime.datetime.now()
ndbc_rt_url = 'http://dods.ndbc.noaa.gov/thr...
api-examples/ndbc-spectral-wave-density-data-validation.ipynb
planet-os/notebooks
mit
Product Inspection: Planet OS / NDBC Realtime / NDBC 2014 Historical For each of our three data products, we'll create an associated Dataframe for analysis.
# First, the Planet OS data which is acquired from the NDBC realtime station file.
df_planetos = ds_planetos_hour.to_dataframe().drop(['context_time_latitude_longitude_frequency','mx_dataset','mx_creator_institution'], axis=1)
df_planetos.head(8)

# Second, the NDBC realtime station data.
df_ndbc_rt = ds_ndbc_rt_hour.t...
Based on the sample outputs above, it appears that the Planet OS data matches the NDBC realtime file that it is acquired from. We will further verify this below by performing an equality test against the two Dataframes. We can also see that the historical data is indeed different, with frequency bins that are neatly ro...
df_planetos.describe()
df_ndbc_rt.describe()
df_ndbc.describe()
Confirm Planet OS Equality to NDBC Realtime To confirm that the Planet OS and NDBC realtime Dataframes are indeed equal, we'll perform a diff. Note that NaN != NaN evaluates as True, so NaN values will be raised as inconsistent across the dataframes. This could be resolved using fillna() and an arbitrary fill value suc...
# function below requires identical index structure
def df_diff(df1, df2):
    ne_stacked = (df1 != df2).stack()
    changed = ne_stacked[ne_stacked]
    difference_locations = np.where(df1 != df2)
    changed_from = df1.values[difference_locations]
    changed_to = df2.values[difference_locations]
    return pd.DataFr...
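A complete version of this diff helper might look like the following sketch (the 'from'/'to' column names and the demo frames are assumptions, not taken from the notebook). Note how the NaN pair is flagged, since NaN != NaN:

import numpy as np
import pandas as pd

def df_diff(df1, df2):
    """Return a (from, to) DataFrame of cells where df1 and df2 differ.
    Requires identical index/column structure."""
    ne_stacked = (df1 != df2).stack()
    changed = ne_stacked[ne_stacked]
    difference_locations = np.where(df1 != df2)
    changed_from = df1.values[difference_locations]
    changed_to = df2.values[difference_locations]
    return pd.DataFrame({'from': changed_from, 'to': changed_to}, index=changed.index)

df1 = pd.DataFrame({'a': [1, 2], 'b': [3.0, np.nan]})
df2 = pd.DataFrame({'a': [1, 5], 'b': [3.0, np.nan]})
diff = df_diff(df1, df2)
print(diff)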
The df_diff results are as expected: only NaN values differ between the two datasets. Spectral Wave Density Plot Let's plot the spectral wave density for all three datasets across the frequency coverage to see how they differ.
plt.figure(figsize=(20,10))
ds_ndbc_rt_hour.spectral_wave_density.plot(label='NDBC Realtime')
ds_ndbc_hour.spectral_wave_density.plot(label='NDBC ' + str(now.year))
ds_planetos_hour.spectral_wave_density.plot(label='Planet OS')
plt.legend()
plt.show()
There is a very slight discrepancy between the 2014 NDBC product and the Planet OS product, but no difference between the realtime NDBC product and Planet OS product. Wave Spectrum Plots
vars = ['wave_spectrum_r1', 'wave_spectrum_r2']
df_planetos.loc[:, vars].plot(label="Planet OS", figsize=(18,6))
df_ndbc_rt.loc[:, vars].plot(label="NDBC Realtime", figsize=(18,6))
df_ndbc.loc[:, vars].plot(label="NDBC " + str(now.year), figsize=(18,6))
plt.show()
Wave Direction Plots
vars = ['principal_wave_dir', 'mean_wave_dir']
df_planetos.loc[:, vars].plot(label="Planet OS", figsize=(18,6))
df_ndbc_rt.loc[:, vars].plot(label="NDBC Realtime", figsize=(18,6))
df_ndbc.loc[:, vars].plot(label="NDBC " + str(now.year), figsize=(18,6))
plt.show()
Strategy 2: Implementation of the CM sketch
import sys
import random
import numpy as np
import heapq
import json
import time

BIG_PRIME = 9223372036854775783

def random_parameter():
    return random.randrange(0, BIG_PRIME - 1)

class Sketch:
    def __init__(self, delta, epsilon, k):
        """ Setup a new count-min sketch with parameters delta, epsi...
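Since the Sketch class is cut off above, here is an independent minimal count-min sketch for reference (MiniSketch is illustrative, not the notebook's class; it is parameterized directly by width w and depth d rather than by delta and epsilon):

import random

BIG_PRIME = 9223372036854775783

class MiniSketch:
    """Count-min sketch: d hash rows of width w; estimates never undercount."""
    def __init__(self, w, d, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.params = [(rng.randrange(1, BIG_PRIME), rng.randrange(0, BIG_PRIME))
                       for _ in range(d)]
        self.table = [[0] * w for _ in range(d)]

    def _bucket(self, x, a, b):
        # universal-style hash of x into one of w buckets
        return ((a * hash(x) + b) % BIG_PRIME) % self.w

    def update(self, x, count=1):
        for row, (a, b) in zip(self.table, self.params):
            row[self._bucket(x, a, b)] += count

    def estimate(self, x):
        # the minimum over rows upper-bounds collisions, so it's the tightest estimate
        return min(row[self._bucket(x, a, b)]
                   for row, (a, b) in zip(self.table, self.params))

The estimate is always at least the true count, and at most the true count plus whatever colliding mass lands in the same buckets, which is what makes a too-coarse (small w, small d) sketch give wrong answers.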
count-min-101/CountMinSketch.ipynb
DrSkippy/Data-Science-45min-Intros
unlicense
Is it possible to make the sketch so coarse that its estimates are wrong even for this data set?
s = Sketch(0.9, 0.9, 10)
f = open('CM_small.txt')
results_coarse_CM = CM_top_users(f, s)
print("\n".join(results_coarse_CM))
Yes! (if you try enough) Why? The 'w' parameter goes like $\lceil e/\epsilon \rceil$, which is always >= 3. The 'd' parameter goes like $\lceil \ln(1/\delta) \rceil$, which is always >= 1. So you're dealing with a table of minimum size 3 x 1. With 10 records, it's possible that all 4 users map their count...
! wc -l CM_large.txt
! cat CM_large.txt | sort | uniq | wc -l
! cat CM_large.txt | sort | uniq -c | sort -rn

f = open('CM_large.txt')
%time results_exact = exact_top_users(f)
print("\n".join(results_exact))

# this could take a few minutes
f = open('CM_large.txt')
s = Sketch(10**-4, 10**-4, 10)
%time results_CM = CM_t...
For this precision and dataset size, the CM algo takes much longer than the exact solution. In fact, the crossover point at which the CM sketch can achieve reasonable accuracy in the same time as the exact solution is a very large number of entries.
for item in zip(results_exact, results_CM):
    print(item)

# the CM sketch gets the top entry (an outlier) correct but doesn't do well
# estimating the order of the more degenerate counts

# let's decrease the precision via both the epsilon and delta parameters,
# and see whether it still gets the "heavy-hitter" cor...
Contents Add Video IPython.display.YouTubeVideo lets you play YouTube videos directly in the notebook. Library support is available to play Vimeo and local videos as well.
from IPython.display import YouTubeVideo
YouTubeVideo('ooOLl4_H-IE')
docs/source/getting_started/jupyter_notebooks_advanced_features.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Video Link with image display <a href="https://www.youtube.com/watch?v=ooOLl4_H-IE"> <img src="http://img.youtube.com/vi/ooOLl4_H-IE/0.jpg" width="400" height="400" align="left"></a> Contents Add webpages as Interactive Frames Embed an entire page from another site in an iframe; for example this is the PYNQ documentati...
from IPython.display import IFrame
IFrame('https://pynq.readthedocs.io/en/latest/getting_started.html', width='100%', height=500)
Set up pipeline This class creates a simple pipeline that writes all found items to a JSON file, with one JSON element per line.
class JsonWriterPipeline(object):

    def open_spider(self, spider):
        self.file = open('quoteresult.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
Scrapy_nb/Quotes base case.ipynb
CLEpy/CLEpy-MotM
mit
Define Spider The QuotesSpider class defines from which URLs to start crawling and which values to retrieve. I set the logging level of the crawler to warning, otherwise the notebook is overloaded with DEBUG messages about the retrieved data.
class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]
    custom_settings = {
        'LOG_LEVEL': logging.WARNING,
        'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1},  # Used for pipeline 1
        ...
Start the crawler
process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(QuotesSpider)
process.start()
The majority of tagged questions have one or two tags with four tags max. I'm curious about the relationships and connections between different tags so for now we'll limit our scope to only looking at questions with 4 tags and only look at the 1000 most popular tags.
tag_counts = tags.groupby("Id")["Tag"].count()
many_tags = tag_counts[tag_counts > 3].index
popular_tags = tags.Tag.value_counts().iloc[:1000].index

tags = tags[tags["Id"].isin(many_tags)]      # getting questions with 4 tags
tags = tags[tags["Tag"].isin(popular_tags)]  # getting only top 1000 tags
tags.shape
tags.head(20)
other notebook/Tags statistics.ipynb
sjqgithub/rquestions
mit
Creating a Bag of Tags: Now I am going to create a bag of tags and do some kind of dimensionality reduction on it. To do this I'll basically have to spread the tags and create one column for each tag. Using pd.pivot works but it's very memory-intensive. Instead I'll take advantage of the sparsity and use scipy sparse m...
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
from sklearn.pipeline import make_pipeline

# let's integer encode the id's and tags:
tag_encoder = LabelEncoder()
question_encoder = LabelEncoder()
tags["Tag"] = tag_encoder.fit_transform(tag...
X = csr_matrix((np.ones(tags.shape[0]), (tags.Id, tags.Tag)))
X.shape  # one row for each question, one column for each tag
tags.shape
Dimensionality Reduction using SVD: Now we will project our bag-of-tags matrix into a 3-dimensional subspace that captures as much of the variance as possible. Hopefully this will help us better understand the connections between the tags.
model = TruncatedSVD(n_components=3)
model.fit(X)
two_components = pd.DataFrame(model.transform(X), columns=["one", "two", "three"])
two_components.plot(x="one", y="two", kind="scatter", title="2D PCA projection components 1 and 2")
two_components.plot(x="t...
64k Particle LJ System This benchmark is designed to closely follow the HOOMD-BLUE LJ benchmark here.
!nvidia-smi
notebooks/lj_benchmark.ipynb
google/jax-md
apache-2.0
Prepare the system
lattice_constant = 1.37820
N_rep = 40
box_size = N_rep * lattice_constant

# Using float32 for positions / velocities, but float64 for reductions.
dtype = np.float32

# Specify the format of the neighbor list.
# Options are Dense, Sparse, or OrderedSparse.
format = partition.OrderedSparse

displacement, shift = space....
Benchmark using fixed size neighbor list.
neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(
    displacement, box_size, r_cutoff=3.0, dr_threshold=1., ...
On an A100 this comes out to 22.4 s / loop which is 2.24 ms / step.
renderer.render(
    box_size,
    {'particles': renderer.Sphere(new_state.position)}
)
Define a function to identify outliers using Tukey's boxplot method. The method is very simple: define the interquartile range $IQR = Q_3 - Q_1$; then the whiskers are $(Q_1 - \beta \cdot IQR,\ Q_3 + \beta \cdot IQR)$. Points outside the whiskers are outliers. The "magic number" for $\beta$ is $1.5$. This method can be extended to a...
def IdentifyBoxplotWhiskers(myvector, beta=1.5, lowest_possible=None, highest_possible=None):
    pctls = np.percentile(myvector, q=(25, 75))
    if VERBOSE:
        print(pctls)
    iqr = pctls[1] - pctls[0]
    if VERBOSE:
        print(iqr)
    whiskers = [(pctls[0] - beta * iqr), (pctls[1] + beta * iqr)]
    if lowest_...
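The same fences can be computed with a compact NumPy sketch (tukey_whiskers is an illustrative helper, not the notebook's function); with one extreme value mixed into a small sample, only that value falls outside the fences:

import numpy as np

def tukey_whiskers(x, beta=1.5):
    """Return the (lower, upper) Tukey fences: Q1 - beta*IQR and Q3 + beta*IQR."""
    q1, q3 = np.percentile(x, (25, 75))
    iqr = q3 - q1
    return q1 - beta * iqr, q3 + beta * iqr

data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])
lower, upper = tukey_whiskers(data)
outliers = data[(data < lower) | (data > upper)]
print(lower, upper, outliers)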
reference/Outliers.ipynb
jbocharov-mids/W207-Machine-Learning
apache-2.0
Generate a bunch of random numbers and call the function defined above
np.random.seed(1234567890)
thedata = np.random.exponential(10, 1000)
outliers = np.random.exponential(30, 50)
mydata = np.hstack((thedata, outliers))
print(mydata.shape)

mydata_outliers = IdentifyOutlierIndices(mydata, 1.5, 0.0)
print("Found %d outliers" % (mydata_outliers.shape[1]))
plt.boxplot(x=mydata, sym='*', vert...
How will it work with a pair of linearly correlated variables?
X = np.arange(0, 100, 0.1)
#print(X.shape)
Y = 15 + 0.15*X
#print(Y.shape)
plt.scatter(X, Y)
plt.show()
A regression through these data is trivial and not interesting. Let's "Make Some Noise"
Yblur = np.random.exponential(10, X.shape[0])
Y_fuzzy = Y + Yblur

## Some numpy-required manipulations here
X_regr = X[:, np.newaxis]
Y_regr = Y_fuzzy[:, np.newaxis]
print(X_regr.__class__)
print(Y_regr.__class__)

## And now let's fit the LinearRegression
lr = LinearRegression()
lr.fit(X_regr, Y_regr)
print("LinearR...
reference/Outliers.ipynb
jbocharov-mids/W207-Machine-Learning
apache-2.0
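Because the regression cell above is cut off, here is a numpy-only sketch of the same idea under my own choices (a fixed seed, `np.polyfit` instead of sklearn's `LinearRegression`): fit a straight line by least squares, then compute $R^2 = 1 - SS_{res}/SS_{tot}$ by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(0, 100, 0.1)
# Same construction as the notebook: a line plus exponential noise
Y_fuzzy = 15 + 0.15 * X + rng.exponential(10, X.shape[0])

# Ordinary least squares via polyfit (degree 1 = straight line)
slope, intercept = np.polyfit(X, Y_fuzzy, 1)
pred = intercept + slope * X

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((Y_fuzzy - pred) ** 2)
ss_tot = np.sum((Y_fuzzy - Y_fuzzy.mean()) ** 2)
r_sq = 1 - ss_res / ss_tot
print(slope, intercept, r_sq)
```

The fitted slope stays close to the true 0.15, while the intercept absorbs the mean of the noise (about 10), and $R^2$ is low because the noise variance dominates the signal.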
Why is the $R^2$ so bad? Plot the noisy data with the regression line.
plt.plot(X_regr, pred, color='red', label="Model")
plt.scatter(X_regr, Y_regr, c='b', label="Observed", marker='*')
plt.legend(loc='best')
plt.show()

#######################################################################
## Graphically Analyze Residuals:
#########################################...
reference/Outliers.ipynb
jbocharov-mids/W207-Machine-Learning
apache-2.0
We see that: (1) the residuals are independent of X, which is good news: it means we've got the right $Y(X)$ relationship; (2) the residuals are not normally distributed, which is bad news: it means we've chosen the wrong model for prediction. Let's try to build a 95% confidence interval for the residuals. We know that for a ...
resid_3sd = 3*np.std(myResid)

## The lines corresponding to the 3-sigma confidence interval will be:
plus3SD = pred + resid_3sd
minus3SD = pred - resid_3sd
print("The 95-pct CI for residuals = +/- %.3f" % (resid_3sd))

## Now rebuild the scatter plot with the regression line, adding the confidence interval:
plt.plot(X_r...
reference/Outliers.ipynb
jbocharov-mids/W207-Machine-Learning
apache-2.0
This is a nonsensical result: we know that the data cannot be below zero: we built it to be above the $Y = 15 + 0.15*X$ line, and yet we cannot say with 95% confidence that it will not happen, especially at low values of X. We need to redefine the confidence interval if we are dealing with non-normally distributed dat...
myResidWhiskers = IdentifyBoxplotWhiskers(myResid)
print(myResidWhiskers)
loBound = pred + myResidWhiskers[0]
hiBound = pred + myResidWhiskers[1]
print("Outlier Boundaries on Residuals = ", myResidWhiskers)
print("The 95-pct CI for residuals = +/- %.3f" % (resid_3sd))

## Now rebuild the scatter plot with the regression...
reference/Outliers.ipynb
jbocharov-mids/W207-Machine-Learning
apache-2.0
Now looking at the outlier boundaries (the black lines in the plot above), we see that any value of $Y$ that happens to be negative will be an outlier. If we know that the lowest value of myResid is the lowest possible value, we can force the outlier boundary never to cross that line (BE VERY CAUTIOUS WHEN MAKING SUCH ...
## Check min(myResid) and set the low whisker to its value if it is < minresid:
minresid = min(myResid)
print(minresid)
if myResidWhiskers[0] < minresid:
    myResidWhiskers[0] = minresid

## Predict the low and high boundaries:
loBound = pred + myResidWhiskers[0]
hiBound = pred + myResidWhiskers[1]
print("Outlier ...
reference/Outliers.ipynb
jbocharov-mids/W207-Machine-Learning
apache-2.0
Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll cal...
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
print(trainX.shape)
print(trainY.shape)
tutorials/intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
liumengjun/cn-deep-learning
mit
Building the network

TFLearn lets you build the network by defining the layers in that network. For this example, you'll define:

- The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
- Hidden layers, which recognize patterns in data and connect the input to the ou...
# Define the neural network
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    #### Your code ####
    # Include the input layer, hidden layer(s), and set how you want to train the model
    net = tflearn.input_data([None, trainX.shape[1]])
    # Hi...
tutorials/intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
liumengjun/cn-deep-learning
mit
Import directives
import collections
python_collections_en.ipynb
jdhp-docs/python-notebooks
mit
Ordered dictionaries

See https://docs.python.org/3/library/collections.html#collections.OrderedDict
d = collections.OrderedDict()
d["2"] = 2
d["3"] = 3
d["1"] = 1
print(d)

print(type(d.keys()))
print(list(d.keys()))

print(type(d.values()))
print(list(d.values()))

for k, v in d.items():
    print(k, v)
python_collections_en.ipynb
jdhp-docs/python-notebooks
mit
There shouldn't be any output from that cell, but if you get any error messages, it's most likely because you don't have one or more of these modules installed on your system. Running pip3 install pandas matplotlib numpy seaborn bokeh from the command line should take care of that. If not, holler and I'll try to help y...
url = 'https://raw.githubusercontent.com/davidbjourno/finding-stories-in-data/master/data/leave-demographics.csv'

# Pass in the URL of the CSV file:
df = pd.read_csv(url)
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
See how easy that was? Now let's check that df is in fact a dataframe. Using the .head(n=[number]) method on any dataframe will return the first [number] rows of that dataframe. Let's take a look at the first ten:
df.head(n=10)
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Looks good! (FYI: .tail(n=[number]) will give you the last [number] rows.) By now, you may have noticed that some of the row headers in this CSV aren't particularly descriptive (var1, var2 etc.). This is the game: by the end of this tutorial, you should be able to identify the variables that correlated most strongly wi...
# Configure Matplotlib's pyplot method (plt) to plot at a size of 8x8 inches and
# a resolution of 72 dots per inch
plt.figure(
    figsize=(8, 8),
    dpi=72
)

# Plot the data as a scatter plot
g = plt.scatter(
    x=df['var1'],   # The values we want to plot along the x axis
    y=df['leave'],  # The values we want to ...
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Yikes, not much of a relationship there. Let's try a different variable:
plt.figure(
    figsize=(8, 8),
    dpi=72
)

g = plt.scatter(
    x=df['var2'],  # Plotting var2 along the x axis this time
    y=df['leave'],
    s=50,
    c='#0571b0',
    alpha=0.5
)
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Hmm, that distribution looks better—there's a stronger, negative correlation there—but it's still a little unclear what we're looking at. Let's add some context. We know from our provisional data-munging (that we didn't do) that many of the boroughs of London were among the strongest ‘remain’ areas in the country. We c...
df['is_london'] = np.where(df['region_name'] == 'London', True, False)

# Print all the rows in the dataframe in which is_london is equal to True
df[df['is_london'] == True]
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
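The `np.where` pattern above works on any array-like, not just dataframe columns; here it is sketched standalone (the `regions` array below is made-up illustration data, not the tutorial's dataframe):

```python
import numpy as np

# Hypothetical region labels standing in for df['region_name']
regions = np.array(['London', 'Wales', 'London', 'Scotland'])

# np.where(condition, value_if_true, value_if_false) vectorises the test
is_london = np.where(regions == 'London', True, False)
print(list(is_london))  # [True, False, True, False]

# The comparison alone already yields the same boolean array;
# np.where earns its keep when the two branch values aren't True/False
same = (regions == 'London') == is_london
print(same.all())
```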
Those names should look familiar. That's numpy's .where method coming in handy there to help us generate a new column of data based on the values of another column—in this case, region_name. At this point, we're going to abandon Matplotlib like merciless narcissists and turn our attention to the younger, hotter Seaborn...
# Set the chart background colour (completely unnecessary, I just don't like the
# default)
sns.set_style('darkgrid', { 'axes.facecolor': '#efefef' })

# Tell Seaborn that what we want from it is a FacetGrid, and assign this to the
# variable ‘fg’
fg = sns.FacetGrid(
    data=df,  # Use our dataframe as the input data ...
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Now we're cooking with gas! We can see a slight negative correlation in the distribution of the data points and we can see how London compares to all the other regions of the country. Whatever var2 is, we now know that the London boroughs generally have higher levels of it than most of the rest of the UK, and that it h...
# Plot the chart above with a different variable along the x axis.
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
What's more, faceting isn't limited to just highlighting specific data points. We can also pass FacetGrid a col (column) argument with the name of a column that we'd like to use to further segment our data. So let's create another True/False (Boolean) column to flag the areas with the largest populations—the ones with ...
df['is_largest'] = np.where(df['electorate'] >= 100000, True, False)

g = sns.FacetGrid(
    df,
    hue='is_london',
    col='is_largest',
    palette=['#0571b0', '#ca0020'],
    size=7
)
g.map(
    plt.scatter,
    'var2',
    'leave',
    alpha=0.5
)
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Now we're able to make the following statements based solely on a visual inspection of this facet grid:

- Most of the less populous areas (electorate < 100,000) voted ‘leave’
- Most of the less populous areas had var2 levels below 35. Only two—both London boroughs—had levels higher than 35
- There is a stronger correlation ...
# Just adding the first four variables, plus leave, to start with—you'll see why
columns = [
    'var1',
    'var2',
    'var3',
    'var4',
    'leave',
    'is_london'
]

g = sns.PairGrid(
    data=df[columns],
    hue='is_london',
    palette=['#0571b0', '#ca0020']
)
g.map_offdiag(plt.scatter);
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Try passing the remaining variables (var5-var9) to the pair grid. You should be able to see which of the variables in the dataset correlate most strongly with ‘leave’ vote percentage and whether the correlations are positive or negative.

4. Go into detail

Seaborn also provides a heatmap method that we can use to quickl...
plt.figure(
    figsize=(15, 15),
    dpi=72
)

columns = [  # ALL THE COLUMNS
    'var1',
    'var2',
    'var3',
    'var4',
    'var5',
    'var6',
    'var7',
    'var8',
    'var9',
    'leave'
]

# Calculate the standard correlation coefficient of each pair of columns
correlations = df[columns].corr(method='pearso...
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
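For intuition, the Pearson coefficient that `df.corr(method='pearson')` computes for each pair of columns can be reproduced with `np.corrcoef` on toy data (the values below are invented, not from the tutorial's dataset):

```python
import numpy as np

# Two toy columns: var2-like values and a 'leave' share that falls as var2 rises
var2 = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
leave = np.array([60.0, 55.0, 50.0, 45.0, 40.0])

# Pearson correlation matrix; the off-diagonal entry is the pairwise coefficient
r = np.corrcoef(var2, leave)[0, 1]
print(r)  # ~ -1.0 for this perfectly linear, decreasing relationship
```

Values near +1 or -1 indicate a strong linear relationship; values near 0 indicate none, which is exactly what the heatmap colours encode.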
By now, you should have a pretty good idea which variables are worth reporting as being significant demographic factors in the ‘leave’ vote. If you wanted to take your analysis even further, you could also report on whether London boroughs returned higher or lower ‘leave’ vote percentages than we would expect based on ...
columns = ['var2', 'leave']

g = sns.lmplot(
    data=df,
    x=columns[0],
    y=columns[1],
    hue='is_london',
    palette=['#0571b0', '#ca0020'],
    size=7,
    fit_reg=False,
)
sns.regplot(
    data=df,
    x=columns[0],
    y=columns[1],
    scatter=False,
    color='#0571b0',
    ax=g.axes[0, 0]
)
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Reading this plot, we're able to say that, all things being equal, most of the London boroughs have lower ‘leave’ vote percentages than we would expect based on their levels of var2 alone. This suggests—rightly—that variables other than var2 are in play in determining London's lower-than-expected levels of ‘leave’ voti...
output_notebook()
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Because we want this to be our output graphic, we're going to be much fussier about how it looks, so there's quite a bit of configuration involved here:
color_map = {False: '#0571b0', True: '#ca0020'}

# Instantiate our plot
p = figure(
    plot_width=600,
    plot_height=422,
    background_fill_color='#d3d3d3',
    title='Leave demographics'
)

# Add a circle renderer to the plot
p.circle(
    x=df['var2'],
    y=df['leave'],
    # Size the markers according to the s...
finding-stories-in-data.ipynb
davidbjourno/finding-stories-in-data
mit
Normalization

Q1. Apply l2_normalize to x.
_x = np.arange(1, 11)
epsilon = 1e-12
x = tf.convert_to_tensor(_x, tf.float32)
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
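For intuition, here is what an L2 normalization amounts to, sketched in plain numpy: divide by the L2 norm, with a small epsilon guarding against division by zero. This illustrates the math, not TensorFlow's implementation:

```python
import numpy as np

x = np.arange(1, 11, dtype=np.float64)
epsilon = 1e-12

# Divide by the L2 norm, guarded by epsilon against a zero vector
normalized = x / np.sqrt(np.maximum(np.sum(x ** 2), epsilon))
print(np.sum(normalized ** 2))  # ~1.0: the result has unit L2 norm
```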
Q2. Calculate the mean and variance of x based on the sufficient statistics.
_x = np.arange(1, 11)
x = tf.convert_to_tensor(_x, tf.float32)
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
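The "sufficient statistics" here are the count, the sum, and the sum of squares; the moments follow from them directly via $\mathrm{mean} = s/n$ and $\mathrm{var} = E[x^2] - (E[x])^2$. A numpy sketch of that arithmetic (an illustration of the math, not the TF ops):

```python
import numpy as np

x = np.arange(1, 11, dtype=np.float64)

# Sufficient statistics: count, sum, and sum of squares
n = x.size
s = x.sum()
ss = np.sum(x ** 2)

mean = s / n
variance = ss / n - mean ** 2   # E[x^2] - (E[x])^2
print(mean, variance)  # 5.5 8.25
```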
Q3. Calculate the mean and variance of x.
tf.reset_default_graph()
_x = np.arange(1, 11)
x = tf.convert_to_tensor(_x, tf.float32)
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
Q4. Calculate the mean and variance of x using unique_x and counts.
tf.reset_default_graph()
x = tf.constant([1, 1, 2, 2, 2, 3], tf.float32)

# From `x`
mean, variance = tf.nn.moments(x, [0])
with tf.Session() as sess:
    print(sess.run([mean, variance]))

# From unique elements and their counts
unique_x, _, counts = tf.unique_with_counts(x)
mean, variance = ...
with tf.Session() as s...
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
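The count-weighted arithmetic behind Q4 can be checked in plain numpy: moments computed from the unique elements and their multiplicities must agree with the plain moments of x. A sketch (this illustrates the math, it does not fill in the elided TF call):

```python
import numpy as np

x = np.array([1, 1, 2, 2, 2, 3], dtype=np.float64)

# Unique elements and their multiplicities
unique_x, counts = np.unique(x, return_counts=True)
n = counts.sum()

# Count-weighted mean and variance match the plain moments of x
mean = np.sum(unique_x * counts) / n
variance = np.sum(counts * (unique_x - mean) ** 2) / n
print(mean, variance)  # matches x.mean() and x.var()
```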
Q5. The code below is to implement the mnist classification task. Complete it by adding batch normalization.
# Load data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)

# build graph
class Graph:
    def __init__(self, is_training=False):
        # Inputs and labels
        self.x = tf.placeholder(tf.float32, shape=[None, 784])
        self.y = tf.pla...
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
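For reference, the training-time arithmetic of batch normalization is: normalize each feature by the batch mean and variance, then scale by a learned gamma and shift by a learned beta. A numpy sketch of just that arithmetic (running averages, the inference path, and the learned parameters' updates are omitted):

```python
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-8):
    """Normalize a batch of activations per feature, then scale and shift."""
    mu = h.mean(axis=0)
    var = h.var(axis=0)
    h_hat = (h - mu) / np.sqrt(var + eps)
    return gamma * h_hat + beta

rng = np.random.default_rng(0)
h = rng.normal(3.0, 2.0, size=(64, 5))  # a batch of pre-activations
out = batch_norm(h, gamma=np.ones(5), beta=np.zeros(5))
print(out.mean(axis=0), out.var(axis=0))  # ~0 and ~1 per feature
```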
Losses

Q6. Compute half the L2 norm of x without the sqrt.
tf.reset_default_graph()
x = tf.constant([1, 1, 2, 2, 2, 3], tf.float32)
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
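The quantity asked for is $\frac{1}{2}\sum_i x_i^2$, which is what `tf.nn.l2_loss` returns. In plain numpy, as a sanity check of the math:

```python
import numpy as np

x = np.array([1, 1, 2, 2, 2, 3], dtype=np.float64)

# Half the squared L2 norm -- no square root is taken
half_l2 = np.sum(x ** 2) / 2.0
print(half_l2)  # 11.5
```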
Classification

Q7. Compute softmax cross entropy between logits and labels. Note that their ranks are not the same.
tf.reset_default_graph()
logits = tf.random_normal(shape=[2, 5, 10])
labels = tf.convert_to_tensor(np.random.randint(0, 10, size=[2, 5]), tf.int32)

output = tf.nn....
with tf.Session() as sess:
    print(sess.run(output))
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
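When the labels are class indices (one rank lower than the logits), the cross entropy at each position is just the negative log-softmax probability of the true class. A numpy sketch of that computation (an illustration of the math, not the TF op the exercise asks for):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 5, 10))        # rank 3: (batch, step, class)
labels = rng.integers(0, 10, size=(2, 5))   # rank 2: class indices

# Numerically stable log-softmax over the class axis
shifted = logits - logits.max(axis=-1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

# Pick out the log-probability of the true class at each position
loss = -np.take_along_axis(log_probs, labels[..., None], axis=-1).squeeze(-1)
print(loss.shape)  # (2, 5): one cross-entropy value per (batch, step)
```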