(These are long novels!) We can also group and slice our dataframe to do further analyses.
### Ex: print the average novel length for male authors and female authors. What conclusions might you draw from this?
### Ex: graph the average novel length by gender
### Ex: add error bars to your graph
*Source: 03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb (lknelson/text-analysis-2017, bsd-3-clause)*
Gold star exercise This one is a bit tricky. If you're not quite there, no worries! We'll work through it together. Ex: plot the average novel length by year, with error bars. Your x-axis should be year, and your y-axis number of words. HINT: Copy and paste what we did above with gender, and then change the necessary v...
```python
# Write your exercise solution here
```
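One possible shape of the solution, as a sketch rather than the official answer: the column names `year` and `word_count` are assumptions standing in for whatever the real dataframe uses, and a tiny toy frame replaces `df` so the aggregation logic is visible.

```python
import pandas as pd

# toy stand-in for df; the real dataframe has one row per novel
toy = pd.DataFrame({
    'year': [1850, 1850, 1851],
    'word_count': [100, 200, 300],
})

# mean length per year, plus the standard error for the error bars
grouped = toy.groupby('year')['word_count'].agg(['mean', 'sem'])
print(grouped)

# grouped['mean'].plot(kind='bar', yerr=grouped['sem'])  # bar chart with error bars
```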
<a id='lambda'></a> 4. Applying NLTK Functions and the lambda function If we want to apply nltk functions we can do so using .apply(). If we want to use list comprehension on the split text, we have to introduce one more Python trick: the lambda function. This simply allows us to write our own function to apply to eac...
```python
df['title_tokens'] = df['title'].apply(nltk.word_tokenize)
df['title_tokens']
```
With this tokenized list we might want to, for example, remove punctuation. Again, we can use the lambda function, with list comprehension.
```python
df['title_tokens_clean'] = df['title_tokens'].apply(lambda x: [word for word in x if word not in list(string.punctuation)])
df['title_tokens_clean']
```
<a id='extract'></a> 5. Extracting Text from a Dataframe We may want to extract the text from our dataframe, to do further analyses on the text only. We can do this using the tolist() function and the join() function.
```python
novels = df['text'].tolist()
print(novels[:1])

# turn all of the novels into one long string using the join function
cat_novels = ''.join(n for n in novels)
print(cat_novels[:100])
```
<a id='exercise'></a> 6. Exercise: Average TTR (if time, otherwise do on your own) Motivating Question: Is there a difference in the average TTR for male and female authors? To answer this, go step by step. For computational reasons we will use the list we created by splitting on white spaces rather than tokenized text...
```python
##Ex: create a new column, 'text_type', which contains a list of unique token types
##Ex: create a new column, 'type_count', which is a count of the token types in each novel.
##Ex: create a new column, 'ttr', which contains the type-token ratio for each novel.
##Ex: Print the average ttr by author gender
##Ex: Graph...
```
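The type-token ratio itself is simple arithmetic; here is a minimal pure-Python sketch (the helper name `ttr` and the pandas usage in the comment are illustrative, not part of the notebook):

```python
def ttr(tokens):
    """Type-token ratio: number of unique token types divided by total tokens."""
    return len(set(tokens)) / len(tokens)

print(ttr(['the', 'cat', 'saw', 'the', 'dog']))  # 4 types / 5 tokens = 0.8

# in the dataframe this would be applied row-wise, e.g.:
# df['ttr'] = df['text_split'].apply(ttr)
```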
Basic usage The simplest option is to simply call the diagnose_tcr_ecs_tcre method of the MAGICC instance and read out the results.
```python
with MAGICC6() as magicc:
    # you can tweak whatever parameters you want in
    # MAGICC6/run/MAGCFG_DEFAULTALL.CFG, here's a few
    # examples that might be of interest
    results = magicc.diagnose_tcr_ecs_tcre(
        CORE_CLIMATESENSITIVITY=2.75,
        CORE_DELQ2XCO2=3.65,
        CORE_HEATXCHANGE_LANDOCEAN=1...
```
*Source: notebooks/Diagnose-TCR-ECS-TCRE.ipynb (openclimatedata/pymagicc, agpl-3.0)*
If we wish, we can alter the MAGICC instance's parameters before calling the diagnose_tcr_ecs_tcre method.
```python
with MAGICC6() as magicc:
    results_default = magicc.diagnose_tcr_ecs_tcre()
    results_low_ecs = magicc.diagnose_tcr_ecs_tcre(CORE_CLIMATESENSITIVITY=1.5)
    results_high_ecs = magicc.diagnose_tcr_ecs_tcre(
        CORE_CLIMATESENSITIVITY=4.5
    )
    print(
        "Default TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE i...
```
Making a plot The output also includes the timeseries that were used in the diagnosis experiment. Hence we can use the output to make a plot.
```python
# NBVAL_IGNORE_OUTPUT
join_year = 1900
pdf = (
    results["timeseries"]
    .filter(region="World")
    .to_iamdataframe()
    .swap_time_for_year()
    .data
)
for variable, df in pdf.groupby("variable"):
    fig, axes = plt.subplots(1, 2, sharey=True, figsize=(16, 4.5))
    unit = df["unit"].unique()[0]
    for sc...
```
3D curves and datasets. To display a figure in three dimensions, the environment has to be prepared first. Displaying 3D figures and setting their properties is somewhat more involved than for 2D figures. The most striking difference is that figures are built around so-called axes objects (roughly, the coordinate axes...

```python
t = linspace(0, 2*pi, 100)  # 100 points between 0 and 2*pi
```
*Source: notebooks/Package04/3D.ipynb (oroszl/szamprob, gpl-3.0)*
Two things happen in the next code cell. First, we create an axes object named ax, explicitly specifying that it should use a 3D coordinate system. Then, acting on this object, we create the figure itself with the plot function. Note that the plot function now expects three input parameters!

```python
ax = subplot(1, 1, 1, projection='3d')  # create a 3D coordinate axes
ax.plot(cos(3*t), sin(3*t), t)
```
As we saw for 2D figures, the plot function can also be used here to display irregularly sampled data.

```python
ax = subplot(1, 1, 1, projection='3d')
ax.plot(rand(10), rand(10), rand(10), 'o')
```
Style definitions are processed via keyword arguments, just as for 2D figures. Let's see an example of this too:

```python
ax = subplot(1, 1, 1, projection='3d')  # create a 3D coordinate axes
ax.plot(cos(3*t), sin(3*t), t, color='green', linestyle='dashed', linewidth=3)
```
A recurring issue when displaying 3D figures is looking at the figure from a good direction. The viewpoint can be set with the view_init function. Its two parameters give the figure's viewpoint in equatorial spherical coordinates: the declination and the a...

```python
ax = subplot(1, 1, 1, projection='3d')  # create a 3D coordinate axes
ax.plot(cos(3*t), sin(3*t), t)
ax.view_init(0, 0)
```
And from the direction of the $y$-axis, like this:

```python
ax = subplot(1, 1, 1, projection='3d')  # create a 3D coordinate axes
ax.plot(cos(3*t), sin(3*t), t)
ax.view_init(0, 90)
```
If we use interactive functions, the viewpoint can be changed interactively as follows:

```python
def forog(th, phi):
    ax = subplot(1, 1, 1, projection='3d')
    ax.plot(sin(3*t), cos(3*t), t)
    ax.view_init(th, phi)

interact(forog, th=(-90, 90), phi=(0, 360));
```
Two-variable functions and surfaces. One advantage of 3D figures is that they can also display surfaces in space. The simplest case is the height-map-like plotting of two-variable functions $$z=f(x,y)$$. As usual, the first task here too is sampling and evaluating the function. Below...

```python
x, y = meshgrid(linspace(-3, 3, 250), linspace(-5, 5, 250))  # generate the sampling points
z = -(sin(x)**10 + cos(10 + y*x)*cos(x)) * exp((-x**2 - y**2)/4)  # evaluate the function
```
This function can be displayed with the plot_surface function.

```python
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
```
It is often illustrative to color the plotted surface according to some color scale. We can do this with the cmap keyword, familiar from 2D figures.

```python
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z, cmap='viridis')
```
The most general way to describe a surface in space is with a two-parameter, vector-valued function. That is,
\begin{equation} \mathbf{r}(u,v)=\left(\begin{array}{c} f(u,v)\\ g(u,v)\\ h(u,v) \end{array}\right) \end{equation}
Let's examine an example of this, where the surface to display is a torus! One possi...

```python
theta, phi = meshgrid(linspace(0, 2*pi, 250), linspace(0, 2*pi, 250))
x = (4 + 1*cos(theta)) * cos(phi)
y = (4 + 1*cos(theta)) * sin(phi)
z = 1*sin(theta)
```
Again, we can plot it with the plot_surface function:

```python
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
```
The figure above can be made a bit better proportioned by adjusting the aspect ratio of the axes and their limits. We can do this with the set_aspect function, and with set_xlim, set_ylim and set_zlim:

```python
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax.set_aspect('equal')
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.set_zlim(-5, 5)
```
Finally, let's make this figure interactive as well:

```python
def forog(th, ph):
    ax = subplot(111, projection='3d')
    ax.plot_surface(x, y, z)
    ax.view_init(th, ph)
    ax.set_aspect('equal')
    ax.set_xlim(-5, 5)
    ax.set_ylim(-5, 5)
    ax.set_zlim(-5, 5)

interact(forog, th=(-90, 90), ph=(0, 360));
```
Force fields in 3D. Vector fields in space, that is, functions that assign a three-dimensional vector to every point of space, can be displayed with the quiver command, just as for 2D figures. In the example below we draw a radially pointing vector at each of 100 points of the unit sphere's surface...

```python
phiv, thv = (2*pi*rand(100), pi*rand(100))                       # these two lines pick 100 random
xv, yv, zv = (cos(phiv)*sin(thv), sin(phiv)*sin(thv), cos(thv))  # points of the unit sphere
uv, vv, wv = (xv, yv, zv)   # and this assigns a radial vector to each point
ax = s...
```
How can we find out the type of a variable? (a) Use the print() function and determine the type by inspecting the output. (b) Use the type() function. (c) Use it in an expression and call print() on the result. (d) Look at the place where the variable was declared. If a="10" and b="Diez", we can say that a and b: (a) Are...

```python
a = '10'
b = 'Diez'
type(b)
```
*Source: Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/02. Variables, tipos y operaciones.ipynb (astro4dev/OAD-Data-Science-Toolkit, gpl-3.0)*
Operations between types

```python
int(3.14)
int(3.9999)  # does it round?
int?
int
int(3.0)
int(3)
int("12")
int("twelve")
float(3)
float?
str(3)
str(3.0)
str(int(2.9999))
```
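To make the behavior above concrete: `int()` truncates floats toward zero and parses digit strings, but it cannot parse spelled-out numbers. A small sketch (not part of the original notebook):

```python
assert int(3.9999) == 3        # int() truncates, it does not round
assert int("12") == 12         # digit strings parse fine
assert str(int(2.9999)) == "2"

try:
    int("twelve")              # not a numeric literal
except ValueError:
    print("int('twelve') raises ValueError")
```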
Variable names

```python
hola=10
hola
mi variable=10
mi_variable=10
mi-variable=10
mi.variable=10
mi$variable=10
variable_1=34
1_variable=34
pi=3.1315
def=10
```
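Which of the names above are legal can be checked programmatically; here is a sketch using only the standard library (the helper `is_valid_name` is hypothetical, not from the notebook):

```python
import keyword

def is_valid_name(name):
    """True if name is usable as a Python variable name."""
    return name.isidentifier() and not keyword.iskeyword(name)

assert is_valid_name("hola")
assert is_valid_name("mi_variable")
assert is_valid_name("variable_1")

# spaces, '-', '.', '$', a leading digit, and reserved words are all rejected
for bad in ("mi variable", "mi-variable", "mi.variable", "mi$variable", "1_variable", "def"):
    assert not is_valid_name(bad)
```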
Model 2: Apply data cleanup Recall that we did some data cleanup in the previous lab. Let's do those before training. This is a dataset that we will need quite frequently in this notebook, so let's extract it first.
```
%%bigquery
CREATE OR REPLACE TABLE serverlessml.cleaned_training_data AS
SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  pickup_longitude AS pickuplon,
  pickup_latitude AS pickuplat,
  dropoff_longitude AS dropofflon,
  dropoff_latitude AS dropofflat,
  passenger_count*1.0 AS passengers
FROM ...
```
*Source: notebooks/launching_into_ml/solutions/2_first_model.ipynb (GoogleCloudPlatform/asl-ml-immersion, apache-2.0)*
Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns. For this dataset, the sorted affinity matrix is almost block diagonal. Note, also, that the block-wise form of this matrix depends on parameter $\gamma$. Exercise 2: ...
```python
t = 0.001
# Kt = <FILL IN>   # Truncated affinity matrix
Kt = K*(K>t)       # Truncated affinity matrix
# Kst = <FILL IN>  # Truncated and sorted affinity matrix
Kst = Ks*(Ks>t)    # Truncated and sorted affinity matrix
# </SOL>
```
*Source: U2.SpectralClustering/.ipynb_checkpoints/SpecClustering-checkpoint.ipynb (ML4DS/ML4all, mit)*
Note that, although the eigenvector components cannot be used as a straightforward cluster indicator, they are strongly informative of the clustering structure. All points in the same cluster have similar values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$. Points from different clusters ...
```python
# <SOL>
g = 20
t = 0.1
K2 = rbf_kernel(X2, X2, gamma=g)
K2t = K2*(K2>t)
G2 = nx.from_numpy_matrix(K2t)
graphplot = nx.draw(G2, X2, node_size=40, width=0.5)
plt.axis('equal')
plt.show()
# </SOL>
```
Visualize the CelebA Data The CelebA dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with 3 color channels (RGB) each. Pre-process and Load the Data Since the project...
```python
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms

def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    ...
```
*Source: DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb (Diyago/Machine-Learning-scripts, apache-2.0)*
Create a DataLoader Exercise: Create a DataLoader celeba_train_loader with appropriate hyperparameters. Call the above function and create a dataloader to view images. * You can decide on any reasonable batch_size parameter * Your image_size must be 32. Resizing the data to a smaller size will make for faster training...
```python
# Define function hyperparameters
batch_size = 
img_size = 

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
```
Next, you can view some images! You should see square images of somewhat-centered faces. Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested imshow code is below, but it may not be perfect.

```python
# helper display function
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next()  # _ for no labels

# plot the image...
```
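The transpose in the note reorders the tensor layout from channels-first to channels-last; a tiny shape-only sketch with NumPy (no real image data):

```python
import numpy as np

chw = np.zeros((3, 32, 32))         # PyTorch image layout: (channels, height, width)
hwc = np.transpose(chw, (1, 2, 0))  # matplotlib layout: (height, width, channels)
print(hwc.shape)                    # (32, 32, 3)
```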
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1 You need to do a bit of pre-processing; you know that the output of a tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a...
```python
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
    '''Scale takes in an image x and returns that image, scaled
       with a feature_range of pixel values from -1 to 1.
       This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1)
    # scale...
```
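One way to complete `scale`, written framework-free so the arithmetic is visible; this is a sketch assuming the input really is in [0, 1] (the same expression works element-wise on a tensor):

```python
def scale(x, feature_range=(-1, 1)):
    """Map x from [0, 1] into feature_range (default [-1, 1])."""
    lo, hi = feature_range
    return x * (hi - lo) + lo

print(scale(0.0), scale(0.5), scale(1.0))  # -1.0 0.0 1.0
```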
Define the Model A GAN is comprised of two adversarial networks, a discriminator and a generator. Discriminator Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a d...
```python
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):

    def __init__(self, conv_dim):
        """
        Initialize the Discriminator Module
        :param conv_dim: The depth of the first convolutional layer
        """
        super(Discriminator, self).__init__()
        # compl...
```
Generator The generator should upsample an input and generate a new image of the same size as our training data (32x32x3). This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class. The inputs to the generator are vectors of some length z_size T...

```python
class Generator(nn.Module):

    def __init__(self, z_size, conv_dim):
        """
        Initialize the Generator Module
        :param z_size: The length of the input latent vector, z
        :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
        """
        super(Generator,...
```
Initialize the weights of your networks To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. The original DCGAN paper says: All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. So, your ne...

```python
def weights_init_normal(m):
    """
    Applies initial weights to certain layers in a model.
    The weights are taken from a normal distribution
    with mean = 0, std dev = 0.02.
    :param m: A module or layer in a network
    """
    # classname will be something like:
    # `Conv`, `BatchNorm2d`, `Linear`, ...
```
Build complete network Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```python
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
    # define discriminator and generator
    D = Discriminator(d_conv_dim)
    G = Generator(z_size=z_size, conv_dim=g_conv_dim)

    # initialize model weights
    D.apply(weights_init_normal)
    G.ap...
```
Exercise: Define model hyperparameters
```python
# Define model hyperparams
d_conv_dim = 
g_conv_dim = 
z_size = 

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
```
Training on GPU Check if you can train on GPU. Here, we'll set this as a boolean variable train_on_gpu. Later, you'll be responsible for making sure that models, model inputs, and loss function arguments are moved to GPU, where appropriate.

```python
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch

# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Training on GPU!')
```
Discriminator and Generator Losses Now we need to calculate the losses for both types of adversarial networks. Discriminator Losses For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss. Remember that we want the discriminator to output 1 for real...
```python
def real_loss(D_out):
    '''Calculates how close discriminator outputs are to being real.
       param, D_out: discriminator logits
       return: real loss'''
    loss = 
    return loss

def fake_loss(D_out):
    '''Calculates how close discriminator outputs are to being fake.
       param, D_out: discriminator logi...
```
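In the notebook these losses would typically be filled in with `nn.BCEWithLogitsLoss`; the underlying math can be sketched without the framework. These scalar helpers are illustrative assumptions, not the notebook's own code:

```python
import math

def bce_with_logits(logit, target):
    """Numerically stable binary cross-entropy on a raw logit."""
    return max(logit, 0) - logit * target + math.log(1 + math.exp(-abs(logit)))

def real_loss_scalar(d_out):
    return bce_with_logits(d_out, 1.0)  # push the output towards "real" (1)

def fake_loss_scalar(d_out):
    return bce_with_logits(d_out, 0.0)  # push the output towards "fake" (0)

# a confident, correct discriminator incurs almost no loss
print(real_loss_scalar(10.0), fake_loss_scalar(-10.0))
```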
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G) Define optimizers for your models with appropriate hyperparameters.
```python
import torch.optim as optim

# Create optimizers for the discriminator D and generator G
d_optimizer = 
g_optimizer = 
```
Training Training will involve alternating between training the discriminator and the generator. You'll use your functions real_loss and fake_loss to help you calculate the discriminator losses. You should train the discriminator by alternating on real and fake images. Then train the generator, which tries to trick the discr...

```python
def train(D, G, n_epochs, print_every=50):
    '''Trains adversarial networks for some number of epochs
       param, D: the discriminator network
       param, G: the generator network
       param, n_epochs: number of epochs to train for
       param, print_every: when to print and record the models' losses
       re...
```
Set your number of training epochs and train your GAN!
```python
# set number of epochs
n_epochs = 

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
```
Training loss Plot the training losses for the generator and discriminator, recorded after each epoch.
```python
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
```
Generator samples from training View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
```python
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(16, 4), nrows=2, ncols=8, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        img = img.detach().cpu().numpy()
        img = np.transpose(img, (1, ...
```
Let us perform our analysis on two selected days.

```python
gjw.store.window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00')
gjw.set_window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
#plt.show()
house = elec['fridge']  # only one meter...
```
*Source: notebooks/disaggregation-hart-CO-active_only.ipynb (gjwo/nilm_gjw_data, apache-2.0)*
Hart Training We'll now train from the aggregate data. The algorithm segments the time series into steady and transient states, so we first identify those states. Next, we pair the on and off transitions based on their proximity in time and value.

```python
#df.ix['2015-09-03 11:00:00+01:00':'2015-09-03 12:00:00+01:00'].plot()  # select a time range and plot it
#plt.show()
h = Hart85()
h.train(mains, cols=[('power', 'active')])
h.steady_states
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax=ax)
plt.ylabel("Power (W)")
plt.xlabel("Time")
#plt.sh...
```
Hart Disaggregation
```python
disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')
output = HDFDataStore(disag_filename, 'w')
h.disaggregate(mains, output, sample_period=1)
output.close()
disag_hart = DataSet(disag_filename)
disag_hart
disag_hart_elec = disag_hart.buildings[building_number].elec
disag_hart_elec
```
Combinatorial Optimisation training
```python
co = CombinatorialOptimisation()
co.train(mains, cols=[('power', 'active')])
co.steady_states
ax = mains.plot()
co.steady_states['active average'].plot(style='o', ax=ax)
plt.ylabel("Power (W)")
plt.xlabel("Time")
disag_filename = join(data_dir, 'disag_gjw_co.hdf5')
output = HDFDataStore(disag_filename, 'w')
co.di...
```
This can't be used here because there is no test data for comparison:

```python
from nilmtk.metrics import f1_score

f1_hart = f1_score(disag_hart_elec, test_elec)
f1_hart.index = disag_hart_elec.get_labels(f1_hart.index)
f1_hart.plot(kind='barh')
plt.ylabel('appliance')
plt.xlabel('f-score')
plt.title("Hart")
```
Uniform Sample A uniform sample is a sample drawn at random without replacement.

```python
def sample(num_sample, top):
    """
    Create a random sample from a table

    Attributes
    ---------
    num_sample: int
    top: dataframe

    Returns a random subset of table index
    """
    df_index = []
    for i in np.arange(0, num_sample, 1):
        # pick randomly from the whole table
        ...
```
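The same idea is available in the standard library: `random.sample` draws without replacement. A sketch independent of the notebook's dataframe (the helper name `sample_indices` is hypothetical):

```python
import random

def sample_indices(num_sample, n_rows, seed=None):
    """Uniform sample without replacement: num_sample distinct row indices."""
    rng = random.Random(seed)
    return rng.sample(range(n_rows), num_sample)

idx = sample_indices(3, 10, seed=42)
print(idx)  # three distinct indices in [0, 10)
```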
*Source: Data/data_Stats_3_ChanceModels.ipynb (omoju/Fundamentals, gpl-3.0)*
We can simulate the act of rolling dice by just pulling out rows.

```python
index_ = sample(3, die)
df = die.ix[index_, :]
df

index_ = sample(1, coin)
df = coin.ix[index_, :]
df

def sum_draws(n, box):
    """
    Construct histogram for the sum of n draws from a box with replacement

    Attributes
    -----------
    n: int (number of draws)
    box: dataframe (the box model)
    """
    ...
```
Modeling the Law of Averages The law of averages says that as the number of draws increases, the observed proportion converges to the expected proportion, even though the absolute difference between the observed and expected counts tends to grow. $$ Chance \ Error = Observed - Expected $$ In the case of coin tosses, as the number of tosses goes up, so does the absolute chance error.

```python
def number_of_heads(n, box):
    """
    The number of heads in n tosses

    Attributes
    -----------
    n: int (number of draws)
    box: dataframe (the coin box model)
    """
    data = np.zeros(shape=(n, 1))
    if n > 0:
        value = np.random.randint(0, len(box), n)
        data = value
    else:
        ...
```
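The claim can be checked with a quick standard-library simulation (helper names are illustrative): the observed fraction of heads settles near 0.5 even while the absolute chance error |heads - n/2|, measured in counts, can be large.

```python
import random

def toss(n, seed=0):
    """Simulate n fair coin tosses; return the number of heads."""
    rng = random.Random(seed)
    return sum(rng.randint(0, 1) for _ in range(n))

n = 10_000
heads = toss(n)
chance_error = abs(heads - n / 2)  # absolute error, in counts
fraction = heads / n               # observed proportion, close to 0.5
print(chance_error, fraction)
```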
Statistical analysis on Allsides bias rating: No sources from the images boxes were rated in the Allsides bias rating dataset. Therefore comparisons between bias of baseline sources versus image box sources could not be performed. Statistical analysis on Facebook Study bias rating: Hillary Clinton Image Box images vers...
```python
print("Baseline skew: ", stats.skew(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
```
*Source: Statistics.ipynb (comp-journalism/Baseline_Problem_for_Algorithm_Audits, mit)*
From the scipy stats page: "For normally distributed data, the skewness should be about 0. A skewness value > 0 means that there is more weight in the left tail of the distribution. The function skewtest can be used to determine if the skewness value is close enough to 0, statistically speaking."

```python
print("Baseline skew: ", stats.skewtest(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skewtest(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
stats.ks_2samp(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3],
               ...
```
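For intuition about the sign convention (by the usual definition, a positive skewness corresponds to a longer right tail), the moment-based skewness that scipy computes can be sketched in pure Python. The helper name is illustrative:

```python
def skewness(xs):
    """Biased sample skewness: third central moment over m2 ** 1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

print(skewness([1, 2, 3]))      # symmetric data: 0.0
print(skewness([1, 1, 1, 10]))  # long right tail: positive
```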
Donald Trump Image Box images versus Baseline images source bias according to Facebook bias ratings:
```python
print("Baseline skew: ", stats.skew(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3]))
stats.ks_2samp(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3],
               ...
```
The Kolmogorov-Smirnov analysis shows that the distribution of political representation across image sources is different between the baseline images and those found in the image box. Statistical analysis on Allsides + Facebook + MondoTimes + my bias ratings: Convert strings to integers:

```python
def convert_to_ints(col):
    if col == 'Left':
        return -1
    elif col == 'Center':
        return 0
    elif col == 'Right':
        return 1
    else:
        return np.nan

HC_imagebox['final_rating_ints'] = HC_imagebox.final_rating.apply(convert_to_ints)
DT_imagebox['final_rating_ints'] = DT_imagebox.final_...
```
Prepare data for the chi-squared test.

```python
HC_baseline_counts = HC_baseline.final_rating.value_counts()
HC_imagebox_counts = HC_imagebox.final_rating.value_counts()
DT_baseline_counts = DT_baseline.final_rating.value_counts()
DT_imagebox_counts = DT_imagebox.final_rating.value_counts()
HC_baseline_counts.head()
normalised_bias_ratings = pd.DataFrame({'HC_Imag...
```
Remove the Unknown/unreliable row.

```python
normalised_bias_ratings = normalised_bias_ratings[:3]
```
Calculate percentages for plotting purposes
```python
normalised_bias_ratings.loc[:, 'HC_Baseline_pcnt'] = normalised_bias_ratings.HC_Baseline/normalised_bias_ratings.HC_Baseline.sum()*100
normalised_bias_ratings.loc[:, 'HC_ImageBox_pcnt'] = normalised_bias_ratings.HC_ImageBox/normalised_bias_ratings.HC_ImageBox.sum()*100
normalised_bias_ratings.loc[:, 'DT_Baseline_pcnt'] = ...
```
Test Hillary Clinton Image Box images against Baseline images:
```python
stats.chisquare(f_exp=normalised_bias_ratings.HC_Baseline,
                f_obs=normalised_bias_ratings.HC_ImageBox)
HC_percentages.plot.bar()
```
Test Donald Trump Image Box images against Baseline images:

```python
stats.chisquare(f_exp=normalised_bias_ratings.DT_Baseline,
                f_obs=normalised_bias_ratings.DT_ImageBox)
DT_percentages.plot.bar()
```
By default, the storage bucket we create is named using the project id.

```python
storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/'
storage_region = 'us-central1'
workspace_path = os.path.join(storage_bucket, 'census')

# We will rely on outputs from data preparation steps in the previous notebook.
local_workspace_path = '/content/datalab/workspace/census'

!gs...
```
*Source: samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb (googledatalab/notebooks, apache-2.0)*
NOTE: If you have previously run this notebook, and want to start from scratch, then run the next cell to delete previous outputs.
```python
!gsutil -m rm -rf {workspace_path}
```
Data To get started, we will copy the data into this workspace from the local workspace created in the previous notebook. Generally, in your own work you will have existing data, which you may or may not need to copy around depending on its current location.

```python
!gsutil -q cp {local_workspace_path}/data/train.csv {workspace_path}/data/train.csv
!gsutil -q cp {local_workspace_path}/data/eval.csv {workspace_path}/data/eval.csv
!gsutil -q cp {local_workspace_path}/data/schema.json {workspace_path}/data/schema.json
!gsutil ls -r {workspace_path}
```
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
DataSets
train_data_path = os.path.join(workspace_path, 'data/train.csv') eval_data_path = os.path.join(workspace_path, 'data/eval.csv') schema_path = os.path.join(workspace_path, 'data/schema.json') train_data = ml.CsvDataSet(file_pattern=train_data_path, schema_file=schema_path) eval_data = ml.CsvDataSet(file_pattern=eval_da...
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Data Analysis When building a model, a number of pieces of information about the training data are required - for example, the list of entries or vocabulary of a categorical/discrete column, or aggregate statistics like min and max for numerical columns. These require a full pass over the training data, which is usually ...
analysis_path = os.path.join(workspace_path, 'analysis') regression.analyze(dataset=train_data, output_dir=analysis_path, cloud=True)
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
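The analysis step is essentially one pass over the training rows collecting min/max for numeric columns and a vocabulary for categorical ones. A rough sketch of that idea on a toy table — the column names below are invented, not the census schema:

```python
# Toy rows standing in for the census training CSV.
rows = [
    {"age": 25, "occupation": "clerical"},
    {"age": 38, "occupation": "sales"},
    {"age": 52, "occupation": "clerical"},
]

# Aggregate statistics for a numeric column.
column_stats = {"age": {"min": min(r["age"] for r in rows),
                        "max": max(r["age"] for r in rows)}}

# Vocabulary (distinct values) for a categorical column.
vocab = sorted({r["occupation"] for r in rows})
print(column_stats, vocab)
```

The service-side `regression.analyze` writes the equivalent results out as `stats.json` and per-column vocab files.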
Like in the local notebook, the output of analysis is a stats file that contains analysis from the numerical columns, and a vocab file from each categorical column.
!gsutil ls {analysis_path}
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Let's inspect one of the files, in particular the numerical analysis, since it will also tell us some interesting statistics about the income column, the value we want to predict.
!gsutil cat {analysis_path}/stats.json
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Exploring dataset Please see this notebook for more context on this problem and how the features were chosen.
#%writefile babyweight/trainer/model.py # Copyright 2018 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 #...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Creating a ML dataset using BigQuery </h2> We can use BigQuery to create the training and evaluation datasets. Because of the masking (ultrasound vs. no ultrasound), the query itself is a little complex.
#%writefile -a babyweight/trainer/model.py def create_queries(): query_all = """ WITH with_ultrasound AS ( SELECT weight_pounds AS label, CAST(is_male AS STRING) AS is_male, mother_age, CAST(plurality AS STRING) AS plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Creating a scikit-learn model using random forests </h2> Let's train the model locally
#%writefile -a babyweight/trainer/model.py def input_fn(indf): import copy import pandas as pd df = copy.deepcopy(indf) # one-hot encode the categorical columns df["plurality"] = df["plurality"].astype(pd.api.types.CategoricalDtype( categories=["Single","Multiple","1","2","3","4","5"])) ...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Packaging up as a Python package Note the %writefile in the cells above. I uncommented those and ran the cells to write out a model.py. The following cell writes out a task.py
%writefile babyweight/trainer/task.py # Copyright 2018 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # ...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Try out the package on a subset of the data.
%bash export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight python -m trainer.task \ --bucket=${BUCKET} --frac=0.001 --job-dir=gs://${BUCKET}/babyweight/sklearn --projectId $PROJECT
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Training on Cloud ML Engine </h2> Submit the code to the ML Engine service
%bash RUNTIME_VERSION="1.8" PYTHON_VERSION="2.7" JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S") JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}" gcloud ml-engine jobs submit training $JOB_NAME \ --job-dir $JOB_DIR \ --package-path $(pwd)/babyweight/trainer \ --module-name trainer.task \ --region us-cent...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The training finished in 20 minutes with an RMSE of 1.05 lbs. <h2> Deploying the trained model </h2> <p> Deploying the trained model to act as a REST web service is a simple gcloud call.
%bash gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1 %bash MODEL_NAME="babyweight" MODEL_VERSION="skl" MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1) echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" #gcloud ml-engine versio...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Using the model to predict </h2> <p> Send a JSON request to the endpoint of the service to make it predict a baby's weight ... Note that we need to send in an array of numbers in the same order as when we trained the model. You can avoid some of this preprocessing by using sklearn's Pipeline, but we did our preproc...
data = [] for i in range(2): data.append([]) for col in eval_x: # convert from numpy integers to standard integers data[i].append(int(np.uint64(eval_x[col][i]).item())) print(eval_x.columns) print(json.dumps(data))
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
As long as you send in the data in that order, it will work:
from googleapiclient import discovery from oauth2client.client import GoogleCredentials import json credentials = GoogleCredentials.get_application_default() api = discovery.build('ml', 'v1', credentials=credentials) request_data = {'instances': # [u'mother_age', u'gestation_weeks', u'is_male_Unknown', u'is_male_0'...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Hyperparameter tuning Let's do a bunch of parallel trials to find good maxDepth and numTrees
%writefile hyperparam.yaml trainingInput: hyperparameters: goal: MINIMIZE maxTrials: 100 maxParallelTrials: 5 hyperparameterMetricTag: rmse params: - parameterName: maxDepth type: INTEGER minValue: 2 maxValue: 8 scaleType: UNIT_LINEAR_SCALE - parameterName: numTrees...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
If you go to the GCP console and click on the job, you will see the trial information start to populate, with the lowest rmse trial listed first. I got the best performance with these settings: <pre> "hyperparameters": { "maxDepth": "8", "numTrees": "90" }, "finalMetric": { ...
%writefile largemachine.yaml trainingInput: scaleTier: CUSTOM masterType: large_model %bash RUNTIME_VERSION="1.8" PYTHON_VERSION="2.7" JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S") JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}" gcloud ml-engine jobs submit training $JOB_NAME \ --job-dir $JOB_DIR \ -...
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
In a Tropical Semiring The following example is taken from mohri.2009.hwa, Figure 12.
%%automaton --strip a context = "lal_char, zmin" $ -> 0 0 -> 1 <0>a, <1>b, <5>c 0 -> 2 <0>d, <1>e 1 -> 3 <0>e, <1>f 2 -> 3 <4>e, <5>f 3 -> $ a.push_weights()
doc/notebooks/automaton.push_weights.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
Note that weight pushing improves the "minimizability" of weighted automata:
a.minimize() a.push_weights().minimize()
doc/notebooks/automaton.push_weights.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
In $\mathbb{Q}$ Again, the following example is taken from mohri.2009.hwa, Figure 12 (subfigure 12.d lacks two transitions), but computed in $\mathbb{Q}$ rather than $\mathbb{R}$ to render more readable results.
%%automaton --strip a context = "lal_char, q" $ -> 0 0 -> 1 <0>a, <1>b, <5>c 0 -> 2 <0>d, <1>e 1 -> 3 <0>e, <1>f 2 -> 3 <4>e, <5>f 3 -> $ a.push_weights()
doc/notebooks/automaton.push_weights.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
1. Load blast hits
#Load blast hits blastp_hits = pd.read_csv("2_blastp_hits.tsv",sep="\t",quotechar='"') blastp_hits.head() #Filter out Metahit 2010 hits, keep only Metahit 2014 blastp_hits = blastp_hits[blastp_hits.db != "metahit_pep"]
phage_assembly/5_annotation/asm_v1.2/orf_160621/3b_select_reliable_orfs.ipynb
maubarsom/ORFan-proteins
mit
2.4.3 Write out filtered blast hits
filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())] filt_blastp_hits.to_csv("3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.tsv",sep="\t",quotechar='"') filt_blastp_hits.head()
phage_assembly/5_annotation/asm_v1.2/orf_160621/3b_select_reliable_orfs.ipynb
maubarsom/ORFan-proteins
mit
train test split
sku_id_groups = np.load(npz_sku_ids_group_kmeans) for key, val in sku_id_groups.iteritems(): print key, ",", val.shape # gp_predictor = GaussianProcessPricePredictorForCluster(npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, # mobs_norm_path=mobs_norm_path, #...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
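A train/test split in this setting can be sketched as a simple deterministic partition of ids. The real notebook works per SKU cluster loaded from `npz_sku_ids_group_kmeans`; the ids below are invented for illustration:

```python
# Illustrative list of SKU ids; the notebook instead loads one id
# array per k-means cluster from an .npz file.
sku_ids = list(range(10))

# Hold out every 5th id for testing, giving an 80/20 split.
test_ids = [s for s in sku_ids if s % 5 == 0]
train_ids = [s for s in sku_ids if s % 5 != 0]
print(len(train_ids), len(test_ids))
```

Any deterministic rule works as long as the two sets are disjoint and together cover all ids.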
Cross Validation
#writing to bayes_opt_dir = data_path + '/gp_regressor' assert isdir(bayes_opt_dir) pairs_ts_npy_filename = 'pairs_ts' cv_score_dict_npy_filename = 'dtw_scores' pairs_ts_npy_filename = 'pairs_ts' res_gp_filename = 'res_gp_opt'
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster: 6 Best Length Scale: 1.2593471510883105 n restart optimizer: 5 Cluster: 4 Best Length Scale: 2.5249662383238189 n restarts optimizer: 4 Cluster: 0 Best Length Scale: 4.2180911518619402 n restarts optimizer: 3 Cluster: 1 Best Length Scale: 0.90557520548216341 n restarts optimizer: 2 Cluster: 7 Best Length Sc...
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=9, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 2
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=2, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 3
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=3, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 5
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=5, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 7
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=7, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 1
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=1, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 6
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=6, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 4
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=4, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 0
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=0, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=...
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
target data for feature selection: average all data for each compound
# load the training data data = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/TrainSet.txt"),sep='\t') data.drop(['Intensity','Odor','Replicate','Dilution'],axis=1, inplace=1) data.columns = ['#oID', 'individual'] + list(data.columns)[2:] data.head() # load leaderboard data and reshape them to match th...
opc_python/hulab/collaboration/target_data_preparation.ipynb
dream-olfaction/olfaction-prediction
mit
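Averaging all ratings per compound is a plain group-by-mean; a minimal pure-Python sketch on invented ratings (the notebook does the equivalent with pandas on the `#oID` column):

```python
# Toy (compound id, rating) pairs; the real data holds one row per
# individual per compound, with many perceptual descriptors.
ratings = [(1, 40.0), (1, 60.0), (2, 10.0), (2, 30.0), (2, 50.0)]

# Group values by compound id, then average each group.
groups = {}
for oid, value in ratings:
    groups.setdefault(oid, []).append(value)
averages = {oid: sum(v) / len(v) for oid, v in groups.items()}
print(averages)
```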
target data for training: filter out the relevant data for each compound
# load the train data data = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/TrainSet.txt"),sep='\t') data.drop(['Odor','Replicate'],axis=1, inplace=1) data.columns = [u'#oID','Intensity','Dilution', u'individual', u'INTENSITY/STRENGTH', u'VALENCE/PLEASANTNESS', u'BAKERY', u'SWEET', u'FRUIT', u'FISH', u'G...
opc_python/hulab/collaboration/target_data_preparation.ipynb
dream-olfaction/olfaction-prediction
mit
Enhance Image
# Enhance image image_enhanced = cv2.equalizeHist(image)
machine-learning/enhance_contrast_of_greyscale_image.ipynb
tpin3694/tpin3694.github.io
mit
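`cv2.equalizeHist` remaps grey levels through the image's normalised cumulative histogram so that frequent levels spread out across the range. A toy pure-Python sketch of that mapping on an invented 8-level "image" (OpenCV's exact rounding and cdf-offset handling differ slightly, so treat this as the idea rather than a bit-exact reimplementation):

```python
# An invented tiny "image" with grey levels 0..7.
pixels = [0, 0, 1, 1, 1, 2, 3, 7]
levels = 8

# Histogram and cumulative histogram of the grey levels.
hist = [pixels.count(g) for g in range(levels)]
cdf, total = [], 0
for h in hist:
    total += h
    cdf.append(total)

# Look-up table: scale the normalised cdf back to the level range.
n = len(pixels)
lut = [round(c / n * (levels - 1)) for c in cdf]
equalized = [lut[p] for p in pixels]
print(equalized)
```

The clustered low levels get pushed apart while the top level stays at the maximum, which is the contrast enhancement the cell above applies to a greyscale image.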