markdown | code | path | repo_name | license
|---|---|---|---|---|
Well, we're still not entirely happy with these results.
We still haven't defined the type of the input value.
The %%cython magic function provides a number of options, among them -a or --annotate (in addition to the -n or --name we have already seen). If we pass this parameter we will be able to see a representation... | %%cython --annotate
import numpy as np
cdef tuple cbusca_min_cython3(malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = []
minimosy = []
for i in range(start, ... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
The if looks like the slowest part. We are using the input value, which has no Cython type defined.
The loops appear to be optimized (we declared the variables involved in the loops as unsigned int).
But none of the parts the numpy array passes through look very optimized...
Cyth... | %%cython --name probandocython4
import numpy as np
cimport numpy as np
cpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
Wow!!! We have just obtained a speedup of 25x to 30x.
Let's check that the result is the same as that of the original function: | a, b = busca_min(data)
print(a)
print(b)
aa, bb = busca_min_cython4(data)
print(aa)
print(bb)
print(np.array_equal(a, aa))
print(np.array_equal(b, bb)) | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
Well, it seems so :-)
Let's see whether we have turned most of the previous code white, or at least lighter, using --annotate. | %%cython --annotate
import numpy as np
cimport numpy as np
cpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx ... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
We can see that many of the dark parts are now lighter!!! But it seems there is still room for improvement.
Cythonizing, which is a gerund (take 5).
Let's see whether defining the type of the function's result as a numpy array instead of a tuple brings any improvement: | %%cython --name probandocython5
import numpy as np
cimport numpy as np
cpdef np.ndarray[int, ndim = 2] busca_min_cython5(np.ndarray[double, ndim = 2] malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsi... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
Well, it seems that compared to the previous version we only gain about 2% - 4%.
Cythonizing, which is a gerund (take 6).
Let's stop using lists and instead use empty numpy arrays that we 'fill in' with numpy.append. Let's see whether using numpy arrays everywhere gets us any kind of improvement: | %%cython --name probandocython6
import numpy as np
cimport numpy as np
cpdef tuple busca_min_cython6(np.ndarray[double, ndim = 2] malla):
cdef np.ndarray[long, ndim = 1] minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef un... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
Actually, in the previous piece of code I am doing something very inefficient. The numpy.append function does not behave like a list you keep appending elements to. What we are really doing is making copies of the existing array to turn it into a new array with one extra element. This is not what we intend... | %%cython --name probandocython7
import numpy as np
cimport numpy as np
from cpython cimport array as c_array
from array import array
cpdef tuple busca_min_cython7(np.ndarray[double, ndim = 2] malla):
cdef c_array.array minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cd... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
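The cost of repeated numpy.append, as opposed to list appends, can be seen in a plain Python sketch (illustrative only, not part of the notebook):

```python
import numpy as np

# list.append grows the list in place (amortized O(1) per append)
acc_list = []
for i in range(5):
    acc_list.append(i)

# np.append allocates and copies a brand-new array on every call,
# so building an array of n elements this way costs O(n^2) overall
acc_arr = np.array([], dtype=int)
for i in range(5):
    acc_arr = np.append(acc_arr, i)
```

Both loops produce the same values, but only the first one scales.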
It seems we have gained another 25% - 30% over the most efficient version we had achieved so far. Compared to the initial pure Python implementation we have an improvement of 30x - 35x over the initial speed.
Let's check whether we still get the same results. | a, b = busca_min(data)
print(a)
print(b)
aa, bb = busca_min_cython7(data)
print(aa)
print(bb)
print(np.array_equal(a, aa))
print(np.array_equal(b, bb)) | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
What happens if the size of the array increases? | data2 = np.random.randn(5000, 5000)
%timeit busca_min(data2)
%timeit busca_min_cython7(data2)
a, b = busca_min(data2)
print(a)
print(b)
aa, bb = busca_min_cython7(data2)
print(aa)
print(bb)
print(np.array_equal(a, aa))
print(np.array_equal(b, bb)) | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
It seems that as we increase the size of the input data the numbers are consistent and performance holds up. In this particular case it seems we have already reached speedups of more than 35x!! over the initial implementation.
Cythonizing, which is a gerund (take 8).
We can use directiv... | %%cython --name probandocython8
import numpy as np
cimport numpy as np
from cpython cimport array as c_array
from array import array
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef tuple busca_min_cython8(np.ndarray[double, ndim = 2] malla):
cdef c_array.array minimosx, minimosy
cdef... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
It seems we have managed to squeeze out a little more performance.
Cythonizing, which is a gerund (take 9).
Instead of using numpy arrays we are going to use memoryviews. Memoryviews are fast-access arrays. If we only want to store things and do not need any of the features of a numpy array, they can be a good... | %%cython --name probandocython9
import numpy as np
cimport numpy as np
from cpython cimport array as c_array
from array import array
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
#cpdef tuple busca_min_cython9(np.ndarray[double, ndim = 2] malla):
cpdef tuple busca_min_cython9(double [:,:] malla)... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
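Cython's typed memoryviews (like the `double [:,:]` above) build on the same buffer protocol as Python's built-in memoryview, which gives zero-copy access to any contiguous buffer. A pure-Python sketch of the idea (illustrative, not Cython code):

```python
from array import array

# a raw buffer of doubles, like the data behind a numpy array
buf = array('d', [0.0] * 6)

# a memoryview exposes the buffer without copying it
view = memoryview(buf)
view[0] = 3.14  # writes go straight to the underlying buffer
```

In Cython, indexing such a view compiles down to direct pointer arithmetic, which is where the speed comes from.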
It seems that, virtually, the performance is similar to what we already had, so it looks like we have stayed about the same.
Bonus track
I am going to try to use pypy (2.4 (CPython 2.7)) together with numpypy to see what we get. | %%pypy
import numpy as np
import time
np.random.seed(0)
data = np.random.randn(2000,2000)
def busca_min(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < mall... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
The last value of the previous output is the average time after repeating the computation 100 times.
Wow!! It seems that without making any modifications the result is 10x - 15x faster than the one obtained with the initial function. And it ends up being only 3.5x slower than what we achieved with Cython.
R... | funcs = [busca_min, busca_min_numba, busca_min_cython1,
busca_min_cython2, busca_min_cython3,
busca_min_cython4, busca_min_cython5,
busca_min_cython6, busca_min_cython7,
busca_min_cython8, busca_min_cython9]
t = []
for func in funcs:
res = %timeit -o func(data)
t.append(res.b... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
In the previous chart, the first bar corresponds to the starting function (busca_min). Remember that the pypy version took about 0.38 seconds.
And now let's compare the times between busca_min (the original version) and the last cython version we created, busca_min_cython9, using different sizes of ... | tamanyos = [10, 100, 500, 1000, 2000, 5000]
t_p = []
t_c = []
for i in tamanyos:
data = np.random.randn(i, i)
res = %timeit -o busca_min(data)
t_p.append(res.best)
res = %timeit -o busca_min_cython9(data)
t_c.append(res.best)
plt.figure(figsize = (10,6))
plt.plot(tamanyos, t_p, 'bo-')
plt.plot(tama... | C elemental, querido Cython..ipynb | Ykharo/notebooks | bsd-2-clause |
Given a 2D set of points spanned by the $x$ and $y$ axes, we will try to fit the line that best approximates the data. The equation of the line, in slope-intercept form, is: $y = mx + b$. | def generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise):
# randomly select x
x = np.random.uniform(-abs_value, abs_value, num_points)
# y = mx + b + noise
y = slope*x + intercept + np.random.uniform(-abs_noise, abs_noise, num_points)
return x, y
def plot_points(... | src/linear_regression/linear_regression.ipynb | kaushikpavani/neural_networks_in_python | mit |
If $N$ = num_points, then the error in fitting a line to the points (also defined as the cost, $C$) can be defined as:
$C = \sum_{i=1}^{N} (y_i-(mx_i+b))^2$
To perform gradient descent, we need the partial derivatives of the cost $C$ with respect to the slope $m$ and the intercept $b$.
$\frac{\partial C}{\partial m} = \sum_{i=1}^{N} -2(y_i-... | # this function computes gradient with respect to slope m
def grad_m (x, y, m, b):
return np.sum(np.multiply(-2*(y - (m*x + b)), x))
# this function computes gradient with respect to intercept b
def grad_b (x, y, m, b):
return np.sum(-2*(y - (m*x + b)))
# Performs gradient descent
def gradient_descent (x, y, ... | src/linear_regression/linear_regression.ipynb | kaushikpavani/neural_networks_in_python | mit |
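The gradient_descent routine above is truncated; a minimal sketch of how such a loop could look, reusing the two gradient functions (the learning rate and step count here are illustrative choices, not the notebook's values):

```python
import numpy as np

def grad_m(x, y, m, b):
    # partial derivative of the cost with respect to the slope m
    return np.sum(np.multiply(-2 * (y - (m * x + b)), x))

def grad_b(x, y, m, b):
    # partial derivative of the cost with respect to the intercept b
    return np.sum(-2 * (y - (m * x + b)))

def gradient_descent(x, y, learning_rate=1e-3, num_steps=2000):
    # start from a flat line and step down the gradient of the cost
    m, b = 0.0, 0.0
    for _ in range(num_steps):
        m -= learning_rate * grad_m(x, y, m, b)
        b -= learning_rate * grad_b(x, y, m, b)
    return m, b
```

On noiseless data generated from y = 2x + 1 this recovers the slope and intercept to within a few percent.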
Now create the SparkContext. A SparkContext represents the connection to a Spark cluster, and can be used to create RDDs and broadcast variables on that cluster.
Note! You can only have one SparkContext at a time the way we are running things here. | sc = SparkContext() | udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb | AtmaMani/pyChakras | mit |
Basic Operations
We're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.
Let's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file: | %%writefile example.txt
first line
second line
third line
fourth line | udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb | AtmaMani/pyChakras | mit |
Creating the RDD
Now we can take in the textfile using the textFile method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings. | textFile = sc.textFile('example.txt') | udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb | AtmaMani/pyChakras | mit |
Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.
Actions
We have just created an RDD using the textFile method and can perform operations on this object, such a... | textFile.count()
textFile.first() | udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb | AtmaMani/pyChakras | mit |
Transformations
Now we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's... | secfind = textFile.filter(lambda line: 'second' in line)
# RDD
secfind
# Perform action on transformation
secfind.collect()
# Perform action on transformation
secfind.count() | udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb | AtmaMani/pyChakras | mit |
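The analogy with Python's built-in filter holds locally too; this plain-Python sketch (no Spark required, using the same example lines) mirrors what the transformation does to each record:

```python
lines = ["first line", "second line", "third line", "fourth line"]

# keep only the elements that satisfy the predicate,
# just as RDD.filter keeps only the matching records
secfind = list(filter(lambda line: "second" in line, lines))
```

The difference is that the RDD version evaluates lazily and in parallel across the cluster, while this runs eagerly in one process.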
Load gromacs trajectory/topology
Gromacs was used to sample a dilute solution of sodium chloride in SPC/E water for 100 ns.
The trajectory and .gro file loaded below have been stripped of hydrogens to reduce disk space. | traj = md.load('gmx/traj_noh.xtc', top='gmx/conf_noh.gro')
traj | nacl-water/nacl.ipynb | mlund/kirkwood-buff | mit |
Calculate average number densities for solute and solvent | volume=0
for vec in traj.unitcell_lengths:
volume = volume + vec[0]*vec[1]*vec[2] / traj.n_frames
N_c = len(traj.topology.select('name NA or name CL'))
N_w = len(traj.topology.select('name O'))
rho_c = N_c / volume
rho_w = N_w / volume
print "Simulation time = ", traj.time[-1]*1e-3, 'ns'
print "Averag... | nacl-water/nacl.ipynb | mlund/kirkwood-buff | mit |
Compute and plot RDFs
Note: The radial distribution function in mdtraj differs from e.g. Gromacs g_rdf in
the way the data is normalized, and the $g(r)$ may need rescaling. It seems that densities
are calculated from the number of selected pairs, which for the cc case excludes all the
self terms. This can be easily corrected an... | rmax = (volume)**(1/3.)/2
select_cc = traj.topology.select_pairs('name NA or name CL', 'name NA or name CL')
select_wc = traj.topology.select_pairs('name NA or name CL', 'name O')
r, g_cc = md.compute_rdf(traj, select_cc, r_range=[0.0,rmax], bin_width=0.01, periodic=True)
r, g_wc = md.compute_rdf(traj, select_wc, r_ran... | nacl-water/nacl.ipynb | mlund/kirkwood-buff | mit |
Calculate KB integrals
Here we calculate the number of solute molecules around other solute molecules (cc) and around water (wc).
For example,
$$ N_{cc} = 4\pi\rho_c\int_0^{\infty} \left ( g(r)_{cc} -1 \right ) r^2 dr$$
The preferential binding parameter is subsequently calculated as $\Gamma = N_{cc}-N_{wc}$. | dr = r[1]-r[0]
N_cc = rho_c * 4*pi*np.cumsum( ( g_cc - 1 )*r**2*dr )
N_wc = rho_c * 4*pi*np.cumsum( ( g_wc - 1 )*r**2*dr )
Gamma = N_cc - N_wc
plt.xlabel('$r$/nm')
plt.ylabel('$\\Gamma = N_{cc}-N_{wc}$')
plt.plot(r, Gamma, 'r-') | nacl-water/nacl.ipynb | mlund/kirkwood-buff | mit |
Finite system size corrected KB integrals
As can be seen in the above figure, the KB integrals do not converge since in a finite sized $NVT$ simulation,
$g(r)$ can never exactly go to unity at large separations.
To correct for this, a simple scaling factor can be applied, as described in the link at the top of the page,
$$ ... | Vn = 4*pi/3*r**3 / volume
g_ccc = g_cc * N_c * (1-Vn) / ( N_c*(1-Vn)-N_cc-1)
g_wcc = g_wc * N_w * (1-Vn) / ( N_w*(1-Vn)-N_wc-0)
N_ccc = rho_c * 4*pi*dr*np.cumsum( ( g_ccc - 1 )*r**2 )
N_wcc = rho_c * 4*pi*dr*np.cumsum( ( g_wcc - 1 )*r**2 )
Gammac = N_ccc - N_wcc
plt.xlabel('$r$/nm')
plt.ylabel('$\\Gamma = N_{cc}-N_{w... | nacl-water/nacl.ipynb | mlund/kirkwood-buff | mit |
Caveats when there are no index labels
When no labels have been assigned, integer slicing is treated as label slicing, so the last value is included | df = pd.DataFrame(np.random.randn(5, 3))
df
df.columns = ["c1", "c2", "c3"]
df.ix[0:2, 1:2] | 통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/14.Pandas 고급 인덱싱.ipynb | kimkipyo/dss_git_kkp | mit |
The loc indexer
Label-based indexing
Even when a number is given, it is interpreted as a label.
Lists of labels allowed
Label slicing allowed
Boolean arrays allowed
The iloc indexer
Integer position-based indexing
String labels not allowed
Lists of integers allowed
Integer slicing allowed
Boolean arrays allowed | np.random.seed(1)
df = pd.DataFrame(np.random.randint(1, 11, size=(4,3)),
columns=["A", "B", "C"], index=["a", "b", "c", "d"])
df
df.ix[["a", "c"], "B":"C"]
df.ix[[0, 2], 1:3]
df.loc[["a", "c"], "B":"C"]
df.ix[2:4, 1:3]
df.loc[2:4, 1:3]
df.iloc[2:4, 1:3]
df.iloc[["a", "c"], "B":"C"] | 통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/14.Pandas 고급 인덱싱.ipynb | kimkipyo/dss_git_kkp | mit |
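A minimal, self-contained illustration of the label-vs-position distinction (an ad-hoc example, not the notebook's df):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(4, 3),
                  columns=["A", "B", "C"], index=["a", "b", "c", "d"])

# loc is label-based: the end label "B" is *included*
by_label = df.loc[["a", "c"], "A":"B"]

# iloc is position-based: the end position 2 is *excluded*
by_position = df.iloc[[0, 2], 0:2]
```

Both select the same cells here, which is why mixing up the two slicing conventions is such a common source of off-by-one bugs.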
A Bioinformatics Library for Data Scientists, Students, and Developers
Jai Rideout and Evan Bolyen
Caporaso Lab, Northern Arizona University
What is scikit-bio?
A Python bioinformatics library for:
data scientists
students
developers
"The first step in developing a new genetic analysis algorithm is to decide h... | from skbio import DNA
seq1 = DNA.read('data/seqs.fasta', qual='data/seqs.qual')
seq2 = DNA.read('data/seqs.fastq', variant='illumina1.8')
seq1
seq1 == seq2 | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Format ambiguity | import skbio.io
skbio.io.sniff('data/mystery_file.gz') | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Heterogeneous sources
Read a gzip file from a URL: | from skbio import TreeNode
tree1 = skbio.io.read('http://localhost:8888/files/data/newick.gz',
into=TreeNode)
print(tree1.ascii_art()) | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Read a bz2 file from a file path: | import io
with io.open('data/newick.bz2', mode='rb') as open_filehandle:
tree2 = skbio.io.read(open_filehandle, into=TreeNode)
print(tree2.ascii_art()) | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Read a list of lines: | tree3 = skbio.io.read(['((a, b, c), d:15):0;'], into=TreeNode)
print(tree3.ascii_art()) | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Let's make a format!
YASF (Yet Another Sequence Format) | !cat data/yasf-seq.yml
import yaml
yasf = skbio.io.create_format('yasf')
@yasf.sniffer()
def yasf_sniffer(fh):
return fh.readline().rstrip() == "#YASF", {}
@yasf.reader(DNA)
def yasf_to_dna(fh):
seq = yaml.load(fh.read())
return DNA(seq['Sequence'], metadata={
'id': seq['ID'],
'location'... | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Convert YASF to FASTA | seq.write("data/not-yasf.fna", format='fasta')
!cat data/not-yasf.fna | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
We are in beta - should you even use our software?
YES!
API Lifecycle | from skbio.util._decorator import stable
@stable(as_of='0.4.0')
def add(a, b):
"""add two numbers.
Parameters
----------
a, b : int
Numbers to add.
Returns
-------
int
Sum of `a` and `b`.
"""
return a + b
help(add) | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
What is stable:
skbio.io
skbio.sequence
What is next:
skbio.alignment
skbio.tree
skbio.diversity
skbio.stats
<your awesome subpackage!>
Sequence API: putting the scikit in scikit-bio | seq = DNA("AacgtGTggA", lowercase='exon')
seq | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Made with numpy | seq.values | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
And a pinch of pandas | seq.positional_metadata | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Slicing with positional metadata: | seq[seq.positional_metadata['exon']] | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Application: building a taxonomy classifier | aligned_seqs_fp = 'data/gg_13_8_otus/rep_set_aligned/82_otus.fasta'
taxonomy_fp = 'data/gg_13_8_otus/taxonomy/82_otu_taxonomy.txt'
from skbio import DNA
fwd_primer = DNA("GTGCCAGCMGCCGCGGTAA",
metadata={'label':'fwd-primer'})
rev_primer = DNA("GGACTACHVGGGTWTCTAAT",
metadata={'label'... | scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb | biocore/scikit-bio-presentations | bsd-3-clause |
Below we define a function to generate random intervals with various properties, returning a dataframe of intervals. | def make_random_intervals(
n=1e5,
n_chroms=1,
max_coord=None,
max_length=10,
sort=False,
categorical_chroms=False,
):
n = int(n)
n_chroms = int(n_chroms)
max_coord = (n // n_chroms) if max_coord is None else int(max_coord)
max_length = int... | docs/guide-performance.ipynb | open2c/bioframe | mit |
Overlap
In this chapter we characterize the performance of the key function, bioframe.overlap. We show that the speed depends on:
- the number of intervals
- number of intersections (or density of intervals)
- type of overlap (inner, outer, left)
- dtype of chromosomes
vs number of intervals | timings = {}
for n in [1e2, 1e3, 1e4, 1e5, 1e6]:
df = make_random_intervals(n=n, n_chroms=1)
df2 = make_random_intervals(n=n, n_chroms=1)
timings[n] = %timeit -o -r 1 bioframe.overlap(df, df2)
plt.loglog(
list(timings.keys()),
list([r.average for r in timings.values()]),
'o-',
)
plt.xlabel('N i... | docs/guide-performance.ipynb | open2c/bioframe | mit |
vs total number of intersections
Note that not only the number of intervals, but also the density of intervals determines the performance of overlap. | timings = {}
n_intersections = {}
n = 1e4
for avg_interval_len in [3, 1e1, 3e1, 1e2, 3e2]:
df = make_random_intervals(n=n, n_chroms=1, max_length=avg_interval_len*2)
df2 = make_random_intervals(n=n, n_chroms=1, max_length=avg_interval_len*2)
timings[avg_interval_len] = %timeit -o -r 1 bioframe.overlap(df, d... | docs/guide-performance.ipynb | open2c/bioframe | mit |
vs number of chromosomes
If we consider a genome of the same length, divided into more chromosomes, the timing is relatively unaffected. | timings = {}
n_intersections = {}
n = 1e5
for n_chroms in [1, 3, 10, 30, 100, 300, 1000]:
df = make_random_intervals(n, n_chroms)
df2 = make_random_intervals(n, n_chroms)
timings[n_chroms] = %timeit -o -r 1 bioframe.overlap(df, df2)
n_intersections[n_chroms] = bioframe.overlap(df, df2).shape[0]
| docs/guide-performance.ipynb | open2c/bioframe | mit |
Note this test preserves the number of intersections, which is likely why performance remains similar over the considered range. | n_intersections
plt.loglog(
list(timings.keys()),
list([r.average for r in timings.values()]),
'o-',
)
plt.ylim([1e-1, 10])
plt.xlabel('# chromosomes')
plt.ylabel('time, seconds')
# plt.gca().set_aspect(1.0)
plt.grid() | docs/guide-performance.ipynb | open2c/bioframe | mit |
vs other parameters: join type, sorted or categorical inputs
Note that the defaults for overlap are how='left' and keep_order=True, and the returned dataframe is sorted after the overlaps have been ascertained. Also note that keep_order=True is only a valid argument for how='left', as the order is not well-defined for inner or out... | df = make_random_intervals()
df2 = make_random_intervals()
%timeit -r 1 bioframe.overlap(df, df2)
%timeit -r 1 bioframe.overlap(df, df2, how='left', keep_order=False)
df = make_random_intervals()
df2 = make_random_intervals()
%timeit -r 1 bioframe.overlap(df, df2, how='outer')
%timeit -r 1 bioframe.overlap(df, df2, h... | docs/guide-performance.ipynb | open2c/bioframe | mit |
Note below that detection of overlaps takes a relatively small fraction of the execution time. The majority of the time the user-facing function spends on formatting the output table. | df = make_random_intervals()
df2 = make_random_intervals()
%timeit -r 1 bioframe.overlap(df, df2)
%timeit -r 1 bioframe.overlap(df, df2, how='inner')
%timeit -r 1 bioframe.ops._overlap_intidxs(df, df2)
%timeit -r 1 bioframe.ops._overlap_intidxs(df, df2, how='inner') | docs/guide-performance.ipynb | open2c/bioframe | mit |
Note that sorting the inputs provides a moderate speedup, as does storing chromosomes as categoricals | print('Default inputs (outer/inner joins):')
df = make_random_intervals()
df2 = make_random_intervals()
%timeit -r 1 bioframe.overlap(df, df2)
%timeit -r 1 bioframe.overlap(df, df2, how='inner')
print('Sorted inputs (outer/inner joins):')
df_sorted = make_random_intervals(sort=True)
df2_sorted = make_random_intervals... | docs/guide-performance.ipynb | open2c/bioframe | mit |
Vs Pyranges
Default arguments
The core intersection function of PyRanges is faster, since a PyRanges object splits intervals by chromosome at the object construction stage | def df2pr(df):
return pyranges.PyRanges(
chromosomes=df.chrom,
starts=df.start,
ends=df.end,
)
timings_bf = {}
timings_pr = {}
for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]:
df = make_random_intervals(n=n, n_chroms=1)
df2 = make_random_intervals(n=n, n_chroms=1)
pr = df2pr(df)
pr2 = df2pr... | docs/guide-performance.ipynb | open2c/bioframe | mit |
With roundtrips to dataframes
Note that pyranges performs useful calculations at the stage of creating a PyRanges object. Thus a direct comparison for one-off operations on pandas DataFrames between bioframe and pyranges should take this step into account. This roundtrip is handled by pyranges_intersect_dfs below. | def pyranges_intersect_dfs(df, df2):
return df2pr(df).intersect(df2pr(df2)).as_df()
timings_bf = {}
timings_pr = {}
for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]:
df = make_random_intervals(n=n, n_chroms=1)
df2 = make_random_intervals(n=n, n_chroms=1)
timings_bf[n] = %timeit -o -r 1 bioframe.overlap(df, df2,... | docs/guide-performance.ipynb | open2c/bioframe | mit |
Memory usage | from memory_profiler import memory_usage
import time
def sleep_before_after(func, sleep_sec=0.5):
def _f(*args, **kwargs):
time.sleep(sleep_sec)
func(*args, **kwargs)
time.sleep(sleep_sec)
return _f
mem_usage_bf = {}
mem_usage_pr = {}
for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]:
df = ... | docs/guide-performance.ipynb | open2c/bioframe | mit |
The 2x memory consumption of bioframe is due to the fact that bioframe stores genomic coordinates as int64 by default, while pyranges uses int32: | print('Bioframe dtypes:')
display(df.dtypes)
print()
print('Pyranges dtypes:')
display(df2pr(df).dtypes)
### Combined performance figure.
fig, axs = plt.subplot_mosaic(
'AAA.BBB',
figsize=(9.0,4))
plt.sca(axs['A'])
plt.text(-0.25, 1.0, 'A', horizontalalignment='center',
verticalalign... | docs/guide-performance.ipynb | open2c/bioframe | mit |
Slicing | timings_slicing_bf = {}
timings_slicing_pr = {}
for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]:
df = make_random_intervals(n=n, n_chroms=1)
timings_slicing_bf[n] = %timeit -o -r 1 bioframe.select(df, ('chr1', n//2, n//4*3))
pr = df2pr(df)
timings_slicing_pr[n] = %timeit -o -r 1 pr['chr1', n//2:n//4*3]
... | docs/guide-performance.ipynb | open2c/bioframe | mit |
The normal distribution test: | x=df.sort_values("temperature",axis=0)
t=x["temperature"]
#print(np.mean(t))
plot_fit = stats.norm.pdf(t, np.mean(t), np.std(t))
plt.plot(t,plot_fit,'-o')
plt.hist(df.temperature, bins = 20 ,normed = True)
plt.ylabel('Frequency')
plt.xlabel('Temperature')
plt.show()
stats.normaltest(t) | Human_Temp.ipynb | SATHVIKRAJU/Inferential_Statistics | mit |
To check whether the distribution of temperatures is normal, it is always helpful to visualize it first. We plot a histogram of the values together with the fitted normal density. There are a few outliers on the right side of the distribution, but it still approximates a normal distribution well.
Perfo... | #Question 2:
no_of_samples=df["temperature"].count()
print(no_of_samples) | Human_Temp.ipynb | SATHVIKRAJU/Inferential_Statistics | mit |
We see that the sample size is n = 130, and as a general rule of thumb, in order for the CLT to hold
it is necessary that n > 30. Hence the sample size is comparatively large.
Question 3
HO: The true population mean is 98.6 degrees F (Null hypothesis)
H1: The true population mean is not 98.6 degrees F (Alternative hypothesis)
... | from statsmodels.stats.weightstats import ztest
from scipy.stats import ttest_ind
from scipy.stats import ttest_1samp
t_score=ttest_1samp(t,98.6)
t_score_abs=abs(t_score[0])
t_score_p_abs=abs(t_score[1])
z_score=ztest(t,value=98.6)
z_score_abs=abs(z_score[0])
p_value_abs=abs(z_score[1])
print("The z score is given by: ... | Human_Temp.ipynb | SATHVIKRAJU/Inferential_Statistics | mit |
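For reference, the one-sample z statistic computed above reduces to a one-liner; the numbers below are illustrative stand-ins for this dataset's sample mean and standard deviation, not exact values from the notebook:

```python
import math

def z_statistic(sample_mean, pop_mean, sample_std, n):
    # z = (x_bar - mu) / (s / sqrt(n))
    return (sample_mean - pop_mean) / (sample_std / math.sqrt(n))

# illustrative values: sample mean 98.25 F, std 0.73 F, n = 130 observations
z = z_statistic(98.25, 98.6, 0.73, 130)
```

A z this far from zero corresponds to a vanishingly small p-value, which is why the null hypothesis is rejected.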
Choosing a one-sample test vs a two-sample test:
The problem as defined has a single sample which we need to test against the population mean, so we use a one-sample test rather than a two-sample test.
T-test vs Z-test:
A t-test is chosen and best suited when n < 30; hence we can choose a z-test for this particular d... | #Question 4:
#For a 95% Confidence Interval the Confidence interval can be computed as:
variance_=np.std(t)/np.sqrt(no_of_samples)
mean_=np.mean(t)
confidence_interval = stats.norm.interval(0.95, loc=mean_, scale=variance_)
print("The Confidence Interval Lies between %F and %F"%(confidence_interval[0],confidence_interv... | Human_Temp.ipynb | SATHVIKRAJU/Inferential_Statistics | mit |
Any temperatures outside this range should be considered abnormal.
Question 5:
Here we use the t-test statistic because we want to compare the means of the two groups involved, the male group and the female group. | temp_male=df.temperature[df.gender=='M']
female_temp=df.temperature[df.gender=='F']
ttest_ind(temp_male,female_temp) | Human_Temp.ipynb | SATHVIKRAJU/Inferential_Statistics | mit |
2 Basic usage | import requests
cs_url = 'http://httpbin.org'
r = requests.get("%s/%s" % (cs_url, 'get'))
r = requests.post("%s/%s" % (cs_url, 'post'))
r = requests.put("%s/%s" % (cs_url, 'put'))
r = requests.delete("%s/%s" % (cs_url, 'delete'))
r = requests.patch("%s/%s" % (cs_url, 'patch'))
r = requests.options("%s/%s" % (cs_url, 'g... | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
3 Passing URL parameters
https://encrypted.google.com/search?q=hello
<protocol>://<domain>/<endpoint>?<key1>=<value1>&<key2>=<value2>
The HTTP methods provided by the requests library all accept a parameter named params. This parameter takes a Python dictionary and automatically formats it into the form above. | import requests
cs_url = 'https://www.so.com/s'
param = {'ie':'utf-8','q':'query'}
r = requests.get(cs_url,params = param)
print r.url | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
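The same query-string encoding that requests performs can be reproduced with the standard library, which is handy for checking what params will produce (an illustrative sketch, not the requests implementation):

```python
from urllib.parse import urlencode

# the same dictionary as in the requests example above
param = {'ie': 'utf-8', 'q': 'query'}
query = urlencode(param)            # percent-encodes and joins with '&'
url = 'https://www.so.com/s?' + query
```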
4 Setting a timeout
Timeouts in requests are specified in seconds. For example, adding the parameter timeout = 5 to a request sets the timeout to 5 seconds | import requests
cs_url = 'https://www.zhihu.com'
r = requests.get(cs_url,timeout=100) | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
5 Request headers | import requests
cs_url = 'http://httpbin.org/get'
r = requests.get (cs_url)
print r.content | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
Usually we care most about User-Agent and Accept-Encoding. If we want to modify these two items in the HTTP headers, we only need to pass a suitable dictionary to the headers parameter. | import requests
my_headers = {'User-Agent' : 'From Liam Huang', 'Accept-Encoding' : 'gzip'}
cs_url = 'http://httpbin.org/get'
r = requests.get (cs_url, headers = my_headers)
print r.content | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
6 Response headers | import requests
cs_url = 'http://httpbin.org/get'
r = requests.get (cs_url)
print r.headers | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
7 Response content
For a long time the internet has had limited bandwidth, so data transmitted over the network is very often compressed. When the response to a request sent via requests is compressed with gzip or deflate, requests automatically decompresses it for us. We can use Response.content to get the response content as bytes. | import requests
cs_url = 'https://www.zhihu.com'
r = requests.get (cs_url)
if r.status_code == requests.codes.ok:
print r.content | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
If the response content is not text but binary data (such as an image), it needs to be decoded accordingly | import requests
from PIL import Image
from StringIO import StringIO
cs_url = 'http://liam0205.me/uploads/avatar/avatar-2.jpg'
r = requests.get (cs_url)
if r.status_code == requests.codes.ok:
Image.open(StringIO(r.content)).show() | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
Decoding in text mode | import requests
cs_url = 'https://www.zhihu.com'
r = requests.get (cs_url,auth=('gaofengcumt@126.com','gaofengcumt'))
if r.status_code == requests.codes.ok:
print(r.text)
else:
print('bad request') | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
8 Deserializing JSON data | import requests
cs_url = 'http://ip.taobao.com/service/getIpInfo.php'
my_param = {'ip':'8.8.8.8'}
r = requests.get(cs_url, params = my_param)
print(r.json()['data']['country']) | python-statatics-tutorial/advance-theme/Request.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
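Under the hood, `r.json()` simply parses the response body with a JSON decoder. A small offline sketch using a hypothetical payload shaped like an IP-lookup response (the field names here are assumptions for illustration, not the real API contract):

```python
import json

# hypothetical JSON body resembling an IP-lookup API response
body = '{"code": 0, "data": {"ip": "8.8.8.8", "country": "United States"}}'
data = json.loads(body)  # this is essentially what r.json() does with the body
print(data['data']['country'])
```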
Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll wan... | from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100] | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to en... | # Create your dictionary that maps vocab words to integers here
vocab_to_int = {word: idx+1 for (idx, word) in enumerate(set(words))}
print("Vocab to int")
print("len words: ", len(set(words)))
print("len vocab: ", len(vocab_to_int))
print("Sample: ", vocab_to_int['in'])
# Convert the reviews to i... | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
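On a toy corpus, the same dictionary construction looks like this. A minimal sketch; note that iterating a `set` gives an arbitrary order, so `sorted` is used here to make the mapping reproducible:

```python
words = "the movie was great the plot was great".split()

# map each unique word to an integer, reserving 0 for padding
vocab_to_int = {word: idx + 1 for idx, word in enumerate(sorted(set(words)))}

# convert a review to its integer encoding
review_ints = [vocab_to_int[word] for word in "the movie was great".split()]
print(review_ints)
```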
Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively. | # Convert labels to 1s and 0s for 'positive' and 'negative'
labels = np.array([0 if a == "negative" else 1 for a in labels_.split()])
print(len(labels))
print(labels[:100])
print(labels_[:100]) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
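The conversion is a one-line comprehension; a self-contained sketch on toy labels:

```python
# encode "negative" as 0 and everything else ("positive") as 1
labels_raw = "positive negative negative positive".split()
labels = [0 if label == "negative" else 1 for label in labels_raw]
print(labels)
```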
If you built labels correctly, you should see the next output. | from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens))) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove... | # Filter out that review with 0 length
# for i, review in enumerate(reviews_ints):
# if len(review) == 0:
# np.delete(reviews_ints, i)
# break
reviews_ints = [r for r in reviews_ints if len(r) > 0]
print("Reviews ints len: ", len(reviews_ints))
print("Labels len: ", len(labels)) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'],... | seq_len = 200
features = []
for review in reviews_ints:
cut = review[:seq_len]
feature = ([0] * (seq_len - len(cut))) + cut
features.append(feature)
features = np.array(features) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
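The truncate-then-left-pad step, shown on one short and one long toy review (a pure-Python sketch of the same logic):

```python
seq_len = 5

def pad_or_truncate(review_ints, seq_len):
    # keep at most seq_len tokens, then left-pad with zeros
    cut = review_ints[:seq_len]
    return [0] * (seq_len - len(cut)) + cut

short = pad_or_truncate([7, 8, 9], seq_len)        # padded on the left
long_ = pad_or_truncate(list(range(1, 9)), seq_len) # truncated to seq_len
print(short, long_)
```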
If you build features correctly, it should look like that cell output below. | features[:10,:100] | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fractio... | from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
train_x = x_train
train_y = y_train
val_x = x_test[:len(x_test)//2]
val_y = y_test[:len(y_test)//2]
test_x = x_test[len(x_test)//2:]
test_y = y_test[len(y_test)//2:]
print("\t\t... | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
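The split can also be done directly with a `split_frac`, as the exercise describes, without depending on scikit-learn. A sketch on toy data (shapes only; no shuffling is performed here):

```python
import numpy as np

features = np.arange(40).reshape(20, 2)  # 20 toy examples, 2 features each
labels = np.arange(20)

split_frac = 0.8
split_idx = int(len(features) * split_frac)

# first split_frac of the data goes to training
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = labels[:split_idx], labels[split_idx:]

# split the remainder evenly into validation and test sets
half = len(remaining_x) // 2
val_x, test_x = remaining_x[:half], remaining_x[half:]
val_y, test_y = remaining_y[:half], remaining_y[half:]

print(train_x.shape, val_x.shape, test_x.shape)
```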
With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units i... | lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001 | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. l... | n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs")
labels_ = tf.placeholder(tf.int32, [None, None], name... | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that... | # Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.truncated_normal((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic L... | with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop]*l... | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dyna... | with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_. | with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. | with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size]. | def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size] | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
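A quick check of the batching behavior on toy data (the function is reproduced here so the sketch is self-contained):

```python
def get_batches(x, y, batch_size=100):
    # drop the tail so only full batches are yielded
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

xs, ys = list(range(10)), list(range(10, 20))
batches = list(get_batches(xs, ys, batch_size=4))
print(len(batches), batches[0])
```

With 10 examples and a batch size of 4, only two full batches are produced; the trailing two examples are discarded.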
Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. | epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch... | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
Testing | test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:,... | sentiment-rnn/Sentiment_RNN.ipynb | msanterre/deep_learning | mit |
2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB)... | # Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset() | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representin... | # Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.") | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
... | train_set_y.shape
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Heig... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape ... | # Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
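The `reshape(m, -1).T` trick on a tiny random array, to confirm the resulting shape and that each column is one flattened image (a sketch with toy dimensions):

```python
import numpy as np

m = 4                                    # number of toy "images"
X = np.random.randn(m, 8, 8, 3)          # (m, num_px, num_px, 3)
X_flat = X.reshape(X.shape[0], -1).T     # (num_px * num_px * 3, m)
print(X_flat.shape)
```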
Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape*... | train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255. | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
<font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Archi... | # GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1.0 / (1.0 + np.exp(-z))
### END CODE HERE ###
return s
print ("si... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
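The vectorized sigmoid, runnable on its own:

```python
import numpy as np

def sigmoid(z):
    # plain formulation; fine for the value ranges used in this exercise
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([0, 2])))
```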
Expected Output:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up ... | # GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
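A self-contained version of the zero initialization, checked against the expected shapes:

```python
import numpy as np

def initialize_with_zeros(dim):
    w = np.zeros((dim, 1))  # weight column vector of shape (dim, 1)
    b = 0                   # scalar bias
    return w, b

w, b = initialize_with_zeros(2)
print(w, b)
```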
Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation... | # GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
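A self-contained NumPy version of the forward/backward pass, run on the assignment's check values (the w, b, X, Y below are the standard test inputs implied by the expected output that follows):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)                              # forward: activations
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m  # cross-entropy cost
    dw = np.dot(X, (A - Y).T) / m                                # backward: gradient wrt w
    db = np.sum(A - Y) / m                                       # backward: gradient wrt b
    return {"dw": dw, "db": db}, cost

w, b = np.array([[1.], [2.]]), 2.
X = np.array([[1., 2.], [3., 4.]])
Y = np.array([[1, 0]])
grads, cost = propagate(w, b, X, Y)
print(grads["dw"], grads["db"], cost)
```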
Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99993216]
[ 1.99980262]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.499935230625 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 6.000064773192205</td>
</tr>
</table>... | # GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shap... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
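The update rule inside the optimization loop is just w := w - learning_rate * dw. The mechanism in isolation, minimizing a 1-D quadratic whose gradient is known in closed form (a toy sketch, not the assignment's optimize itself):

```python
# minimize f(w) = (w - 3)**2 by gradient descent; its gradient is 2*(w - 3)
w = 0.0
learning_rate = 0.1
for _ in range(100):
    dw = 2 * (w - 3)
    w = w - learning_rate * dw  # the same update rule optimize() applies
print(w)
```

Each step multiplies the distance to the minimum by (1 - 2 * learning_rate), so w converges geometrically toward 3.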
Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.1124579 ]
[ 0.23106775]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.55930492484 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.90158428]
[ 1.76250842]] </td>
</tr>
<tr>
... | # GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of example... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
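The core of predict is thresholding the sigmoid activations at 0.5; a sketch with hypothetical activation values:

```python
import numpy as np

A = np.array([[0.3, 0.7, 0.51, 0.49]])  # hypothetical sigmoid activations
Y_prediction = (A > 0.5).astype(int)    # 1 if probability > 0.5, else 0
print(Y_prediction)
```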
Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively t... | # GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of sh... | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
Run the following cell to train your model. | d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |
Expected Output:
<table style="width:40%">
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your mode... | # Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.") | course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb | liufuyang/deep_learning_tutorial | mit |