Three-point centered-difference formula for second derivative $f''(x) = \frac{f(x - h) - 2f(x) + f(x + h)}{h^2} - \frac{h^2}{12}f^{(iv)}(c)$ for some $c$ between $x - h$ and $x + h$ Rounding error Example Approximate the derivative of $f(x) = e^x$ at $x = 0$
# Parameters f = lambda x : math.exp(x) real_value = 1 h_msg = "$10^{-%d}$" twp_deri_x1 = lambda x, h : ( f(x + h) - f(x) ) / h thp_deri_x1 = lambda x, h : ( f(x + h) - f(x - h) ) / (2 * h) data = [ ["h", "$f'(x) \\approx \\frac{e^{x+h} - e^x}{h}$", "error", "$f'(x) \\approx \\frac{e^{x+h} - e^{x-h}}{2h}$", "error"], ] for i in range(1,10): h = pow(10, -i) twp_deri_x1_value = twp_deri_x1(0, h) thp_deri_x1_value = thp_deri_x1(0, h) row = ["", "", "", "", ""] row[0] = h_msg %i row[1] = '%.14f' %twp_deri_x1_value row[2] = '%.14f' %abs(twp_deri_x1_value - real_value) row[3] = '%.14f' %thp_deri_x1_value row[4] = '%.14f' %abs(thp_deri_x1_value - real_value) data.append(row) table = ply_ff.create_table(data) plotly.offline.iplot(table, show_link=False)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
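The three-point second-derivative formula quoted above is not exercised by the code; below is a minimal sketch (an addition, not part of the original notebook) that applies it to the same test function $f(x) = e^x$ at $x = 0$, where the exact value of $f''(0)$ is 1. As $h$ shrinks, the $O(h^2)$ truncation error decreases until rounding error in the numerator takes over, which is the point of the rounding-error discussion.

```python
import math

f = lambda x: math.exp(x)
exact = 1.0  # f''(0) = e^0 = 1

# Three-point centered difference for the second derivative
second_deriv = lambda x, h: (f(x - h) - 2 * f(x) + f(x + h)) / h**2

for i in range(1, 10):
    h = 10.0 ** (-i)
    approx = second_deriv(0, h)
    print('h = 1e-%d : %.14f  error = %.14f' % (i, approx, abs(approx - exact)))
```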
Extrapolation for order n formula $ Q \approx \frac{2^nF(h/2) - F(h)}{2^n - 1} $
sym.init_printing(use_latex=True) x = sym.Symbol('x') dx = sym.diff(sym.exp(sym.sin(x)), x) Math('Derivative : %s' %sym.latex(dx) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
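The extrapolation formula is not applied in the cell above; here is a minimal sketch (an illustration, with names chosen for this example only) that applies it to the order $n = 2$ three-point first-derivative formula for $f(x) = e^x$ at $x = 0$, whose exact derivative is 1.

```python
import math

f = lambda x: math.exp(x)

# Order n = 2 three-point centered-difference approximation of f'(0)
F = lambda h: (f(h) - f(-h)) / (2 * h)

n, h = 2, 0.1
Q = (2**n * F(h / 2) - F(h)) / (2**n - 1)  # extrapolated estimate

print('F(h)   error: %e' % abs(F(h) - 1))
print('F(h/2) error: %e' % abs(F(h / 2) - 1))
print('Q      error: %e' % abs(Q - 1))
```

The extrapolated value $Q$ is accurate to roughly $O(h^4)$, noticeably better than either $F(h)$ or $F(h/2)$.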
5.2 Newton-Cotes Formulas For Numerical Integration Trapezoid Rule $\int_{x_0}^{x_1} f(x) dx = \frac{h}{2}(y_0 + y_1) - \frac{h^3}{12}f''(c)$ where $h = x_1 - x_0$ and $c$ is between $x_0$ and $x_1$ Simpson's Rule $\int_{x_0}^{x_2} f(x) dx = \frac{h}{3}(y_0 + 4y_1 + y_2) - \frac{h^5}{90}f^{(iv)}(c)$ where $h = x_2 - x_1 = x_1 - x_0$ and $c$ is between $x_0$ and $x_2$ Example Apply the Trapezoid Rule and Simpson's Rule to approximate $\int_{1}^{2} \ln(x) dx$ and find an upper bound for the error in your approximations
# Apply Trapezoid Rule trapz = scipy.integrate.trapz([np.log(1), np.log(2)], [1, 2]) # Evaluate the error term of Trapezoid Rule sym_x = sym.Symbol('x') expr = sym.diff(sym.log(sym_x), sym_x, 2) trapz_err = abs(expr.subs(sym_x, 1).evalf() / 12) # Print out results print('Trapezoid rule : %f and upper bound error : %f' %(trapz, trapz_err) ) # Apply Simpson's Rule area = scipy.integrate.simps([np.log(1), np.log(1.5), np.log(2)], [1, 1.5, 2]) # Evaluate the error term sym_x = sym.Symbol('x') expr = sym.diff(sym.log(sym_x), sym_x, 4) simps_err = abs( pow(0.5, 5) / 90 * expr.subs(sym_x, 1).evalf() ) # Print out results print('Simpson\'s rule : %f and upper bound error : %f' %(area, simps_err) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
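As a sanity check on the scipy results above, this minimal sketch evaluates the Trapezoid and Simpson formulas directly for $\int_{1}^{2} \ln x \, dx$, whose exact value is $2\ln 2 - 1$; the actual errors stay below the error bounds computed above.

```python
import numpy as np

exact = 2 * np.log(2) - 1  # exact value of the integral of ln(x) on [1, 2]

# Trapezoid Rule: h/2 * (y0 + y1) with h = 1
trap = 1.0 / 2 * (np.log(1) + np.log(2))

# Simpson's Rule: h/3 * (y0 + 4*y1 + y2) with h = 0.5
simp = 0.5 / 3 * (np.log(1) + 4 * np.log(1.5) + np.log(2))

print('Trapezoid : %f, actual error = %f' % (trap, abs(trap - exact)))
print("Simpson's : %f, actual error = %f" % (simp, abs(simp - exact)))
```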
Composite Trapezoid Rule $\int_{a}^{b} f(x) dx = \frac{h}{2} \left ( y_0 + y_m + 2\sum_{i=1}^{m-1}y_i \right ) - \frac{(b-a)h^2}{12}f''(c)$ where $h = (b - a) / m $ and $c$ is between $a$ and $b$ Composite Simpson's Rule $ \int_{a}^{b}f(x)dx = \frac{h}{3}\left [ y_0 + y_{2m} + 4\sum_{i=1}^{m}y_{2i-1} + 2\sum_{i=1}^{m - 1}y_{2i} \right ] - \frac{(b-a)h^4}{180}f^{(iv)}(c) $ where $c$ is between $a$ and $b$ Example Carry out four-panel approximations of $\int_{1}^{2} \ln{x} dx$ using the composite Trapezoid Rule and composite Simpson's Rule
# Apply composite Trapezoid Rule x = np.linspace(1, 2, 5) y = np.log(x) trapz = scipy.integrate.trapz(y, x) # Error term sym_x = sym.Symbol('x') expr = sym.diff(sym.log(sym_x), sym_x, 2) trapz_err = abs( (2 - 1) * pow(0.25, 2) / 12 * expr.subs(sym_x, 1).evalf() ) print('Trapezoid Rule : %f, error = %f' %(trapz, trapz_err) ) # Apply composite Simpson's Rule x = np.linspace(1, 2, 9) y = np.log(x) area = scipy.integrate.simps(y, x) # Error term sym_x = sym.Symbol('x') expr = sym.diff(sym.log(sym_x), sym_x, 4) simps_err = abs( (2 - 1) * pow(0.125, 4) / 180 * expr.subs(sym_x, 1).evalf() ) print('Simpson\'s Rule : %f, error = %f' %(area, simps_err) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
Midpoint Rule $ \int_{x_0}^{x_1} f(x)dx = hf(\omega) + \frac{h^3}{24}f''(c) $ where $ h = (x_1 - x_0) $, $\omega$ is the midpoint $ x_0 + h / 2 $, and $c$ is between $x_0$ and $x_1$ Composite Midpoint Rule $ \int_{a}^{b} f(x) dx = h \sum_{i=1}^{m}f(\omega_{i}) + \frac{(b - a)h^2}{24} f''(c) $ where $h = (b - a) / m$ and $c$ is between $a$ and $b$. The $\omega_{i}$ are the midpoints of the $m$ equal subintervals of $[a,b]$ Example Approximate $\int_{0}^{1} \frac{\sin x}{x} dx$ by using the composite Midpoint Rule with $m = 10$ panels
# Parameters m = 10 h = (1 - 0) / m f = lambda x : np.sin(x) / x mids = np.arange(0 + h/2, 1, h) # Apply composite midpoint rule area = h * np.sum(f(mids)) # Error term sym_x = sym.Symbol('x') expr = sym.diff(sym.sin(sym_x) / sym_x, sym_x, 2) mid_err = abs( (1 - 0) * pow(h, 2) / 24 * expr.subs(sym_x, 1).evalf() ) # Print out print('Composite Midpoint Rule : %.8f, error = %.8f' %(area, mid_err) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
5.3 Romberg Integration
def romberg(f, a, b, step): R = np.zeros(step * step).reshape(step, step) R[0][0] = (b - a) * (f(a) + f(b)) / 2 for j in range(1, step): h = (b - a) / pow(2, j) summ = 0 for i in range(1, pow(2, j - 1) + 1): summ += h * f(a + (2 * i - 1) * h) R[j][0] = 0.5 * R[j - 1][0] + summ for k in range(1, j + 1): R[j][k] = ( pow(4, k) * R[j][k - 1] - R[j - 1][k - 1] ) / ( pow(4, k) - 1 ) return R[step - 1][step - 1]
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
Example Apply Romberg Integration to approximate $\int_{1}^{2} \ln{x}dx$
f = lambda x : np.log(x) result = romberg(f, 1, 2, 4) print('Romberg Integration : %f' %(result) ) f = lambda x : np.log(x) result = scipy.integrate.romberg(f, 1, 2, show=True) print('Romberg Integration : %f' %(result) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
5.4 Adaptive Quadrature
''' Use Trapezoid Rule ''' def adaptive_quadrature(f, a, b, tol): return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0) def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep): c = (a + b) / 2 S = lambda x, y : (y - x) * (f(x) + f(y)) / 2 if abs( S(a, b) - S(a, c) - S(c, b) ) < 3 * tol * (b - a) / (orig_b - orig_a) or deep > 20 : return S(a, c) + S(c, b) else: return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1) ''' Use Simpson's Rule ''' def adaptive_quadrature(f, a, b, tol): return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0) def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep): c = (a + b) / 2 S = lambda x, y : (y - x) * ( f(x) + 4 * f((x + y) / 2) + f(y) ) / 6 if abs( S(a, b) - S(a, c) - S(c, b) ) < 15 * tol or deep > 20 : return S(a, c) + S(c, b) else: return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
Example Use Adaptive Quadrature to approximate the integral $ \int_{-1}^{1} (1 + \sin{e^{3x}}) dx $
f = lambda x : 1 + np.sin(np.exp(3 * x)) val = adaptive_quadrature(f, -1, 1, tol=1e-12) print(val)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
5.5 Gaussian Quadrature
poly = scipy.special.legendre(2) # Find roots of polynomials comp = scipy.linalg.companion(poly) roots = scipy.linalg.eig(comp)[0]
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
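The cell above only computes the quadrature nodes (the roots of the Legendre polynomial). The sketch below, an added illustration rather than part of the original notebook, recovers the Gauss-Legendre weights from those roots via the standard relation $w_i = \frac{2}{(1 - x_i^2)[P_n'(x_i)]^2}$ and compares them with scipy.special.p_roots.

```python
import numpy as np
import scipy.special

n = 2
poly = scipy.special.legendre(n)   # P_n(x) as a poly1d
dpoly = np.polyder(poly)           # P_n'(x)
nodes = np.roots(poly)             # quadrature nodes

# Gauss-Legendre weights computed from the nodes
weights = 2.0 / ((1.0 - nodes**2) * dpoly(nodes)**2)
print(nodes, weights)

# Cross-check against scipy's nodes and weights
print(scipy.special.p_roots(n))
```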
Example Approximate $\int_{-1}^{1} e^{-\frac{x^2}{2}}dx$ using Gaussian Quadrature
f = lambda x : np.exp(-np.power(x, 2) / 2) quad = scipy.integrate.quadrature(f, -1, 1) print(quad[0]) # Parameters a = -1 b = 1 deg = 3 f = lambda x : np.exp( -np.power(x, 2) / 2 ) x, w = scipy.special.p_roots(deg) # Or use numpy.polynomial.legendre.leggauss quad = np.sum(w * f(x)) print(quad)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
Example Approximate the integral $\int_{1}^{2} \ln{x} dx$ using Gaussian Quadrature
# Parameters a = 1 b = 2 deg = 4 f = lambda t : np.log( ((b - a) * t + b + a) / 2) * (b - a) / 2 x, w = scipy.special.p_roots(deg) np.sum(w * f(x))
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
Second step: initialize the COMPSs runtime. The parameters indicate whether the execution will generate a task graph, a trace file, a monitoring interval, and debug information. The taskCount parameter is a workaround for the dot generation of the legend.
ipycompss.start(graph=True, trace=True, debug=True, project_xml='../project.xml', resources_xml='../resources.xml', mpi_worker=True)
tests/sources/python/9_jupyter_notebook/src/simple_mpi.ipynb
mF2C/COMPSs
apache-2.0
I'd like to make this figure better: easier to tell which rows people are on. Save Notebook
%%bash jupyter nbconvert --to slides Exploring_Data.ipynb && mv Exploring_Data.slides.html ../notebook_slides/Exploring_Data_v2.slides.html jupyter nbconvert --to html Exploring_Data.ipynb && mv Exploring_Data.html ../notebook_htmls/Exploring_Data_v2.html cp Exploring_Data.ipynb ../notebook_versions/Exploring_Data_v2.ipynb # push to s3 import sys import os sys.path.append(os.getcwd()+'/../') from src import s3_data_management s3_data_management.push_results_to_s3('Exploring_Data_v1.html','../notebook_htmls/Exploring_Data_v1.html') s3_data_management.push_results_to_s3('Exploring_Data_v1.slides.html','../notebook_slides/Exploring_Data_v1.slides.html')
notebook_versions/Exploring_Data_v2.ipynb
walkon302/CDIPS_Recommender
apache-2.0
2. Read in the hanford.csv file
cd C:\Users\Harsha Devulapalli\Desktop\algorithms\class6 df=pd.read_csv("data/hanford.csv")
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
ledeprogram/algorithms
gpl-3.0
6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
df.plot(kind="scatter",x="Exposure",y="Mortality") plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red") r = df.corr()['Exposure']['Mortality'] r*r
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
ledeprogram/algorithms
gpl-3.0
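The slope and intercept used in the cell above come from an earlier cell of the notebook that is not shown here. The sketch below shows one way they could have been obtained with scipy.stats.linregress; the column names follow the plotting code above, but treat this as a reconstruction rather than the notebook's actual code.

```python
import pandas as pd
from scipy import stats

# Hypothetical reconstruction of the earlier fitting cell
df = pd.read_csv("data/hanford.csv")
slope, intercept, r_value, p_value, std_err = stats.linregress(df["Exposure"], df["Mortality"])
print(slope, intercept)
```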
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
def predictor(exposure): return intercept+float(exposure)*slope predictor(10)
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
ledeprogram/algorithms
gpl-3.0
Tokenization The fewer the tasks, the faster the execution. For example, specify tokenization only; fine-grained is the default:
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Run coarse-grained tokenization:
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Run fine-grained and coarse-grained tokenization at the same time:
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok*')
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
coarse is coarse-grained segmentation and fine is fine-grained segmentation. Note that the input unit of the native API is restricted to sentences, so a multilingual sentence-splitting model or a rule-based sentence-splitting function must be used to split the text into sentences first. The RESTful API accepts full documents, sentences, and already-tokenized sentences. Apart from that, the semantic design of the RESTful and native APIs is exactly the same, so users can switch between them seamlessly. Custom dictionaries The custom dictionary is a member variable of the tokenization task; to operate on the custom dictionary, first obtain the tokenization task, taking the fine-grained standard as an example:
tok = HanLP['tok/fine'] tok
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
The custom dictionaries are member variables of the tokenization task:
tok.dict_combine, tok.dict_force
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
HanLP supports custom dictionaries with two priority levels, combine and force, to cover different scenarios. Without any dictionary attached:
tok.dict_force = tok.dict_combine = None HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Force mode Force mode preferentially outputs custom entries found by forward longest matching (use with caution; see Chapter 2 of 《自然语言处理入门》):
tok.dict_force = {'和服', '服务项目'} HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Contrary to popular intuition, giving the dictionary the highest priority is not necessarily a good thing: it can easily match custom words that should not be segmented out, causing ambiguity. The longer a custom word is, the less likely it is to cause ambiguity. This motivates extending force mode into a forced-correction feature. Forced correction works on a similar principle, but replaces each matched custom entry with the corresponding segmentation result:
tok.dict_force = {'和服务': ['和', '服务']} HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Combine mode The priority of combine mode is lower than that of the statistical model: dict_combine performs longest matching on the statistical model's segmentation output and merges the matched entries. In general, this mode is recommended.
tok.dict_force = None tok.dict_combine = {'和服', '服务项目'} HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Understanding this requires some algorithmic background; beginners can refer to 《自然语言处理入门》. Words with spaces Words containing spaces, tabs, or other characters that the Transformer tokenizer strips need to be provided in tuple form:
tok.dict_combine = {('iPad', 'Pro'), '2个空格'} HanLP("如何评价iPad Pro ?iPad Pro有2个空格", tasks='tok/fine')['tok/fine']
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Curious users, read on: a plain string in the tuple dictionary is actually equivalent to all possible ways of splitting that string:
dict(tok.dict_combine.config["dictionary"]).keys()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Word positions HanLP supports outputting the original position of each word in the text, which is useful for scenarios such as search engines. During lexical analysis, non-morpheme characters (spaces, newlines, tabs, etc.) are removed, so extra position information is needed to locate each word:
tok.config.output_spans = True sent = '2021 年\nHanLPv2.1 为生产环境带来次世代最先进的多语种NLP技术。' word_offsets = HanLP(sent, tasks='tok/fine')['tok/fine'] print(word_offsets)
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
The return format is a triple (word, start offset, end offset), with offsets measured at the character level.
for word, begin, end in word_offsets: assert word == sent[begin:end]
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Reading Dataset The idea here is to read the files in the dataset to extract the data for training and testing along with the corresponding (activity) labels. The outcome is a set of numpy arrays for each set.
# Paths and filenames DATASET_PATH = "../dataset/UCI HAR/UCI HAR Dataset" TEST_RELPATH = "/test" TRAIN_RELPATH = "/train" VARS_FILENAMES = [ 'body_acc_x_', 'body_acc_y_', 'body_acc_z_', 'body_gyro_x_', 'body_gyro_y_', 'body_gyro_z_', 'total_acc_x_', 'total_acc_y_', 'total_acc_z_'] LABELS_DEF_FILE = DATASET_PATH + "/activity_labels.txt" # Make a list of files for training trainFiles = [DATASET_PATH + TRAIN_RELPATH + '/Inertial Signals/' + var_filename + 'train.txt' for var_filename in VARS_FILENAMES] # Make a tensor with the data for training dataTrain = ds.get_data(trainFiles, print_on = True) # Show dataTrain dimensions print dataTrain.shape # Make a list of files for testing testFiles = [DATASET_PATH + TEST_RELPATH + '/Inertial Signals/' + var_filename + 'test.txt' for var_filename in VARS_FILENAMES] # Make a tensor with the data for testing dataTest = ds.get_data(testFiles, print_on = True) # Show dataTest dimensions print dataTest.shape # Sensor 0 : Sample 1 (128 samples) (Training set) fig = plt.figure() plt.figure(figsize=(16,8)) dataTrain[0,1,:] plt.plot(dataTrain[0,1,:]) plt.show() # Sensor 1 : Sample 2 (128 samples) (Test set) fig = plt.figure() plt.figure(figsize=(16,8)) dataTest[1,2,:] plt.plot(dataTest[1,2,:]) plt.show() # Get the label values for training samples trainLabelsFile = DATASET_PATH + TRAIN_RELPATH + '/' + 'y_train.txt' labelsTrain = ds.get_labels(trainLabelsFile, print_on = True) print labelsTrain.shape #show dimension # Get the label values for testing samples testLabelsFile = DATASET_PATH + TEST_RELPATH + '/' + 'y_test.txt' labelsTest = ds.get_labels(testLabelsFile, print_on = True) print labelsTest.shape #show dimension # convert outputs to one-hot code labelsTrainEncoded = ds.encode_onehot(labelsTrain) labelsTestEncoded = ds.encode_onehot(labelsTest) # Make a dictionary label_dict = ds.make_labels_dictionary(LABELS_DEF_FILE) print label_dict print "\n" sel = 300 print "label {} ({}) -> {}".format(labelsTrain[sel], label_dict[labelsTrain[sel]], labelsTrainEncoded[sel])
sources/TempScript.ipynb
francof2a/APC
gpl-3.0
Filtered plots
activityToPlot = 2.0 fig = plt.figure() plt.figure(figsize=(16,8)) plt.title(label_dict[activityToPlot]) for idx, activity in enumerate(labelsTrain): if activityToPlot == activity: plt.plot(dataTrain[4,idx,:]) plt.show()
sources/TempScript.ipynb
francof2a/APC
gpl-3.0
RNN first tries
numLayers = 50; lstm_cell = tf.contrib.rnn.BasicRNNCell(numLayers) lstm_cell
sources/TempScript.ipynb
francof2a/APC
gpl-3.0
We will generate a test set of 50 "bombs", and each "bomb" will be run through a 20-step measurement circuit. We set up the program as explained in previous examples.
# Use local qasm simulator backend = Aer.get_backend('qasm_simulator') # Use the IBMQ Quantum Experience # backend = least_busy(IBMQ.backends()) N = 50 # Number of bombs steps = 20 # Number of steps for the algorithm, limited by maximum circuit depth eps = np.pi / steps # Algorithm parameter, small # Prototype circuit for bomb generation q_gen = QuantumRegister(1, name='q_gen') c_gen = ClassicalRegister(1, name='c_gen') IFM_gen = QuantumCircuit(q_gen, c_gen, name='IFM_gen') # Prototype circuit for bomb measurement q = QuantumRegister(2, name='q') c = ClassicalRegister(steps+1, name='c') IFM_meas = QuantumCircuit(q, c, name='IFM_meas')
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Generating a random bomb is achieved by simply applying a Hadamard gate to $q_1$, which starts in $|0\rangle$, and then measuring. This randomly gives a $0$ or $1$, each with equal probability. We run one such circuit for each bomb, since circuits are currently limited to a single measurement.
# Quantum circuits to generate bombs qc = [] circuits = ["IFM_gen"+str(i) for i in range(N)] # NB: Can't have more than one measurement per circuit for circuit in circuits: IFM = QuantumCircuit(q_gen, c_gen, name=circuit) IFM.h(q_gen[0]) #Turn the qubit into |0> + |1> IFM.measure(q_gen[0], c_gen[0]) qc.append(IFM) _ = [i.qasm() for i in qc] # Suppress the output
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Note that, since we want to measure several discrete instances, we do not want to average over multiple shots. Averaging would yield partial bombs, but we assume bombs are discretely either live or dead.
result = execute(qc, backend=backend, shots=1).result() # Note that we only want one shot bombs = [] for circuit in qc: for key in result.get_counts(circuit): # Hack, there should only be one key, since there was only one shot bombs.append(int(key)) #print(', '.join(('Live' if bomb else 'Dud' for bomb in bombs))) # Uncomment to print out "truth" of bombs plot_histogram(Counter(('Live' if bomb else 'Dud' for bomb in bombs))) #Plotting bomb generation results
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Testing the Bombs Here we implement the algorithm described above to measure the bombs. As with the generation of the bombs, it is currently impossible to take several measurements in a single circuit; therefore, it must be run on the simulator.
# Use local qasm simulator backend = Aer.get_backend('qasm_simulator') qc = [] circuits = ["IFM_meas"+str(i) for i in range(N)] #Creating one measurement circuit for each bomb for i in range(N): bomb = bombs[i] IFM = QuantumCircuit(q, c, name=circuits[i]) for step in range(steps): IFM.ry(eps, q[0]) #First we rotate the control qubit by epsilon if bomb: #If the bomb is live, the gate is a controlled X gate IFM.cx(q[0],q[1]) #If the bomb is a dud, the gate is a controlled identity gate, which does nothing IFM.measure(q[1], c[step]) #Now we measure to collapse the combined state IFM.measure(q[0], c[steps]) qc.append(IFM) _ = [i.qasm() for i in qc] # Suppress the output result = execute(qc, backend=backend, shots=1, max_credits=5).result() def get_status(counts): # Return whether a bomb was a dud, was live but detonated, or was live and undetonated # Note that registers are returned in reversed order for key in counts: if '1' in key[1:]: #If we ever measure a '1' from the measurement qubit (q1), the bomb was measured and will detonate return '!!BOOM!!' elif key[0] == '1': #If the control qubit (q0) was rotated to '1', the state never entangled because the bomb was a dud return 'Dud' else: #If we only measured '0' for both the control and measurement qubit, the bomb was live but never set off return 'Live' results = {'Live': 0, 'Dud': 0, "!!BOOM!!": 0} for circuit in qc: status = get_status(result.get_counts(circuit)) results[status] += 1 plot_histogram(results)
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = \max(\alpha * x, x) $$ Tanh Output The generator has been found to perform best with $tanh$ for its output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out : tanh output of the generator ''' with tf.variable_scope('generator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(z, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('discriminator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(x, n_units, activation= None)#tf.nn.elu) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) #out = tf.sigmoid(logits) out = tf.tanh(logits) return out, logits
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Hyperparameters
# Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.25
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
# Calculate losses d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels = tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels = tf.zeros_like(d_logits_fake))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels = tf.ones_like(d_logits_fake)))
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Training
def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') plt.show() return fig, axes batch_size = 100 epochs = 1 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out %time train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator_my.ckpt') _ = view_samples(-1, samples) # Save training generator samples with open('train_samples_my.pkl', 'wb') as f: pkl.dump(samples, f)
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training.
# Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f)
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Compute and visualize statistics Generate the data statistics with tfdv.generate_statistics_from_csv. For large datasets it uses Apache Beam internally for parallel processing, and it can be combined with Beam's PTransform.
train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Visualize with tfdv.visualize_statistics; internally it is said to use Facets. Numeric and categorical features are shown separately.
tfdv.visualize_statistics(train_stats)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Infer a schema Infer the schema from the data using tfdv.infer_schema and display it with tfdv.display_schema.
schema = tfdv.infer_schema(statistics=train_stats) tfdv.display_schema(schema=schema)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Check the evaluation data for errors The train and validation sets contain different data; this seems useful for Kaggle work.
# Compute stats for evaluation data eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA) # Compare evaluation data with training data tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats, lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Check for evaluation anomalies Check whether the validation data contains values that were not present in the training data.
# Check eval data for errors by validating the eval data stats using the previously inferred schema. anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema) tfdv.display_anomalies(anomalies)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Fix evaluation anomalies in the schema Apply the fixes.
# Relax the minimum fraction of values that must come from the domain for feature company. company = tfdv.get_feature(schema, 'company') company.distribution_constraints.min_domain_mass = 0.9 # Add new value to the domain of feature payment_type. payment_type_domain = tfdv.get_domain(schema, 'payment_type') payment_type_domain.value.append('Prcard') # Validate eval stats after updating the schema updated_anomalies = tfdv.validate_statistics(eval_stats, schema) tfdv.display_anomalies(updated_anomalies)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Schema Environments The schema also needs to be checked at serving time. Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA) serving_anomalies = tfdv.validate_statistics(serving_stats, schema) tfdv.display_anomalies(serving_anomalies)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Int values are present => fix them to float.
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True) serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options) serving_anomalies = tfdv.validate_statistics(serving_stats, schema) tfdv.display_anomalies(serving_anomalies) # All features are by default in both TRAINING and SERVING environments. schema.default_environment.append('TRAINING') schema.default_environment.append('SERVING') # Specify that 'tips' feature is not in SERVING environment. tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING') serving_anomalies_with_env = tfdv.validate_statistics( serving_stats, schema, environment='SERVING') tfdv.display_anomalies(serving_anomalies_with_env)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Check for drift and skew Drift Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation. Skew Schema skew: when the datasets do not share the same schema. Feature skew: when the feature generation logic changes. Distribution skew: when the data distribution differs between training and serving.
# Add skew comparator for 'payment_type' feature. payment_type = tfdv.get_feature(schema, 'payment_type') payment_type.skew_comparator.infinity_norm.threshold = 0.01 # Add drift comparator for 'company' feature. company=tfdv.get_feature(schema, 'company') company.drift_comparator.infinity_norm.threshold = 0.001 skew_anomalies = tfdv.validate_statistics(train_stats, schema, previous_statistics=eval_stats, serving_statistics=serving_stats) tfdv.display_anomalies(skew_anomalies)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
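Drift and skew are compared here using the L-infinity (Chebyshev) distance between two categorical distributions. Below is a minimal, framework-free sketch of that metric; the feature values and counts are invented purely for illustration.

```python
import numpy as np

# Hypothetical value counts for one categorical feature on two data spans
train_counts = {'Cash': 700, 'Credit Card': 250, 'Prcard': 50}
serving_counts = {'Cash': 650, 'Credit Card': 320, 'Prcard': 30}

categories = sorted(set(train_counts) | set(serving_counts))
p = np.array([train_counts.get(c, 0) for c in categories], dtype=float)
q = np.array([serving_counts.get(c, 0) for c in categories], dtype=float)
p /= p.sum()
q /= q.sum()

# L-infinity distance: the largest absolute difference between category probabilities
linf = np.max(np.abs(p - q))
print(linf)  # an anomaly is reported when this exceeds the configured threshold (e.g. 0.01)
```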
Freeze the schema Save the schema.
from tensorflow.python.lib.io import file_io from google.protobuf import text_format file_io.recursive_create_dir(OUTPUT_DIR) schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt') tfdv.write_schema_text(schema, schema_file) !cat {schema_file}
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Concatenating along an axis DataFrame has a rich set of merge methods; in addition, there is another kind of data combination operation known as concatenation, binding, or stacking. NumPy also has a concatenate function.
import numpy as np arr1 = np.arange(12).reshape(3,4) print(arr1) np.concatenate([arr1, arr1], axis=1)
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
For pandas objects, labeled axes let us generalize array concatenation further. The concat function in pandas provides the functionality for this kind of combination. In the example below there are three Series whose indexes do not overlap; let's see how concat combines them.
import pandas as pd seri1 = pd.Series([-1,2], index=list('ab')) seri2 = pd.Series([2,3,4], index=list('cde')) seri3 = pd.Series([5,6], index=list('fg')) print(seri1) print(seri2) print(seri3) print(seri1) pd.concat([seri1,seri2,seri3])
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
By default, concat works along axis=0 and produces a brand-new Series. If axis=1 is passed, the result becomes a DataFrame (axis=1 refers to the columns).
pd.concat([seri1, seri2, seri3],axis=1, sort=False) pd.concat([seri1, seri2, seri3],axis=1, sort=False,join='inner') # passing join='inner' keeps only the intersection of the indexes, which is empty here seri4 = pd.concat([seri1*5, seri3]) print(seri4) seri4 = pd.concat([seri1*5, seri3],axis=1, join='inner') print(seri4)
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
Appendix (notes made while writing): define a function that returns a string with repeated letters
# Ref: https://stackoverflow.com/questions/38273353/how-to-repeat-individual-characters-in-strings-in-python def special_sign(sign, times): # sign is string, times is integer str_list = sign*times new_str = ''.join([i for i in str_list]) return(new_str) print(special_sign('*',20))
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
Next, we write a function to construct the factor graphs and prepare labels for training. For each factor graph instance, the structure is a chain, but the number of nodes and edges depends on the number of letters: unary factors are added for each letter and pairwise factors for each pair of neighboring letters. In addition, the first and last letters each get an extra bias factor.
def prepare_data(x, y, ftype, num_samples): """prepare FactorGraphFeatures and FactorGraphLabels """ from shogun import Factor, TableFactorType, FactorGraph from shogun import FactorGraphObservation, FactorGraphLabels, FactorGraphFeatures samples = FactorGraphFeatures(num_samples) labels = FactorGraphLabels(num_samples) for i in xrange(num_samples): n_vars = x[0,i].shape[1] data = x[0,i].astype(np.float64) vc = np.array([n_stats]*n_vars, np.int32) fg = FactorGraph(vc) # add unary factors for v in xrange(n_vars): datau = data[:,v] vindu = np.array([v], np.int32) facu = Factor(ftype[0], vindu, datau) fg.add_factor(facu) # add pairwise factors for e in xrange(n_vars-1): datap = np.array([1.0]) vindp = np.array([e,e+1], np.int32) facp = Factor(ftype[1], vindp, datap) fg.add_factor(facp) # add bias factor to first letter datas = np.array([1.0]) vinds = np.array([0], np.int32) facs = Factor(ftype[2], vinds, datas) fg.add_factor(facs) # add bias factor to last letter datat = np.array([1.0]) vindt = np.array([n_vars-1], np.int32) fact = Factor(ftype[3], vindt, datat) fg.add_factor(fact) # add factor graph samples.add_sample(fg) # add corresponding label states_gt = y[0,i].astype(np.int32) states_gt = states_gt[0,:]; # mat to vector loss_weights = np.array([1.0/n_vars]*n_vars) fg_obs = FactorGraphObservation(states_gt, loss_weights) labels.add_label(fg_obs) return samples, labels # prepare training pairs (factor graph, node states) n_tr_samples = 350 # choose a subset of training data to avoid time out on buildbot samples, labels = prepare_data(p_tr, l_tr, ftype_all, n_tr_samples)
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
In Shogun, we implemented several batch solvers and online solvers. Let's first try to train the model using a batch solver. We choose the dual bundle method solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDualLibQPBMSOSVM.html">DualLibQPBMSOSVM</a>) [2], since in practice it is slightly faster than the primal n-slack cutting plane solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/PrimalMosekSOSVM_8h.html">PrimalMosekSOSVM</a>) [3]. However, it will still take a while to converge. Briefly, in each iteration a gradually tighter piecewise-linear lower bound of the objective function is constructed by adding more cutting planes (most violated constraints), and then the approximate QP is solved. Finding a cutting plane involves calling the max oracle $H_i(\mathbf{w})$, and on average $N$ calls are required per iteration. This is basically why the training is time consuming.
from shogun import DualLibQPBMSOSVM from shogun import BmrmStatistics import pickle import time # create bundle method SOSVM, there are few variants can be chosen # BMRM, Proximal Point BMRM, Proximal Point P-BMRM, NCBM # usually the default one i.e. BMRM is good enough # lambda is set to 1e-2 bmrm = DualLibQPBMSOSVM(model, labels, 0.01) bmrm.set_TolAbs(20.0) bmrm.set_verbose(True) bmrm.set_store_train_info(True) # train t0 = time.time() bmrm.train() t1 = time.time() w_bmrm = bmrm.get_w() print "BMRM took", t1 - t0, "seconds."
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
In our case, we have 101 active cutting planes, which is much less than 4082, the number of parameters, so these statistics suggest a good model. Now we come to the online solvers. Unlike the cutting plane algorithms, which re-optimize over all the previously added dual variables, an online solver updates the solution based on a single point. This difference results in a faster convergence rate, i.e. fewer oracle calls; please refer to Table 1 in [4] for more detail. Here, we use stochastic subgradient descent (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStochasticSOSVM.html">StochasticSOSVM</a>) to compare with the BMRM algorithm shown before.
from shogun import StochasticSOSVM # the 3rd parameter is do_weighted_averaging, by turning this on, # a possibly faster convergence rate may be achieved. # the 4th parameter controls outputs of verbose training information sgd = StochasticSOSVM(model, labels, True, True) sgd.set_num_iter(100) sgd.set_lambda(0.01) # train t0 = time.time() sgd.train() t1 = time.time() w_sgd = sgd.get_w() print "SGD took", t1 - t0, "seconds."
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
Inference Next, we show how to do inference with the learned model parameters for a given data point.
# get testing data samples_ts, labels_ts = prepare_data(p_ts, l_ts, ftype_all, n_ts_samples) from shogun import FactorGraphFeatures, FactorGraphObservation, TREE_MAX_PROD, MAPInference # get a factor graph instance from test data fg0 = samples_ts.get_sample(100) fg0.compute_energies() fg0.connect_components() # create a MAP inference using tree max-product infer_met = MAPInference(fg0, TREE_MAX_PROD) infer_met.inference() # get inference results y_pred = infer_met.get_structured_outputs() y_truth = FactorGraphObservation.obtain_from_generic(labels_ts.get_label(100)) print y_pred.get_data() print y_truth.get_data()
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
Evaluation In the end, we check average training error and average testing error. The evaluation can be done by two methods. We can either use the apply() function in the structured output machine or use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSOSVMHelper.html">SOSVMHelper</a>.
from shogun import LabelsFactory, SOSVMHelper # training error of BMRM method bmrm.set_w(w_bmrm) model.w_to_fparams(w_bmrm) lbs_bmrm = bmrm.apply() acc_loss = 0.0 ave_loss = 0.0 for i in xrange(n_tr_samples): y_pred = lbs_bmrm.get_label(i) y_truth = labels.get_label(i) acc_loss = acc_loss + model.delta_loss(y_truth, y_pred) ave_loss = acc_loss / n_tr_samples print('BMRM: Average training error is %.4f' % ave_loss) # training error of stochastic method print('SGD: Average training error is %.4f' % SOSVMHelper.average_loss(w_sgd, model)) # testing error bmrm.set_features(samples_ts) bmrm.set_labels(labels_ts) lbs_bmrm_ts = bmrm.apply() acc_loss = 0.0 ave_loss_ts = 0.0 for i in xrange(n_ts_samples): y_pred = lbs_bmrm_ts.get_label(i) y_truth = labels_ts.get_label(i) acc_loss = acc_loss + model.delta_loss(y_truth, y_pred) ave_loss_ts = acc_loss / n_ts_samples print('BMRM: Average testing error is %.4f' % ave_loss_ts) # testing error of stochastic method print('SGD: Average testing error is %.4f' % SOSVMHelper.average_loss(sgd.get_w(), model))
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0