For more about Numpy, see [http://wiki.scipy.org/Tentative_NumPy_Tutorial](http://wiki.scipy.org/Tentative_NumPy_Tutorial).

Read and save files

There are two kinds of computer files: text files and binary files:

> Text file: a computer file where the content is structured as a sequence of lines of electronic text. Text files can contain plain text (letters, numbers, and symbols), but they are not limited to such. The type of content in a text file is defined by its character encoding, for example Unicode (a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems).
>
> Binary file: a computer file where the content is encoded in binary form, a sequence of integers representing byte values.

Let's see how to save and read numeric data stored in a text file (a binary-file example follows the Numpy one below):

**Using plain Python**
f = open("newfile.txt", "w")             # open file for writing
f.write("This is a test\n")              # save to file
f.write("And here is another line\n")    # save to file
f.close()
f = open('newfile.txt', 'r')             # open file for reading
f = f.read()                             # read from file
print(f)

help(open)
Help on built-in function open in module io: open(...) open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) -> file object Open file and return a stream. Raise IOError upon failure. file is either a text or byte string giving the name (and the path if the file isn't in the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.) mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists), 'x' for creating and writing to a new file, and 'a' for appending (which on some Unix systems, means that all writes append to the end of the file regardless of the current seek position). In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.) The available modes are: ========= =============================================================== Character Meaning --------- --------------------------------------------------------------- 'r' open for reading (default) 'w' open for writing, truncating the file first 'x' create a new file and open it for writing 'a' open for writing, appending to the end of the file if it exists 'b' binary mode 't' text mode (default) '+' open a disk file for updating (reading and writing) 'U' universal newline mode (deprecated) ========= =============================================================== The default mode is 'rt' (open for reading text). For binary random access, the mode 'w+b' opens and truncates the file to 0 bytes, while 'r+b' opens the file without truncation. The 'x' mode implies 'w' and raises an `FileExistsError` if the file already exists. Python distinguishes between files opened in binary and text modes, even when the underlying operating system doesn't. Files opened in binary mode (appending 'b' to the mode argument) return contents as bytes objects without any decoding. In text mode (the default, or when 't' is appended to the mode argument), the contents of the file are returned as strings, the bytes having been first decoded using a platform-dependent encoding or using the specified encoding if given. 'U' mode is deprecated and will raise an exception in future versions of Python. It has no effect in Python 3. Use newline to control universal newlines mode. buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size of a fixed-size chunk buffer. When no buffering argument is given, the default buffering policy works as follows: * Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device's "block size" and falling back on `io.DEFAULT_BUFFER_SIZE`. On many systems, the buffer will typically be 4096 or 8192 bytes long. * "Interactive" text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files. encoding is the name of the encoding used to decode or encode the file. 
This should only be used in text mode. The default encoding is platform dependent, but any encoding supported by Python can be passed. See the codecs module for the list of supported encodings. errors is an optional string that specifies how encoding errors are to be handled---this argument should not be used in binary mode. Pass 'strict' to raise a ValueError exception if there is an encoding error (the default of None has the same effect), or pass 'ignore' to ignore errors. (Note that ignoring encoding errors can lead to data loss.) See the documentation for codecs.register or run 'help(codecs.Codec)' for a list of the permitted encoding error strings. newline controls how universal newlines works (it only applies to text mode). It can be None, '', '\n', '\r', and '\r\n'. It works as follows: * On input, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newline mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated. * On output, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '' or '\n', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string. If closefd is False, the underlying file descriptor will be kept open when the file is closed. This does not work when a file name is given and must be True in that case. A custom opener can be used by passing a callable as *opener*. The underlying file descriptor for the file object is then obtained by calling *opener* with (*file*, *flags*). *opener* must return an open file descriptor (passing os.open as *opener* results in functionality similar to passing None). open() returns a file object whose type depends on the mode, and through which the standard file operations such as reading and writing are performed. When open() is used to open a file in a text mode ('w', 'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to open a file in a binary mode, the returned class varies: in read binary mode, it returns a BufferedReader; in write binary and append binary modes, it returns a BufferedWriter, and in read/write mode, it returns a BufferedRandom. It is also possible to use a string or bytearray as a file for both reading and writing. For strings StringIO can be used like a file opened in a text mode, and for bytes a BytesIO can be used like a file opened in a binary mode.
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
**Using Numpy**
import numpy as np

data = np.random.randn(3, 3)
np.savetxt('myfile.txt', data, fmt="%12.6G")       # save to file
data = np.genfromtxt('myfile.txt', unpack=True)    # read from file
data
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
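The examples above use a text file. For the binary case described earlier, here is a minimal added sketch (not part of the original notebook) using Numpy's native binary `.npy` format; it assumes `np` and the `data` array from the previous cell:

```python
# Save and load the same array in binary form
np.save('myfile.npy', data)        # writes a binary .npy file
data2 = np.load('myfile.npy')      # reads it back with dtype and shape preserved
print(np.allclose(data, data2))    # True
```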
Plotting with matplotlib

Matplotlib is the most widely used package for plotting data in Python. Let's see some examples.
import matplotlib.pyplot as plt
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
Use the IPython magic `%matplotlib inline` to plot a figure inline in the notebook with the rest of the text:
%matplotlib inline
import numpy as np

t = np.linspace(0, 0.99, 100)
x = np.sin(2 * np.pi * 2 * t)
n = np.random.randn(100) / 5
plt.figure(figsize=(12, 8))     # plt.figure (lowercase) creates the figure; plt.Figure would not
plt.plot(t, x, label='sine', linewidth=2)
plt.plot(t, x + n, label='noisy sine', linewidth=2)
plt.annotate('$sin(4 \pi t)$', xy=(.2, 1), fontsize=20, color=[0, 0, 1])
plt.legend(loc='best', framealpha=.5)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.title('Data plotting using matplotlib')
plt.show()
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
Use the IPython magic `%matplotlib qt` to plot a figure in a separate window (from which you will be able to change some of the figure properties):
%matplotlib qt
mu, sigma = 10, 2
x = mu + sigma * np.random.randn(1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(x, 'ro')
ax1.set_title('Data')
ax1.grid()
# histogram ('normed' was renamed 'density' in newer Matplotlib versions)
n, bins, patches = ax2.hist(x, 25, density=True, facecolor='r')
ax2.set_xlabel('Bins')
ax2.set_ylabel('Probability')
ax2.set_title('Histogram')
fig.suptitle('Another example using matplotlib', fontsize=18, y=1)
ax2.grid()
plt.tight_layout()
plt.show()
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
And a window with the following figure should appear:
from IPython.display import Image
Image(url="./../images/plot.png")
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
You can switch back and forth between inline and separate figure using the `%matplotlib` magic commands used above. There are plenty more examples with the source code in the [matplotlib gallery](http://matplotlib.org/gallery.html).
# get back the inline plot
%matplotlib inline
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
Signal processing with Scipy

The Scipy package has many modules for scientific computing, among them: Integration (scipy.integrate), Optimization (scipy.optimize), Interpolation (scipy.interpolate), Fourier Transforms (scipy.fftpack), Signal Processing (scipy.signal), Linear Algebra (scipy.linalg), and Statistics (scipy.stats). As an example, let's see how to use a low-pass Butterworth filter to attenuate high-frequency noise and how differentiating a signal affects its signal-to-noise content. We will also calculate the Fourier transform of these data to look at their frequency content.
from scipy.signal import butter, filtfilt
import scipy.fftpack

freq = 100.
t = np.arange(0, 1, .01)
w = 2*np.pi*1                                        # 1 Hz
y = np.sin(w*t) + 0.1*np.sin(10*w*t)
# Butterworth filter
b, a = butter(4, (5/(freq/2)), btype='low')
y2 = filtfilt(b, a, y)
# 2nd derivative of the data
ydd = np.diff(y, 2)*freq*freq                        # raw data
y2dd = np.diff(y2, 2)*freq*freq                      # filtered data
# frequency content
yfft = np.abs(scipy.fftpack.fft(y))/(y.size/2)       # raw data
y2fft = np.abs(scipy.fftpack.fft(y2))/(y.size/2)     # filtered data
freqs = scipy.fftpack.fftfreq(y.size, 1./freq)
yddfft = np.abs(scipy.fftpack.fft(ydd))/(ydd.size/2)
y2ddfft = np.abs(scipy.fftpack.fft(y2dd))/(ydd.size/2)
freqs2 = scipy.fftpack.fftfreq(ydd.size, 1./freq)
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
And the plots:
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10, 4))
ax1.set_title('Temporal domain', fontsize=14)
ax1.plot(t, y, 'r', linewidth=2, label='raw data')
ax1.plot(t, y2, 'b', linewidth=2, label='filtered @ 5 Hz')
ax1.set_ylabel('f')
ax1.legend(frameon=False, fontsize=12)
ax2.set_title('Frequency domain', fontsize=14)
# integer division (//) so the slice indices are integers in Python 3
ax2.plot(freqs[:yfft.size//4], yfft[:yfft.size//4], 'r', lw=2, label='raw data')
ax2.plot(freqs[:yfft.size//4], y2fft[:yfft.size//4], 'b--', lw=2, label='filtered @ 5 Hz')
ax2.set_ylabel('FFT(f)')
ax2.legend(frameon=False, fontsize=12)
ax3.plot(t[:-2], ydd, 'r', linewidth=2, label='raw')
ax3.plot(t[:-2], y2dd, 'b', linewidth=2, label='filtered @ 5 Hz')
ax3.set_xlabel('Time [s]')
ax3.set_ylabel("f ''")
ax4.plot(freqs[:yddfft.size//4], yddfft[:yddfft.size//4], 'r', lw=2, label='raw')
ax4.plot(freqs[:yddfft.size//4], y2ddfft[:yddfft.size//4], 'b--', lw=2, label='filtered @ 5 Hz')
ax4.set_xlabel('Frequency [Hz]')
ax4.set_ylabel("FFT(f '')")
plt.show()
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
For more about Scipy, see [http://docs.scipy.org/doc/scipy/reference/tutorial/](http://docs.scipy.org/doc/scipy/reference/tutorial/).

Symbolic mathematics with Sympy

Sympy is a package to perform symbolic mathematics in Python. Let's see some of its features:
from IPython.display import display
import sympy as sym
from sympy.interactive import printing
printing.init_printing()
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
Define some symbols and then create a second-order polynomial function (a.k.a. a parabola):
x, y = sym.symbols('x y')
y = x**2 - 2*x - 3
y
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
Plot the parabola at some given range:
from sympy.plotting import plot
%matplotlib inline
plot(y, (x, -3, 5));
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
And the roots of the parabola are given by:
sym.solve(y, x)
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
We can also do symbolic differentiation and integration:
dy = sym.diff(y, x)
dy
sym.integrate(dy, x)
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
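As a small added follow-up (not in the original notebook), the derivative computed above can be used to locate the parabola's vertex, and the result can be checked by substitution; it reuses the symbols `x`, `y`, and `dy` defined above:

```python
# dy = 2*x - 2, so y = x**2 - 2*x - 3 has its vertex at x = 1
x0 = sym.solve(dy, x)[0]
print(x0, y.subs(x, x0))   # 1, -4
```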
For example, let's use Sympy to represent three-dimensional rotations. Consider the problem of a coordinate system xyz rotated in relation to another coordinate system XYZ. The single rotations around each axis are illustrated by:
from IPython.display import Image
Image(url="./../images/rotations.png")
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
The single 3D rotation matrices around Z, Y, and X axes can be expressed in Sympy:
from IPython.core.display import Math
from sympy import symbols, cos, sin, Matrix, latex

a, b, g = symbols('alpha beta gamma')

RX = Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
display(Math(latex('\\mathbf{R_{X}}=') + latex(RX, mat_str='matrix')))

RY = Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
display(Math(latex('\\mathbf{R_{Y}}=') + latex(RY, mat_str='matrix')))

RZ = Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
display(Math(latex('\\mathbf{R_{Z}}=') + latex(RZ, mat_str='matrix')))
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
And using Sympy, a sequence of elementary rotations around X, Y, Z axes is given by:
RXYZ = RZ*RY*RX
display(Math(latex('\\mathbf{R_{XYZ}}=') + latex(RXYZ, mat_str='matrix')))
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
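As a quick sanity check (an added sketch, not part of the original notebook), a rotation matrix is orthogonal, so the product of `RXYZ` with its transpose should simplify to the identity matrix:

```python
# cos**2 + sin**2 products collapse element-wise under simplification
(RXYZ * RXYZ.T).applyfunc(sym.simplify)   # expected: the 3x3 identity matrix
```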
Suppose there is a rotation only around X ($\alpha$) by $\pi/2$; we can get the numerical value of the rotation matrix by substituting the angle values:
r = RXYZ.subs({a: np.pi/2, b: 0, g: 0})
r
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
And we can prettify this result:
display(Math(latex(r'\mathbf{R_{(\alpha=\pi/2)}}=') + latex(r.n(chop=True, prec=3), mat_str = 'matrix')))
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
For more about Sympy, see [http://docs.sympy.org/latest/tutorial/](http://docs.sympy.org/latest/tutorial/).

Data analysis with pandas

> "[pandas](http://pandas.pydata.org/) is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python."

To work with labelled data, pandas has a type called DataFrame (basically, a matrix whose columns and rows have names and whose columns may be of different types); an equivalent data frame is also the main data type of the [R](http://www.r-project.org/) software. For example:
import pandas as pd

x = 5*['A'] + 5*['B']
x

df = pd.DataFrame(np.random.rand(10, 2), columns=['Level 1', 'Level 2'])
df['Group'] = pd.Series(['A']*5 + ['B']*5)
plot = df.boxplot(by='Group')

# scatter_matrix moved from pandas.tools.plotting to pandas.plotting in newer pandas
from pandas.plotting import scatter_matrix
df = pd.DataFrame(np.random.randn(100, 3), columns=['A', 'B', 'C'])
plot = scatter_matrix(df, alpha=0.5, figsize=(8, 6), diagonal='kde')
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
pandas is aware that the data are structured and gives you basic statistics that take this structure into account, nicely formatted:
df.describe()
_____no_output_____
MIT
notebooks/PythonTutorial.ipynb
jagar2/BMC
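Because the data are labelled, summaries can also be computed per group. A small added sketch (not from the original notebook; it rebuilds a labelled frame like the earlier one, since `df` was redefined above):

```python
df2 = pd.DataFrame(np.random.rand(10, 2), columns=['Level 1', 'Level 2'])
df2['Group'] = ['A']*5 + ['B']*5
df2.groupby('Group').describe()   # summary statistics computed separately for groups A and B
```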
Task 4: Largest palindrome product

As always, we'll first try the brute-force algorithm, which has ~O(n^2) complexity but still runs pretty fast:
import time

start = time.time()
max_palindrome = 0
for i in range(100, 1000):
    for j in range(100, 1000):
        if str(i*j) == str(i*j)[::-1] and i*j > max_palindrome:
            max_palindrome = i*j
finish = time.time() - start
print(max_palindrome)
print("Time in seconds: ", finish)
906609 Time in seconds: 0.7557618618011475
Unlicense
python/Problem4.ipynb
ditekunov/ProjectEuler-research
But we'll try to improve it. First, we'll apply an obvious optimization for finding the maximum value: running the loops downwards (and stopping the outer loop early at 800). Let's record the runtime to confirm that the time improved.
import time

start = time.time()
max_palindrome = 0
for i in range(999, 800, -1):
    for j in range(999, 99, -1):
        if str(i*j) == str(i*j)[::-1] and i*j > max_palindrome:
            max_palindrome = i*j
finish = time.time() - start
print(max_palindrome)
print(finish)
906609 0.17235779762268066
Unlicense
python/Problem4.ipynb
ditekunov/ProjectEuler-research
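A further refinement (an added sketch, not part of the original notebook): because both loops run downwards, the inner loop can stop as soon as the product falls below the best palindrome found so far, and the outer loop can stop once even `i * 999` cannot beat it:

```python
import time

start = time.time()
max_palindrome = 0
for i in range(999, 99, -1):
    if i * 999 < max_palindrome:        # no product in this row can beat the current best
        break
    for j in range(999, i - 1, -1):     # j >= i, so each pair is checked only once
        product = i * j
        if product <= max_palindrome:   # products only shrink as j decreases
            break
        if str(product) == str(product)[::-1]:
            max_palindrome = product
print(max_palindrome)
print("Time in seconds:", time.time() - start)
```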
Note for the interpretation of the curves and definition of the statistical variables

The quantum state classifier (QSC) error rates $\widehat{r}_i$ as a function of the number of experimental shots $n$ were determined for each highly entangled quantum state $\omega_i$ in the $\Omega$ set, with $i=1...m$. The curves seen in the figures represent the mean QSC error rate $\widehat{r}_{mean}$ over the $m$ quantum states at each $n$ value.

This Monte Carlo simulation allowed us to determine a safe shot number $n_s$ such that $\forall i\; \widehat{r}_i\le \epsilon_s$. The value of $\epsilon_s$ was set at 0.001. $\widehat{r}_{max}$ is the maximal value observed among all the $\widehat{r}_i$ values at the determined number of shots $n_s$.

Similarly, from the error curves stored in the data file, the safe shot number $n_t$ was computed such that $\widehat{r}_{mean}\le \epsilon_t$. The value of $\epsilon_t$ was set at 0.0005 after verifying that all $\widehat{r}_{mean}$ at $n_s$ were $\le \epsilon_s$ in the different experimental settings.

Correspondence between variable names in the text and in the database:

- $\widehat{r}_{mean}$: error_curve
- $n_s$: shots
- max ($\widehat{r}_i$) at $n_s$: shot_rate
- $\widehat{r}_{mean}$ at $n_s$: mns_rate
- $n_t$: m_shots
- $\widehat{r}_{mean}$ at $n_t$: m_shot_rate
# Calculate shot number 'm_shots' for mean error rate 'm_shot_rates' <= epsilon_t
len_data = len(All_data)
epsilon_t = 0.0005
window = 11
for i in range(len_data):
    curve = np.array(All_data[i]['error_curve'])
    # filter the curve only for real devices:
    if All_data[i]['device'] != "ideal_device":
        curve = savgol_filter(curve, window, 2)
    # find the safe shot number:
    len_c = len(curve)
    n_a = np.argmin(np.flip(curve) <= epsilon_t) + 1
    if n_a == 1:
        n_a = np.nan
        m_r = np.nan
    else:
        m_r = curve[len_c - n_a + 1]
    All_data[i]['min_r_shots'] = len_c - n_a
    All_data[i]['min_r'] = m_r

# find mean error rate at n_s
for i in range(len_data):
    i_shot = All_data[i]["shots"]
    if not np.isnan(i_shot):
        j = int(i_shot) - 1
        All_data[i]['mns_rate'] = All_data[i]['error_curve'][j]
    else:
        All_data[i]['mns_rate'] = np.nan

# defining the pandas data frame for statistics excluding from here ibmqx2 data
df_All = pd.DataFrame(All_data, columns=['shot_rates', 'shots', 'device', 'fidelity',
                                         'mitigation', 'model', 'id_gates', 'QV',
                                         'metric', 'error_curve',
                                         'mns_rate', 'min_r_shots', 'min_r']).query("device != 'ibmqx2'")

# any shot number >= 488 indicates that the curve calculation
# was ended after reaching n = 500, hence this data correction:
df_All.loc[df_All.shots >= 488, "shots"] = np.nan

# add the variable neperian log of safe shot number:
df_All['log_shots'] = np.log(df_All['shots'])
df_All['log_min_r_shots'] = np.log(df_All['min_r_shots'])
_____no_output_____
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Error rates as a function of the chosen $\epsilon_s$ and $\epsilon_t$
print("max mean error rate at n_s over all experiments =", round(max(df_All.mns_rate[:-2]),6)) print("min mean error rate at n_t over all experiments =", round(min(df_All.min_r[:-2]),6)) print("max mean error rate at n_t over all experiments =", round(max(df_All.min_r[:-2]),6)) df_All.mns_rate[:-2].plot.hist(alpha=0.5, legend = True) df_All.min_r[:-2].plot.hist(alpha=0.5, legend = True)
_____no_output_____
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Statistical overview

For this section, an ordinary least squares (OLS) estimation is performed. The dependent variables tested are $\ln n_s$ (log_shots) and $\ln n_t$ (log_min_r_shots).
stat_model = ols("log_shots ~ metric", df_All.query("device != 'ideal_device'")).fit() print(stat_model.summary()) stat_model = ols("log_min_r_shots ~ metric", df_All.query("device != 'ideal_device'")).fit() print(stat_model.summary()) stat_model = ols("log_shots ~ model+mitigation+id_gates+fidelity+QV", df_All.query("device != 'ideal_device' & metric == 'sqeuclidean'")).fit() print(stat_model.summary()) stat_model = ols("log_min_r_shots ~ model+mitigation+id_gates+fidelity+QV", df_All.query("device != 'ideal_device'& metric == 'sqeuclidean'")).fit() print(stat_model.summary())
OLS Regression Results ============================================================================== Dep. Variable: log_min_r_shots R-squared: 0.532 Model: OLS Adj. R-squared: 0.491 Method: Least Squares F-statistic: 13.16 Date: Tue, 02 Mar 2021 Prob (F-statistic): 1.43e-08 Time: 17:22:26 Log-Likelihood: -24.867 No. Observations: 64 AIC: 61.73 Df Residuals: 58 BIC: 74.69 Df Model: 5 Covariance Type: nonrobust ====================================================================================== coef std err t P>|t| [0.025 0.975] -------------------------------------------------------------------------------------- Intercept 3.5234 1.886 1.868 0.067 -0.252 7.299 model[T.ideal_sim] 0.2831 0.094 3.021 0.004 0.096 0.471 mitigation[T.yes] -0.3990 0.094 -4.258 0.000 -0.587 -0.211 id_gates 0.0022 0.000 5.893 0.000 0.001 0.003 fidelity 0.1449 2.485 0.058 0.954 -4.829 5.119 QV -0.0112 0.013 -0.884 0.380 -0.036 0.014 ============================================================================== Omnibus: 29.956 Durbin-Watson: 1.010 Prob(Omnibus): 0.000 Jarque-Bera (JB): 55.722 Skew: 1.626 Prob(JB): 7.94e-13 Kurtosis: 6.212 Cond. No. 1.21e+04 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 1.21e+04. This might indicate that there are strong multicollinearity or other numerical problems.
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Comments:

For the QSC, two different metrics were compared and in the end they gave the same output. For further analysis, the results obtained using the squared Euclidean distance between distributions will be illustrated in this notebook, as it is more classical and strictly equivalent to the other classical Hellinger and Bhattacharyya distances. The Jensen-Shannon metric, however, has the theoretical advantage of being Bayesian in nature and is therefore presented as an option for the result analysis.

Curves obtained for counts corrected by measurement error mitigation (MEM) are used in this presentation. MEM significantly reduces $n_s$ and $n_t$. However, using the counts distribution before MEM is presented as an option because it anticipates how the method could perform on devices with more qubits, where obtaining the mitigation filter is a problem. Introducing a delay time $\delta t$ of 256 identity gates between state creation and measurement significantly increased $\ln n_s$ and $\ln n_t$.

Detailed statistical analysis

Determine the options

Running these cells sequentially ends up selecting the mainstream options.
# this for Jensen-Shannon metric
s_metric = 'jensenshannon'
sm = np.array([96+16+16+16])   # added Quito and Lima and Belem
SAD=0                          # ! will be unselected by running the next cell

# mainstream option for metric: squared euclidean distance
# skip this cell if you don't want this option
s_metric = 'sqeuclidean'
sm = np.array([97+16+16+16])   # added Quito and Lima and Belem
SAD=2

# this for no mitigation
mit = 'no'
MIT=-4                         # ! will be unselected by running the next cell

# mainstream option: this for measurement mitigation
# skip this cell if you don't want this option
mit = 'yes'
MIT=0
_____no_output_____
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
1. Compare distribution models
# select data according to the options df_mod = df_All[df_All.mitigation == mit][df_All.metric == s_metric]
<ipython-input-20-af347b9ea33a>:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index. df_mod = df_All[df_All.mitigation == mit][df_All.metric == s_metric]
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
A look at $n_s$ and $n_t$
print("mitigation:",mit," metric:",s_metric ) df_mod.groupby('device')[['shots','min_r_shots']].describe(percentiles=[0.5])
mitigation: yes metric: sqeuclidean
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Ideal vs empirical model: no state creation - measurement delay
ADD=0+SAD+MIT
#opl.plot_curves(All_data, np.append(sm,ADD+np.array([4,5,12,13,20,21,28,29,36,37,44,45])),
opl.plot_curves(All_data, np.append(sm, ADD+np.array([4,5,12,13,20,21,28,29,36,37,52,53,60,61,68,69])),
                "Monte Carlo Simulation: Theoretical PDM vs Empirical PDM - no $\delta_t0$",
                ["metric","mitigation"], ["device","model"],
                right_xlimit = 90)
_____no_output_____
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Paired t-test and Wilcoxon test
for depvar in ['log_shots', 'log_min_r_shots']: #for depvar in ['shots', 'min_r_shots']: print("mitigation:",mit," metric:",s_metric, "variable:", depvar) df_dep = df_mod.query("id_gates == 0.0").groupby(['model'])[depvar] print(df_dep.describe(percentiles=[0.5]),"\n") # no error rate curve obtained for ibmqx2 with the ideal model, hence this exclusion: df_emp=df_mod.query("model == 'empirical' & id_gates == 0.0") df_ide=df_mod.query("model == 'ideal_sim' & id_gates == 0.0") #.reindex_like(df_emp,'nearest') # back to numpy arrays from pandas: print("paired data") print(np.asarray(df_emp[depvar])) print(np.asarray(df_ide[depvar]),"\n") print(stats.ttest_rel(np.asarray(df_emp[depvar]),np.asarray(df_ide[depvar]))) print(stats.wilcoxon(np.asarray(df_emp[depvar]),np.asarray(df_ide[depvar])),"\n") print("mitigation:",mit," metric:",s_metric, "id_gates == 0.0 ") stat_model = ols("log_shots ~ model + device + fidelity + QV" , df_mod.query("id_gates == 0.0 ")).fit() print(stat_model.summary()) print("mitigation:",mit," metric:",s_metric, "id_gates == 0.0 " ) stat_model = ols("log_min_r_shots ~ model + device + fidelity+QV", df_mod.query("id_gates == 0.0 ")).fit() print(stat_model.summary())
mitigation: yes metric: sqeuclidean id_gates == 0.0 OLS Regression Results ============================================================================== Dep. Variable: log_min_r_shots R-squared: 0.833 Model: OLS Adj. R-squared: 0.643 Method: Least Squares F-statistic: 4.374 Date: Tue, 02 Mar 2021 Prob (F-statistic): 0.0335 Time: 17:22:28 Log-Likelihood: 13.155 No. Observations: 16 AIC: -8.310 Df Residuals: 7 BIC: -1.357 Df Model: 8 Covariance Type: nonrobust =========================================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------------------- Intercept 1.5428 0.057 27.265 0.000 1.409 1.677 model[T.ideal_sim] 0.1871 0.080 2.327 0.053 -0.003 0.377 device[T.ibmq_belem] 0.2643 0.111 2.385 0.049 0.002 0.526 device[T.ibmq_lima] 0.4849 0.100 4.851 0.002 0.249 0.721 device[T.ibmq_ourense] 0.4523 0.099 4.565 0.003 0.218 0.687 device[T.ibmq_quito] 0.9195 0.111 8.294 0.000 0.657 1.182 device[T.ibmq_santiago] 0.0100 0.161 0.062 0.952 -0.371 0.391 device[T.ibmq_valencia] 0.3472 0.112 3.109 0.017 0.083 0.611 device[T.ibmq_vigo] 0.1480 0.111 1.335 0.224 -0.114 0.410 fidelity 1.1633 0.042 27.862 0.000 1.065 1.262 QV 0.0144 0.006 2.486 0.042 0.001 0.028 ============================================================================== Omnibus: 1.117 Durbin-Watson: 2.583 Prob(Omnibus): 0.572 Jarque-Bera (JB): 0.092 Skew: 0.000 Prob(JB): 0.955 Kurtosis: 3.372 Cond. No. 5.63e+17 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The smallest eigenvalue is 2.02e-32. This might indicate that there are strong multicollinearity problems or that the design matrix is singular.
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Ideal vs empirical model: with a state creation - measurement delay of 256 id gates
ADD=72+SAD+MIT
opl.plot_curves(All_data, np.append(sm, ADD+np.array([4,5,12,13,20,21,28,29,36,37,52,53,60,61,68,69])),
                "No noise simulator vs empirical model - $\epsilon=0.001$ - with delay",
                ["metric","mitigation"], ["device","model"],
                right_xlimit = 90)
_____no_output_____
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Paired t-test and Wilcoxon test
for depvar in ['log_shots', 'log_min_r_shots']: print("mitigation:",mit," metric:",s_metric, "variable:", depvar) df_dep = df_mod.query("id_gates == 256.0 ").groupby(['model'])[depvar] print(df_dep.describe(percentiles=[0.5]),"\n") # no error rate curve obtained for ibmqx2 with the ideal model, hence their exclusion: df_emp=df_mod.query("model == 'empirical' & id_gates == 256.0 ") df_ide=df_mod.query("model == 'ideal_sim' & id_gates == 256.0") #.reindex_like(df_emp,'nearest') # back to numpy arrays from pandas: print("paired data") print(np.asarray(df_emp[depvar])) print(np.asarray(df_ide[depvar]),"\n") print(stats.ttest_rel(np.asarray(df_emp[depvar]),np.asarray(df_ide[depvar]))) print(stats.wilcoxon(np.asarray(df_emp[depvar]),np.asarray(df_ide[depvar])),"\n") print("mitigation:",mit," metric:",s_metric , "id_gates == 256.0 ") stat_model = ols("log_shots ~ model + device + fidelity + QV" , df_mod.query("id_gates == 256.0 ")).fit() print(stat_model.summary()) print("mitigation:",mit," metric:",s_metric, "id_gates == 256.0 " ) stat_model = ols("log_min_r_shots ~ model + device +fidelity+QV", df_mod.query("id_gates == 256.0 ")).fit() print(stat_model.summary())
mitigation: yes metric: sqeuclidean id_gates == 256.0 OLS Regression Results ============================================================================== Dep. Variable: log_min_r_shots R-squared: 0.890 Model: OLS Adj. R-squared: 0.764 Method: Least Squares F-statistic: 7.062 Date: Tue, 02 Mar 2021 Prob (F-statistic): 0.00913 Time: 17:22:30 Log-Likelihood: 8.2449 No. Observations: 16 AIC: 1.510 Df Residuals: 7 BIC: 8.463 Df Model: 8 Covariance Type: nonrobust =========================================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------------------- Intercept 1.7754 0.077 23.083 0.000 1.594 1.957 model[T.ideal_sim] 0.2943 0.109 2.693 0.031 0.036 0.553 device[T.ibmq_belem] 0.3949 0.151 2.621 0.034 0.039 0.751 device[T.ibmq_lima] 0.9230 0.136 6.793 0.000 0.602 1.244 device[T.ibmq_ourense] 0.2576 0.135 1.913 0.097 -0.061 0.576 device[T.ibmq_quito] 1.2167 0.151 8.074 0.000 0.860 1.573 device[T.ibmq_santiago] -0.2650 0.219 -1.212 0.265 -0.782 0.252 device[T.ibmq_valencia] 0.0412 0.152 0.271 0.794 -0.318 0.400 device[T.ibmq_vigo] 0.1260 0.151 0.836 0.431 -0.230 0.482 fidelity 1.3422 0.057 23.652 0.000 1.208 1.476 QV 0.0187 0.008 2.376 0.049 9.25e-05 0.037 ============================================================================== Omnibus: 3.031 Durbin-Watson: 3.036 Prob(Omnibus): 0.220 Jarque-Bera (JB): 1.026 Skew: 0.000 Prob(JB): 0.599 Kurtosis: 4.241 Cond. No. 5.63e+17 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The smallest eigenvalue is 2.02e-32. This might indicate that there are strong multicollinearity problems or that the design matrix is singular.
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Pooling results obtained in circuit sets with and without creation-measurement delay

Paired t-test and Wilcoxon test
#for depvar in ['log_shots', 'log_min_r_shots']: for depvar in ['log_shots', 'log_min_r_shots']: print("mitigation:",mit," metric:",s_metric, "variable:", depvar) df_dep = df_mod.groupby(['model'])[depvar] print(df_dep.describe(percentiles=[0.5]),"\n") # no error rate curve obtained for ibmqx2 with the ideal model, hence this exclusion: df_emp=df_mod.query("model == 'empirical'") df_ide=df_mod.query("model == 'ideal_sim'") #.reindex_like(df_emp,'nearest') # back to numpy arrays from pandas: print("paired data") print(np.asarray(df_emp[depvar])) print(np.asarray(df_ide[depvar]),"\n") print(stats.ttest_rel(np.asarray(df_emp[depvar]),np.asarray(df_ide[depvar]))) print(stats.wilcoxon(np.asarray(df_emp[depvar]),np.asarray(df_ide[depvar])),"\n")
mitigation: yes metric: sqeuclidean variable: log_shots count mean std min 50% max model empirical 16.0 3.580581 0.316850 3.218876 3.510542 4.276666 ideal_sim 16.0 3.834544 0.520834 3.295837 3.701226 5.323010 paired data [3.29583687 3.4657359 3.21887582 3.21887582 3.40119738 3.66356165 3.36729583 3.21887582 3.8286414 3.80666249 3.4657359 3.61091791 3.55534806 4.27666612 4.11087386 3.78418963] [3.71357207 3.49650756 3.63758616 3.29583687 3.52636052 4.4543473 3.40119738 3.36729583 3.71357207 3.97029191 3.58351894 3.71357207 3.68887945 5.32300998 4.40671925 4.06044301] Ttest_relResult(statistic=-3.411743256395652, pvalue=0.0038635533717249343) WilcoxonResult(statistic=5.0, pvalue=0.00030517578125) mitigation: yes metric: sqeuclidean variable: log_min_r_shots count mean std min 50% max model empirical 16.0 3.342394 0.323171 2.944439 3.295151 4.077537 ideal_sim 16.0 3.583064 0.533690 3.044522 3.417592 5.056246 paired data [3.09104245 3.17805383 2.94443898 2.94443898 3.13549422 3.4339872 3.09104245 3.04452244 3.49650756 3.63758616 3.25809654 3.40119738 3.33220451 4.07753744 3.8286414 3.58351894] [3.21887582 3.13549422 3.33220451 3.04452244 3.17805383 4.09434456 3.17805383 3.17805383 3.4339872 3.8501476 3.40119738 3.55534806 3.52636052 5.05624581 4.2341065 3.91202301] Ttest_relResult(statistic=-3.593874266151202, pvalue=0.0026588536103780047) WilcoxonResult(statistic=4.5, pvalue=0.000213623046875)
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Statsmodels Ordinary Least Squares (OLS) Analysis
print("mitigation:",mit," metric:",s_metric ) stat_model = ols("log_shots ~ model + id_gates + device + fidelity + QV" , df_mod).fit() print(stat_model.summary()) print("mitigation:",mit," metric:",s_metric ) stat_model = ols("log_min_r_shots ~ model + id_gates + device + fidelity+QV ", df_mod).fit() print(stat_model.summary())
mitigation: yes metric: sqeuclidean OLS Regression Results ============================================================================== Dep. Variable: log_min_r_shots R-squared: 0.842 Model: OLS Adj. R-squared: 0.778 Method: Least Squares F-statistic: 13.08 Date: Tue, 02 Mar 2021 Prob (F-statistic): 6.19e-07 Time: 17:22:30 Log-Likelihood: 10.164 No. Observations: 32 AIC: -0.3273 Df Residuals: 22 BIC: 14.33 Df Model: 9 Covariance Type: nonrobust =========================================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------------------- Intercept 1.5308 0.056 27.340 0.000 1.415 1.647 model[T.ideal_sim] 0.2407 0.075 3.205 0.004 0.085 0.396 device[T.ibmq_belem] 0.3004 0.104 2.899 0.008 0.085 0.515 device[T.ibmq_lima] 0.6567 0.094 7.013 0.000 0.462 0.851 device[T.ibmq_ourense] 0.3122 0.093 3.366 0.003 0.120 0.505 device[T.ibmq_quito] 1.0387 0.104 10.020 0.000 0.824 1.254 device[T.ibmq_santiago] -0.1285 0.150 -0.855 0.402 -0.440 0.183 device[T.ibmq_valencia] 0.1604 0.104 1.536 0.139 -0.056 0.377 device[T.ibmq_vigo] 0.1078 0.104 1.040 0.310 -0.107 0.323 id_gates 0.0020 0.000 6.959 0.000 0.001 0.003 fidelity 1.1563 0.041 27.935 0.000 1.070 1.242 QV 0.0152 0.005 2.796 0.011 0.004 0.026 ============================================================================== Omnibus: 2.598 Durbin-Watson: 1.933 Prob(Omnibus): 0.273 Jarque-Bera (JB): 1.371 Skew: 0.412 Prob(JB): 0.504 Kurtosis: 3.592 Cond. No. 5.00e+18 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The smallest eigenvalue is 4.22e-32. This might indicate that there are strong multicollinearity problems or that the design matrix is singular.
Apache-2.0
3_2_preliminary_statistics_project_2.ipynb
cnktysz/qiskit-quantum-state-classifier
Continuous Control

---

1. Import the Necessary Packages
from unityagents import UnityEnvironment
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline

from ddpg_agent import Agent
_____no_output_____
MIT
Projects/p2_continuous-control/.ipynb_checkpoints/DDPG_with_20_Agent-checkpoint.ipynb
Clara-YR/Udacity-DRL
2. Instantiate the Environment and 20 Agents
# initialize the environment
env = UnityEnvironment(file_name='./Reacher_20.app')

# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# reset the environment
env_info = env.reset(train_mode=True)[brain_name]

# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)

# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)

# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])

# initialize agents
agent = Agent(state_size=33, action_size=4, random_seed=2, num_agents=20)
_____no_output_____
MIT
Projects/p2_continuous-control/.ipynb_checkpoints/DDPG_with_20_Agent-checkpoint.ipynb
Clara-YR/Udacity-DRL
3. Train the 20 Agents with DDPG

To amend the `ddpg` code to work for 20 agents instead of 1, here are the modifications I made in `ddpg_agent.py`:

- With each step, each agent adds its experience to a replay buffer shared by all agents (line 61-61).
- At first, the (local) actor and critic networks are updated 20 times in a row (one for each agent), using 20 different samples from the replay buffer, as below:

```
def step(self, states, actions, rewards, next_states, dones):
    ...
    # Learn (with each agent), if enough samples are available in memory
    if len(self.memory) > BATCH_SIZE:
        for i in range(self.num_agents):
            experiences = self.memory.sample()
            self.learn(experiences, GAMMA)
```

Then, in order to be less aggressive with the number of updates per time step, instead of updating the actor and critic networks __20 times__ at __every timestep__, we amended the code to update the networks __10 times__ after every __20 timesteps__ (line )
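To make that schedule concrete, here is a tiny standalone sketch of the update cadence (an illustration only; the real logic lives in `ddpg_agent.py`, which is not shown here, and the counter below just stands in for sampling a batch and calling `learn`):

```python
# 10 learning updates after every 20 environment timesteps
UPDATE_EVERY, UPDATE_TIMES = 20, 10

updates = 0
for t in range(1, 1001):               # 1000 timesteps of one episode
    if t % UPDATE_EVERY == 0:
        for _ in range(UPDATE_TIMES):
            updates += 1               # here the agent would sample a batch and learn
print(updates)                         # 500 updates instead of 20 * 1000 = 20000
```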
def ddpg(n_episodes=1000, max_t=300, print_every=100, num_agents=1):
    """
    Params
    ======
        n_episodes (int): maximum number of training episodes
        max_t (int): maximum number of timesteps per episode
        print_every (int): episodes interval to print training scores
        num_agents (int): the number of agents
    """
    scores_deque = deque(maxlen=print_every)
    scores = []
    for i_episode in range(1, n_episodes+1):
        # reset the environment
        env_info = env.reset(train_mode=True)[brain_name]
        # get the current state (for each agent)
        states = env_info.vector_observations
        # initialize the scores (for each agent) of the current episode
        scores_i = np.zeros(num_agents)
        for t in range(max_t):
            # select an action (for each agent)
            actions = agent.act(states)
            # send action to the environment
            env_info = env.step(actions)[brain_name]
            # get the next_state, reward, done (for each agent)
            next_states = env_info.vector_observations
            rewards = env_info.rewards
            dones = env_info.local_done
            # store experience and train the agent
            agent.step(states, actions, rewards, next_states, dones, update_every=20, update_times=10)
            # roll over state to next time step
            states = next_states
            # update the score
            scores_i += rewards
            # exit loop if episode finished
            if np.any(dones):
                break
        # save average of the most recent scores
        scores_deque.append(scores_i.mean())
        scores.append(scores_i.mean())
        print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="")
        # the original saved the actor to 'd'; 'checkpoint_actor.pth' is assumed, matching the load below
        torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
        torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
        if i_episode % print_every == 0:
            print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
    return scores

scores = ddpg(n_episodes=200, max_t=1000, print_every=20, num_agents=20)

fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
plt.savefig('ddpg_20_agents.png')

#env.close()

# load Actor-Critic policy (load_state_dict is required; assigning to state_dict() is a syntax error)
agent.actor_local.load_state_dict(torch.load('checkpoint_actor.pth'))
agent.critic_local.load_state_dict(torch.load('checkpoint_critic.pth'))

scores = ddpg(n_episodes=100, max_t=300, print_every=10, num_agents=20)

fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
plt.savefig('ddpg_20_agents_101to200.png')
_____no_output_____
MIT
Projects/p2_continuous-control/.ipynb_checkpoints/DDPG_with_20_Agent-checkpoint.ipynb
Clara-YR/Udacity-DRL
**OPTICS Algorithm**

Ordering Points To Identify the Clustering Structure (OPTICS) is a clustering algorithm which locates regions of high density that are separated from one another by regions of low density. In Python it is available in the Scikit-Learn library.

Parameters:

**Reachability Distance** - It is defined with respect to another data point q. The reachability distance between a point p and q is the maximum of the Core Distance of q and the Euclidean distance (or some other distance metric) between p and q. Note that the reachability distance is not defined if q is not a core point. (A compact formula restatement is given after the imports below.)

**Core Distance** - It is the minimum value of the radius required to classify a given point as a core point. If the given point is not a core point, then its Core Distance is undefined.

OPTICS Pointers

- Produces a special ordering of the database with respect to its density-based clustering structure. This cluster ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings.
- Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure.
- Can be represented graphically or using visualization techniques.

In this file, we will showcase how a basic OPTICS algorithm works in Python, on a randomly created dataset.

Importing Libraries
import matplotlib.pyplot as plt                  # Used for plotting graphs
from sklearn.datasets import make_blobs          # Used for creating a random dataset
from sklearn.cluster import OPTICS               # OPTICS is provided by Scikit-Learn
from sklearn.metrics import silhouette_score     # silhouette score for checking accuracy
import numpy as np
import pandas as pd
_____no_output_____
MIT
Datascience_With_Python/Machine Learning/Algorithms/Optics Clustering Algorithm/optics_clustering_algorithm.ipynb
vishnupriya129/winter-of-contributing
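Before generating data, the two distances defined above can be restated compactly (a sketch following the usual textbook convention; $d(p, q)$ is the chosen distance metric, $N_{\varepsilon}(p)$ the $\varepsilon$-neighborhood of $p$, and $MinPts$ the minimum number of samples):

$$
\mathrm{core\text{-}dist}_{\varepsilon,\,MinPts}(p) =
\begin{cases}
\text{undefined}, & \text{if } |N_{\varepsilon}(p)| < MinPts\\
d\big(p,\ \text{its } MinPts\text{-th nearest neighbor}\big), & \text{otherwise}
\end{cases}
$$

$$
\mathrm{reach\text{-}dist}_{\varepsilon,\,MinPts}(p, q) = \max\big(\mathrm{core\text{-}dist}_{\varepsilon,\,MinPts}(q),\ d(p, q)\big)
$$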
Generating Data
data, clusters = make_blobs(
    n_samples=800, centers=4, cluster_std=0.3, random_state=0
)

# Originally created plot with data
plt.scatter(data[:,0], data[:,1])
plt.show()
_____no_output_____
MIT
Datascience_With_Python/Machine Learning/Algorithms/Optics Clustering Algorithm/optics_clustering_algorithm.ipynb
vishnupriya129/winter-of-contributing
Model Creation
# Creating OPTICS Model
optics_model = OPTICS(min_samples=50, xi=.05, min_cluster_size=.05)
# min_samples : The number of samples in a neighborhood for a point to be considered as a core point.
# xi : Determines the minimum steepness on the reachability plot that constitutes a cluster boundary
# min_cluster_size : Minimum number of samples in an OPTICS cluster, expressed as an absolute number or a fraction of the number of samples

pred = optics_model.fit(data)                 # Fitting the data
optics_labels = optics_model.labels_          # storing labels predicted by our model
no_clusters = len(np.unique(optics_labels))   # determining the no. of unique clusters and noise our model predicted
no_noise = np.sum(np.array(optics_labels) == -1, axis=0)
_____no_output_____
MIT
Datascience_With_Python/Machine Learning/Algorithms/Optics Clustering Algorithm/optics_clustering_algorithm.ipynb
vishnupriya129/winter-of-contributing
Plotting our observations
print('Estimated no. of clusters: %d' % no_clusters)
print('Estimated no. of noise points: %d' % no_noise)

colors = list(map(lambda x: '#aa2211' if x == 1 else '#120416', optics_labels))
plt.scatter(data[:,0], data[:,1], c=colors, marker="o", picker=True)
plt.title(f'OPTICS clustering')
plt.xlabel('Axis X[0]')
plt.ylabel('Axis X[1]')
plt.show()

# Generate reachability plot, this helps understand the working of our Model in OPTICS
reachability = optics_model.reachability_[optics_model.ordering_]
plt.plot(reachability)
plt.title('Reachability plot')
plt.show()
_____no_output_____
MIT
Datascience_With_Python/Machine Learning/Algorithms/Optics Clustering Algorithm/optics_clustering_algorithm.ipynb
vishnupriya129/winter-of-contributing
Accuracy of OPTICS Clustering
OPTICS_score = silhouette_score(data, optics_labels)
OPTICS_score
_____no_output_____
MIT
Datascience_With_Python/Machine Learning/Algorithms/Optics Clustering Algorithm/optics_clustering_algorithm.ipynb
vishnupriya129/winter-of-contributing
My First Square Roots

This is my first notebook, where I am going to implement the Babylonian square root algorithm in Python.
variable = 6 variable a = q = 1.5 a 1.5**2 1.25**2 u*u=2 u*u = 2 u**2 = 2 s*s=33 q = 1 q a=[1.5] for i in range (10): next = a [i] + 2 a.append(next) a 2 a[0] a[2.3] a[0] a[5] a[0:5] plt.plot(a) plt.plot(a, 'o') plt.title("My First Sequence") b=[1.5] for i in range (10): next = b [i] * 2 b.append(next) plt. plot(b, 'o') plt.title ("My Second Sequence") b plt.plot(a,'--o') plt.plot(b, '--o') plt.title ("First and Second Sequence") a=[3] a a=[3] for i in range (7): next = a[i]+1 a.append(next) a=[3]+1 for i in range (7): next = a[i]/2 a.append(next) a=[3] a[0]+1 a a.append(next) a a=[3] a[0]+1 a.append(next) a a=[3] a=[3] for i in range(7): next = a[i]*1/2 a.append(next) a plt.plot(a) plt.title('Sequence a') plt.xlabel('x') plt.ylabel('y') b=[1/2] for i in range (7) next=b[i]+ 0.5**i b.append(next) b=[1/2] for i in range(7): next = b[i]+0.5**i a.append(next) b=[1/2] b b=[1/2] for i in range(70): next = b[i]+0.5**i b.append(next) b plt.plot(b) plt.title('Sequence b') plt.xlabel('x') plt.ylabel('y') a**2=[2] a=[2] root a a=[2] sqrt(2) import math math.sqrt(x) sqrt = x**1/2 x=[1.5] for i in range(10): next = (x[i]+2/x[i])/2 x.append(next) x plt.plot(x) plt.title('Sequence x') plt.xlabel('x') plt.ylabel('y') x=[10] for i in range(10): next = (x[i]+2/x[i])/2 x.append(next) x plt.plot(x) plt.title('Sequence x') plt.xlabel('x') plt.ylabel('y')
_____no_output_____
MIT
square_roots_intro.ipynb
izzetmert/lang_calc_2017
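For reference, here is a cleaned-up sketch of the Babylonian (Heron's) method explored in the scratch work above (an added illustration; the function name and defaults are my own): starting from any positive guess, repeatedly average the guess with `n / guess`.

```python
def babylonian_sqrt(n, guess=1.5, iterations=10):
    """Return the successive Babylonian estimates of sqrt(n)."""
    estimates = [guess]
    for _ in range(iterations):
        guess = (guess + n / guess) / 2    # average the guess with n / guess
        estimates.append(guess)
    return estimates

print(babylonian_sqrt(2)[-1])   # converges quickly to 1.41421356...
```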
Finding ProbabilitiesOver the centuries, there has been considerable philosophical debate about what probabilities are. Some people think that probabilities are relative frequencies; others think they are long run relative frequencies; still others think that probabilities are a subjective measure of their own personal degree of uncertainty.In this course, most probabilities will be relative frequencies, though many will have subjective interpretations. Regardless, the ways in which probabilities are calculated and combined are consistent across the different interpretations.By convention, probabilities are numbers between 0 and 1, or, equivalently, 0% and 100%. Impossible events have probability 0. Events that are certain have probability 1.Math is the main tool for finding probabilities exactly, though computers are useful for this purpose too. Simulation can provide excellent approximations, with high probability. In this section, we will informally develop a few simple rules that govern the calculation of probabilities. In subsequent sections we will return to simulations to approximate probabilities of complex events.We will use the standard notation $P(\mbox{event})$ to denote the probability that "event" happens, and we will use the words "chance" and "probability" interchangeably. When an Event Doesn't HappenIf the chance that event happens is 40%, then the chance that it doesn't happen is 60%. This natural calculation can be described in general as follows:$$P(\mbox{an event doesn't happen}) ~=~ 1 - P(\mbox{the event happens})$$ When All Outcomes are Equally LikelyIf you are rolling an ordinary die, a natural assumption is that all six faces are equally likely. Under this assumption, the probabilities of how one roll comes out can be easily calculated as a ratio. For example, the chance that the die shows an even number is$$\frac{\mbox{number of even faces}}{\mbox{number of all faces}}~=~ \frac{\\{2, 4, 6\}}{\\{1, 2, 3, 4, 5, 6\}}~=~ \frac{3}{6}$$Similarly,$$P(\mbox{die shows a multiple of 3}) ~=~\frac{\\{3, 6\}}{\\{1, 2, 3, 4, 5, 6\}}~=~ \frac{2}{6}$$ In general, **if all outcomes are equally likely**,$$P(\mbox{an event happens}) ~=~\frac{\\{\mbox{outcomes that make the event happen}\}}{\\{\mbox{all outcomes}\}}$$ Not all random phenomena are as simple as one roll of a die. The two main rules of probability, developed below, allow mathematicians to find probabilities even in complex situations. When Two Events Must Both HappenSuppose you have a box that contains three tickets: one red, one blue, and one green. Suppose you draw two tickets at random without replacement; that is, you shuffle the three tickets, draw one, shuffle the remaining two, and draw another from those two. What is the chance you get the green ticket first, followed by the red one?There are six possible pairs of colors: RB, BR, RG, GR, BG, GB (we've abbreviated the names of each color to just its first letter). All of these are equally likely by the sampling scheme, and only one of them (GR) makes the event happen. So$$P(\mbox{green first, then red}) ~=~ \frac{\\{\mbox{GR}\}}{\\{\mbox{RB, BR, RG, GR, BG, GB}\}} ~=~ \frac{1}{6}$$ But there is another way of arriving at the answer, by thinking about the event in two stages. First, the green ticket has to be drawn. That has chance $1/3$, which means that the green ticket is drawn first in about $1/3$ of all repetitions of the experiment. But that doesn't complete the event. 
*Among the 1/3 of repetitions when green is drawn first*, the red ticket has to be drawn next. That happens in about $1/2$ of those repetitions, and so:$$P(\mbox{green first, then red}) ~=~ \frac{1}{2} ~\mbox{of}~ \frac{1}{3}~=~ \frac{1}{6}$$This calculation is usually written "in chronological order," as follows.$$P(\mbox{green first, then red}) ~=~ \frac{1}{3} ~\times~ \frac{1}{2}~=~ \frac{1}{6}$$ The factor of $1/2$ is called " the conditional chance that the red ticket appears second, given that the green ticket appeared first."In general, we have the **multiplication rule**:$$P(\mbox{two events both happen})~=~ P(\mbox{one event happens}) \times P(\mbox{the other event happens, given that the first one happened})$$Thus, when there are two conditions โ€“ one event must happen, as well as another โ€“ the chance is *a fraction of a fraction*, which is smaller than either of the two component fractions. The more conditions that have to be satisfied, the less likely they are to all be satisfied. When an Event Can Happen in Two Different WaysSuppose instead we want the chance that one of the two tickets is green and the other red. This event doesn't specify the order in which the colors must appear. So they can appear in either order. A good way to tackle problems like this is to *partition* the event so that it can happen in exactly one of several different ways. The natural partition of "one green and one red" is: GR, RG. Each of GR and RG has chance $1/6$ by the calculation above. So you can calculate the chance of "one green and one red" by adding them up.$$P(\mbox{one green and one red}) ~=~ P(\mbox{GR}) + P(\mbox{RG}) ~=~ \frac{1}{6} + \frac{1}{6} ~=~ \frac{2}{6}$$In general, we have the **addition rule**:$$P(\mbox{an event happens}) ~=~P(\mbox{first way it can happen}) + P(\mbox{second way it can happen}) ~~~\mbox{}$$provided the event happens in exactly one of the two ways.Thus, when an event can happen in one of two different ways, the chance that it happens is a sum of chances, and hence bigger than the chance of either of the individual ways. The multiplication rule has a natural extension to more than two events, as we will see below. So also the addition rule has a natural extension to events that can happen in one of several different ways.We end the section with examples that use combinations of all these rules. At Least One SuccessData scientists often work with random samples from populations. A question that sometimes arises is about the likelihood that a particular individual in the population is selected to be in the sample. To work out the chance, that individual is called a "success," and the problem is to find the chance that the sample contains a success.To see how such chances might be calculated, we start with a simpler setting: tossing a coin two times.If you toss a coin twice, there are four equally likely outcomes: HH, HT, TH, and TT. We have abbreviated "Heads" to H and "Tails" to T. The chance of getting at least one head in two tosses is therefore 3/4.Another way of coming up with this answer is to work out what happens if you *don't* get at least one head. That is when both the tosses land tails. So$$P(\mbox{at least one head in two tosses}) ~=~ 1 - P(\mbox{both tails}) ~=~ 1 - \frac{1}{4}~=~ \frac{3}{4}$$Notice also that $$P(\mbox{both tails}) ~=~ \frac{1}{4} ~=~ \frac{1}{2} \cdot \frac{1}{2} ~=~ \left(\frac{1}{2}\right)^2$$by the multiplication rule.These two observations allow us to find the chance of at least one head in any given number of tosses. 
For example,$$P(\mbox{at least one head in 17 tosses}) ~=~ 1 - P(\mbox{all 17 are tails})~=~ 1 - \left(\frac{1}{2}\right)^{17}$$And now we are in a position to find the chance that the face with six spots comes up at least once in rolls of a die. For example,$$P(\mbox{a single roll is not 6}) ~=~ 1 - P(6)~=~ \frac{5}{6}$$Therefore,$$P(\mbox{at least one 6 in two rolls}) ~=~ 1 - P(\mbox{both rolls are not 6})~=~ 1 - \left(\frac{5}{6}\right)^2$$and$$P(\mbox{at least one 6 in 17 rolls})~=~ 1 - \left(\frac{5}{6}\right)^{17}$$The table below shows these probabilities as the number of rolls increases from 1 to 50.
rolls = np.arange(1, 51, 1) results = Table().with_columns( 'Rolls', rolls, 'Chance of at least one 6', 1 - (5/6)**rolls ) results
_____no_output_____
MIT
Mathematics/Statistics/Statistics and Probability Python Notebooks/Computational and Inferential Thinking - The Foundations of Data Science (book)/Notebooks - by chapter/9. Randomness and Probabiltities/5. Finding_Probabilities.ipynb
okara83/Becoming-a-Data-Scientist
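As a quick empirical check (an added sketch, not from the original text), the same probabilities can be approximated by simulation and compared with $1 - (5/6)^k$:

```python
import numpy as np

np.random.seed(0)
trials = 100000
for k in [1, 10, 50]:
    rolls_sim = np.random.randint(1, 7, size=(trials, k))    # 'trials' experiments of k rolls each
    estimate = np.mean((rolls_sim == 6).any(axis=1))          # fraction with at least one 6
    print(k, round(estimate, 4), round(1 - (5/6)**k, 4))
```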
The chance that a 6 appears at least once rises rapidly as the number of rolls increases.
results.scatter('Rolls')
_____no_output_____
MIT
Mathematics/Statistics/Statistics and Probability Python Notebooks/Computational and Inferential Thinking - The Foundations of Data Science (book)/Notebooks - by chapter/9. Randomness and Probabiltities/5. Finding_Probabilities.ipynb
okara83/Becoming-a-Data-Scientist
In 50 rolls, you are almost certain to get at least one 6.
results.where('Rolls', are.equal_to(50))
_____no_output_____
MIT
Mathematics/Statistics/Statistics and Probability Python Notebooks/Computational and Inferential Thinking - The Foundations of Data Science (book)/Notebooks - by chapter/9. Randomness and Probabiltities/5. Finding_Probabilities.ipynb
okara83/Becoming-a-Data-Scientist
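As a further sanity check on the table and plot above, a short Monte Carlo simulation (my own sketch, not from the book) estimates the chance of at least one 6 in 50 rolls and compares it with the exact value 1 - (5/6)**50; both are essentially 1.

import numpy as np
np.random.seed(42)                                        # arbitrary seed, only for reproducibility
repetitions = 100000
rolls = np.random.randint(1, 7, size=(repetitions, 50))   # 50 die rolls per repetition
estimate = (rolls == 6).any(axis=1).mean()
exact = 1 - (5/6) ** 50
print(estimate, exact)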
Chapter 7 1. What makes dictionaries different from sequence type containers like lists and tuples is the way the data are stored and accessed. 2. Sequence types use numeric keys only (numbered sequentially as indexed offsets from the beginning of the sequence). Mapping types may use most other object types as keys; strings are the most common. 3. Hash table: hash tables store each piece of data, called a value, based on an associated data item, called a key. They generally provide good performance because lookups occur fairly quickly once you have a key. 4. A dictionary is an unordered collection of data. The only kind of ordering you can obtain is by taking either a dictionary's set of keys or values. The keys() or values() method returns a list, which is sortable. You can also call items() to get a list of keys and values as tuple pairs and sort that. Dictionaries themselves have no implicit ordering because they are hashes. 5. Creating a dictionary: the syntax of a dictionary entry is key:value, and dictionary entries are enclosed in braces ( { } ). a. A dictionary can be created by using {} with key-value pairs
adict={} bdict={"k":"v"} bdict
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
b. Another way to create a dictionary is to use the dict() factory function
fdict = dict((['x', 1], ['y', 2])) cdict=dict([("k1",2),("k2",3)])
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
c. Dictionaries may also be created with fromkeys(), a very convenient built-in method for building a "default" dictionary whose elements all have the same value (defaulting to None if not given):
ddict = {}.fromkeys(('x', 'y'), -1) ddict ddict={}.fromkeys(('x','y'),(2,3)) ddict
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
6. How to Access Values in Dictionaries: to traverse a dictionary (normally by key), you only need to cycle through its keys, like this:
dict2 = {'name': 'earth', 'port': 80} for key in dict2.keys(): print("key=%s,value=%s" %(key,dict2[key]))
key=name,value=earth key=port,value=80
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
a. Beginning with Python 2.2, you no longer need to use the keys() method to extract a list of keys to loop over. Iterators were created to simplify accessing of sequence-like objects such as dictionaries and files. Using just the dictionary name itself will cause an iterator over that dictionary to be used in a for loop:
dict2 = {'name': 'earth', 'port': 80} for key in dict2: print("key=%s,value=%s" %(key,dict2[key]))
key=name,value=earth key=port,value=80
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
b. To access individual dictionary elements, you use the familiar square brackets along with the key to obtain its value:
dict2['name']
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
b. If we attempt to access a data item with a key that is not part of the dictionary, we get an error:
dict2['service'] # raises KeyError: 'service' is not a key of dict2
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
c. The best way to check if a dictionary has a specific key is to use the in or not in operators
'service' in dict2
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
d. Numbers can be used as dictionary keys:
dict3 = {3.2: 'xyz', 1: 'abc', '1': 3.14159}
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
e. Not allowing keys to change during execution makes sense: keys must be hashable, so numbers and strings are fine, but lists and other dictionaries are not. 7. Updating and adding dictionary entries
dict2['port'] = 6969 # update existing entry or add new entry
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
a. The string format operator (%) has a dictionary-specific form: named format specifiers are looked up by key in the dictionary given on the right-hand side
print('host %(name)s is running on port %(port)d' % dict2) dict2
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
b. You may also add the contents of an entire dictionary to another dictionary by using the update() built-in method. 8. Removing dictionary elements: a. use the del statement to delete individual entries or an entire dictionary
del dict2['name'] # remove entry with key 'name' dict2.pop('port') # remove and return entry with key adict.clear() # remove all entries in adict del bdict # delete entire dictionary
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
Note: dict() is now a type and factory function; overriding it may cause you headaches and potential bugs. Do NOT create variables with built-in names like: dict, list, file, bool, str, input, or len! 9. Dictionaries will work with all of the standard type operators but do not support operations such as concatenation and repetition. a. Dictionary Key-Lookup Operator ( [ ] ): the key-lookup operator is used for both assigning values to and retrieving values from a dictionary
adict={"k":2} adict["k"] = 3 # set value in dictionary. Dictionary Key-Lookup Operator ( [ ] ) cdict = {'fruits':1} ddict = {'fruits':1}
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
10. dict() function a. The dict() factory function is used for creating dictionaries. If no argument is provided, then an empty dictionary is created. The fun happens when a container object is passed in as an argument to dict(). dict()* dict() -> new empty dictionary* dict(mapping) -> new dictionary initialized from a mapping object's (key, value) pairs* dict(iterable) -> new dictionary initialized as if via: d = {} for k, v in iterable: d[k] = v * dict(**kwargs) -> new dictionary initialized with the name=value pairs in the keyword argument list. For example: dict(one=1, two=2)
dict() dict({'k':1,'k2':2}) dict([(1,2),(2,3)]) dict(((1,2),(2,3))) dict(([2,3],[3,4])) dict(zip(('x','y'),(1,2)))
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
11. If it is a(nother) mapping object, i.e., a dictionary, then dict() will just create a new dictionary and copy the contents of the existing one. The new dictionary is actually a shallow copy of the original one, and the same results can be accomplished by using a dictionary's copy() built-in method. Because creating a new dictionary from an existing one using dict() is measurably slower than using copy(), we recommend using the latter. 12. It is possible to call dict() with an existing dictionary or with a keyword-argument dictionary (** in a function call)
dict7=dict(x=1, y=2) dict8 = dict(x=1, y=2) dict9 = dict(**dict8) # not a realistic example, better use copy() dict10=dict(dict7) dict9 dict10 dict9 = dict8.copy() # better than dict9 = dict(**dict8)
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
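To make the shallow-copy point above concrete, here is a small example of my own (not from the original notes): a mutable value is shared between the original dictionary and its copy.

orig = {'nums': [1, 2, 3]}
shallow = orig.copy()            # same effect as dict(orig): a shallow copy
shallow['nums'].append(4)        # mutate the shared list through the copy
print(orig['nums'])              # [1, 2, 3, 4] -- the original sees the change too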
13. The len() BIF is flexible. It works with sequences, mapping types, and sets
dict2 = {'name': 'earth', 'port': 80} len(dict2)
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
14. In older versions of Python, when referencing dict2, the items could be listed in a different order from the one in which they were entered, because dictionaries had no guaranteed ordering. Since Python 3.7, dictionaries preserve insertion order.
dict2 = {'name': 'earth', 'port': 80} dict2
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
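A quick check of the insertion-order behaviour described above (my own example; requires Python 3.7 or later):

d = {}
d['name'] = 'earth'
d['port'] = 80
print(list(d))                   # ['name', 'port'] -- keys come back in insertion order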
15. The hash() BIF is not really meant to be used for dictionaries per se, but it can be used to determine whether an object is fit to be a dictionary key (or not).* Given an object as its argument, hash() returns the hash value of that object.* Numeric values that are equal hash to the same value.* A TypeError will occur if an unhashable type is given as the argument to hash()
hash([]) dict2={} dict2[{}]="foo"
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
16. Mapping Type Built-in Methods* has_key() and its replacements in and not in* keys(), which returns a list of the dictionary's keys, * values(), which returns a list of the dictionary's values, and* items(), which returns a list of (key, value) tuple pairs.
dict2={"k1":1,"k2":2,"k3":3} for eachKey in dict2.keys(): print(eachKey,dict2[eachKey])
k1 1 k2 2 k3 3
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
* dict.fromkeys(seq, val=None): create a dict where all the keys in seq have the same value val
{}.fromkeys(("k1","k2","3"),None) # fromkeys(seq, val=None)
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
* dict.get(key, default=None): return the value corresponding to key, or the default (None unless specified) if key is not in the dictionary * dict.setdefault(key, default=None): similar to get(), but sets dict[key]=default if key is not already in dict * dict.update(dict2): add the key-value pairs of dict2 to dict * You can call the keys() method to get the list of a dictionary's keys and then call that list's sort() method to get a sorted list to iterate over. Alternatively, sorted(), made especially for iterables, returns the keys in sorted order:
for eachKey in sorted(dict2): print(eachKey,dict2[eachKey])
k1 1 k2 2 k3 3
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
* The update() method can be used to add the contents of one dictionary to another. Any existing entries with duplicate keys will be overridden by the new incoming entries. Nonexistent ones will be added. All entries in a dictionary can be removed with the clear() method.
dict2 dict3={"k1":"ka","kb":"kb"} dict2.update(dict3) dict2 dict3.clear() dict3 del dict3 dict3
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
* The copy() method simply returns a (shallow) copy of a dictionary. * The get() method is similar to using the key-lookup operator ( [ ] ), but it allows you to provide a default value to be returned if a key does not exist.
dict2 dict4=dict2.copy() dict4 dict4.get('xgag') type(dict4.get("agasg")) type(dict4.get("agasg", "no such key")) dict2
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
* If the dictionary does not have the key you are seeking, you want to set a default value and then return it. That is precisely what setdefault() does
dict2.setdefault('kk','k1') dict2
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
* In Python 2, the keys(), items(), and values() methods return lists. This can be unwieldy if such data collections are large, which is the main reason why iteritems(), iterkeys(), and itervalues() were added to Python. * In Python 3, the iter*() names are no longer supported; the new keys(), values(), and items() all return views. * When key collisions are detected (meaning duplicate keys are encountered during assignment), the last (most recent) assignment wins.
dict1 = {'foo':789, 'foo': 'xyz'} dict1 dict1['foo']
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
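A minimal sketch of my own (not from the notes) of the view objects mentioned above: in Python 3, keys() returns a dynamic view that reflects later changes to the dictionary.

d = {'a': 1, 'b': 2}
keys_view = d.keys()             # a view object, not a list
print(list(keys_view))           # ['a', 'b']
d['c'] = 3                       # modify the dictionary after taking the view
print(list(keys_view))           # ['a', 'b', 'c'] -- the view sees the new key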
* Most Python objects can serve as keys; however, they have to be hashable objects: mutable types such as lists and dictionaries are disallowed because they cannot be hashed. * All immutable types are hashable. * Numbers of the same value represent the same key. In other words, the integer 1 and the float 1.0 hash to the same value, meaning that they are identical as keys. * There are some mutable objects that are (barely) hashable, so they are eligible as keys, but there are very few of them. One example would be a class that has implemented the __hash__() special method. In the end, an immutable value is used anyway, as __hash__() must return an integer. * Why must keys be hashable? The hash function used by the interpreter to calculate where to store your data is based on the value of your key. If the key were a mutable object, its value could be changed. If a key changes, the hash function will map to a different place to store the data. If that were the case, then the hash function could never reliably store or retrieve the associated value. * Tuples are valid keys only if they contain only immutable elements like numbers and strings. 19. A set object is an unordered collection of distinct values that are hashable. a. Like other container types, sets support membership testing via the in and not in operators, cardinality using the len() BIF, and iteration over the set membership using for loops. However, since sets are unordered, you do not index into or slice them, and there are no keys used to access a value. b. There are two different types of sets available, mutable (set) and immutable (frozenset). c. Note that mutable sets are not hashable and thus cannot be used as either a dictionary key or as an element of another set. d. In Python 2, sets were also available via the sets module and accessed through its ImmutableSet and Set classes. e. Sets can be created using their factory functions set() and frozenset():
s1=set('asgag') s2=frozenset('sbag') len(s1) type(s1) type(s2) len(s2)
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
f. iterate over set or check if an item is a member of a set
'k' in s1
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
g. update set
s1.add('z') s1 s1.update('abc') s1 s1.remove('a') s1
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
h. As mentioned before, only mutable sets can be updated. Any attempt at such operations on immutable sets is met with an exception
s2.add('c')
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
i. mixed set type operation
type(s1|s2) # set | frozenset, mix operation s3=frozenset('agg') #frozenset type(s2|s3)
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
j. update mutable set: (Union) Update ( |= )
s=set('abc') s1=set("123") s |=s1 s
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
k. The retention (or intersection update) operation keeps only the existing set members that are also elements of the other set. The method equivalent is intersection_update()
s=set('abc') s1=set('ab') s &=s1 s
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
l. The difference update operation returns a set whose elements are members of the original set after removing elements that are (also) members of the other set. The method equivalent is difference_update().
s = set('cheeseshop') s u = frozenset(s) s -= set('shop') s
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
m. The symmetric difference update operation returns a set whose members are either elements of the original set or of the other set, but not both. The method equivalent is symmetric_difference_update()
s=set('cheeseshop') s u=set('bookshop') u s ^=u s vari='abc' set(vari) # vari must be iterable
_____no_output_____
Apache-2.0
Chapter7.ipynb
yangzhou95/notes
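For reference, here is a small sketch of my own (not from the notes) confirming that each in-place set operator used above matches its named method equivalent.

s = set('abc'); s |= set('123')
t = set('abc'); t.update(set('123'))                                    # union update
assert s == t
s = set('abc'); s &= set('ab')
t = set('abc'); t.intersection_update(set('ab'))                        # retention / intersection update
assert s == t
s = set('cheeseshop'); s -= set('shop')
t = set('cheeseshop'); t.difference_update(set('shop'))                 # difference update
assert s == t
s = set('cheeseshop'); s ^= set('bookshop')
t = set('cheeseshop'); t.symmetric_difference_update(set('bookshop'))   # symmetric difference update
assert s == t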
Multiple Linear Regression with Normalized Data
# Importing the libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") # fix_yahoo_finance is used to fetch data import fix_yahoo_finance as yf yf.pdr_override() # input symbol = 'AMD' start = '2014-01-01' end = '2018-08-27' # Read data dataset = yf.download(symbol,start,end) # View columns dataset.head() X = dataset.iloc[ : , 0:4].values Y = np.asanyarray(dataset[['Adj Close']]) from sklearn import preprocessing # normalize the data attributes normalized_X = preprocessing.normalize(X) X = normalized_X[: , 1:] # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 0) from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(X_train, Y_train) y_pred = regressor.predict(X_test) from sklearn.metrics import explained_variance_score, mean_absolute_error, mean_squared_error, r2_score ex_var_score = explained_variance_score(Y_test, y_pred) m_absolute_error = mean_absolute_error(Y_test, y_pred) m_squared_error = mean_squared_error(Y_test, y_pred) r_2_score = r2_score(Y_test, y_pred) print("Explained Variance Score: "+str(ex_var_score)) print("Mean Absolute Error "+str(m_absolute_error)) print("Mean Squared Error "+str(m_squared_error)) print("R Squared Error "+str(r_2_score)) print ('Coefficients: ', regressor.coef_) print("Residual sum of squares: %.2f" % np.mean((y_pred - Y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regressor.score(X_test, y_pred)) print('Multiple Linear Score:', regressor.score(X_test, y_pred))
Multiple Linear Score: 0.0145752513278
MIT
Stock_Algorithms/Multiple_Linear_Regression_with_Normalize_Data.ipynb
NTForked-ML/Deep-Learning-Machine-Learning-Stock
Chapter 7. Categorizing Text Documents - (4) Transfer learning with the full IMDB dataset - Unlike the earlier transfer-learning exercise, this one uses the entire IMDB movie-review dataset and raises the number of sentences from 10 to 20 - Download the IMDB movie-review data and extract it into the data directory - Download: http://ai.stanford.edu/~amaas/data/sentiment/ - Save path: data/aclImdb
import os import config from dataloader.loader import Loader from preprocessing.utils import Preprocess, remove_empty_docs from dataloader.embeddings import GloVe from model.cnn_document_model import DocumentModel, TrainingParameters from keras.callbacks import ModelCheckpoint, EarlyStopping import numpy as np
Using TensorFlow backend.
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Set the training parameters
# Create the directory where the trained model will be saved if not os.path.exists(os.path.join(config.MODEL_DIR, 'imdb')): os.makedirs(os.path.join(config.MODEL_DIR, 'imdb')) # Set the training parameters train_params = TrainingParameters('imdb_transfer_tanh_activation', model_file_path = config.MODEL_DIR+ '/imdb/full_model_10.hdf5', model_hyper_parameters = config.MODEL_DIR+ '/imdb/full_model_10.json', model_train_parameters = config.MODEL_DIR+ '/imdb/full_model_10_meta.json', num_epochs=30, batch_size=128)
_____no_output_____
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Load the IMDB dataset
# Load the downloaded IMDB data: use the entire training set train_df = Loader.load_imdb_data(directory = 'train') # train_df = train_df.sample(frac=0.05, random_state = train_params.seed) print(f'train_df.shape : {train_df.shape}') test_df = Loader.load_imdb_data(directory = 'test') print(f'test_df.shape : {test_df.shape}') # Extract the text data and labels corpus = train_df['review'].tolist() target = train_df['sentiment'].tolist() corpus, target = remove_empty_docs(corpus, target) print(f'corpus size : {len(corpus)}') print(f'target size : {len(target)}')
train_df.shape : (25000, 2) test_df.shape : (25000, 2) corpus size : 25000 target size : 25000
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Create the index sequences
# Unlike the earlier transfer-learning exercise, raise the number of sentences from 10 to 20 Preprocess.NUM_SENTENCES = 20 # Convert the training set to index sequences preprocessor = Preprocess(corpus=corpus) corpus_to_seq = preprocessor.fit() print(f'corpus_to_seq size : {len(corpus_to_seq)}') print(f'corpus_to_seq[0] size : {len(corpus_to_seq[0])}') # Convert the test set to index sequences test_corpus = test_df['review'].tolist() test_target = test_df['sentiment'].tolist() test_corpus, test_target = remove_empty_docs(test_corpus, test_target) test_corpus_to_seq = preprocessor.transform(test_corpus) print(f'test_corpus_to_seq size : {len(test_corpus_to_seq)}') print(f'test_corpus_to_seq[0] size : {len(test_corpus_to_seq[0])}') # Prepare the training and test sets x_train = np.array(corpus_to_seq) x_test = np.array(test_corpus_to_seq) y_train = np.array(target) y_test = np.array(test_target) print(f'x_train.shape : {x_train.shape}') print(f'y_train.shape : {y_train.shape}') print(f'x_test.shape : {x_test.shape}') print(f'y_test.shape : {y_test.shape}')
x_train.shape : (25000, 600) y_train.shape : (25000,) x_test.shape : (25000, 600) y_test.shape : (25000,)
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Initialize the GloVe embeddings
# Initialize the GloVe embeddings using the pretrained glove.6B.50d.txt vectors glove = GloVe(50) initial_embeddings = glove.get_embedding(preprocessor.word_index) print(f'initial_embeddings.shape : {initial_embeddings.shape}')
Reading 50 dim GloVe vectors Found 400000 word vectors. words not found in embeddings: 499 initial_embeddings.shape : (28656, 50)
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Load the trained model - Load the CNN model trained on the Amazon review data in HandsOn03. - Load the model with the DocumentModel class's load_model and fetch the trained weights with load_model_weights. - Then update the GloVe initial embeddings with the GloVe.update_embeddings function.
# Load the model hyperparameters model_json_path = os.path.join(config.MODEL_DIR, 'amazonreviews/model_06.json') amazon_review_model = DocumentModel.load_model(model_json_path) # Load the model weights model_hdf5_path = os.path.join(config.MODEL_DIR, 'amazonreviews/model_06.hdf5') amazon_review_model.load_model_weights(model_hdf5_path) # Extract the model's embedding layer weights learned_embeddings = amazon_review_model.get_classification_model().get_layer('imdb_embedding').get_weights()[0] print(f'learned_embeddings size : {len(learned_embeddings)}') # Update the existing GloVe model with the learned embedding matrix glove.update_embeddings(preprocessor.word_index, np.array(learned_embeddings), amazon_review_model.word_index) # Get the updated embeddings initial_embeddings = glove.get_embedding(preprocessor.word_index)
learned_embeddings size : 43197 23629 words are updated out of 28654
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Build the IMDB transfer-learning model
# Build the classification model: it takes IMDB review data as input and performs binary classification imdb_model = DocumentModel(vocab_size=preprocessor.get_vocab_size(), word_index = preprocessor.word_index, num_sentences=Preprocess.NUM_SENTENCES, embedding_weights=initial_embeddings, embedding_regularizer_l2 = 0.0, conv_activation = 'tanh', train_embedding = True, # train the embedding layer weights learn_word_conv = False, # do not train the word-level conv layer weights learn_sent_conv = False, # do not train the sentence-level conv layer weights hidden_dims=64, input_dropout=0.1, hidden_layer_kernel_regularizer=0.01, final_layer_kernel_regularizer=0.01) # Update weights: in the imdb_model just created, replace the weights of each of the following layers with the weights loaded above for l_name in ['word_conv','sentence_conv','hidden_0', 'final']: new_weights = amazon_review_model.get_classification_model().get_layer(l_name).get_weights() imdb_model.get_classification_model().get_layer(l_name).set_weights(weights=new_weights)
Vocab Size = 28656 and the index of vocabulary words passed has 28654 words
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
Train and evaluate the model
# Compile the model imdb_model.get_classification_model().compile(loss="binary_crossentropy", optimizer='rmsprop', metrics=["accuracy"]) # callback (1) - checkpointing checkpointer = ModelCheckpoint(filepath=train_params.model_file_path, verbose=1, save_best_only=True, save_weights_only=True) # callback (2) - early stopping early_stop = EarlyStopping(patience=2) # Start training imdb_model.get_classification_model().fit(x_train, y_train, batch_size=train_params.batch_size, epochs=train_params.num_epochs, verbose=2, validation_split=0.01, callbacks=[checkpointer]) # Save the model imdb_model._save_model(train_params.model_hyper_parameters) train_params.save() # Evaluate the model imdb_model.get_classification_model().evaluate(x_test, y_test, batch_size=train_params.batch_size*10, verbose=2)
_____no_output_____
MIT
Chapter07/HandsOn-04_Transfer_with_IMDB_full_model.ipynb
wikibook/transfer-learning
**Basic chatbot**
import ast from google.colab import drive questions = [] answers = [] drive.mount("/content/drive") with open("/content/drive/My Drive/data/chatbot/qa_Electronics.json") as f: for line in f: data = ast.literal_eval(line) questions.append(data["question"].lower()) answers.append(data["answer"].lower()) from sklearn.feature_extraction.text import TfidfVectorizer import numpy as np from sklearn.metrics.pairwise import cosine_similarity vectorizer = TfidfVectorizer(stop_words="english") X_questions = vectorizer.fit_transform(questions) def conversation(user_input): global vectorizer, answers, X_questions X_user_input = vectorizer.transform(user_input) similarity_matrix = cosine_similarity(X_user_input, X_questions) max_similarity = np.amax(similarity_matrix) angle = np.rad2deg(np.arccos(max_similarity)) if angle > 60: return "sorry, I did not quite understand that" else: index_max_similarity = np.argmax(similarity_matrix) return answers[index_max_similarity] def main(): usr = input("Please enter your username: ") print("Q&A support: Hi, welcome to Q&A support. How can I help you?") while True: user_input = input("{}: ".format(usr)) if user_input.lower() == "bye": print("Q&A support: bye!") break else: print("Q&A support: " + conversation([user_input])) main()
Please enter your username: pepe Q&A support: Hi, welcome to Q&A support. How can I help you? pepe: I want to buy an iPhone Q&A support: i am sure amazon has all types.
Apache-2.0
3.Statistical_NLP/2_chatbot.ipynb
bonigarcia/nlp-examples
Lgbm and Optuna* changed to use cross-validation
import pandas as pd import numpy as np # the GBM used import xgboost as xgb import catboost as cat import lightgbm as lgb from sklearn.model_selection import cross_validate from sklearn.metrics import make_scorer # to encode categoricals from sklearn.preprocessing import LabelEncoder # see utils.py from utils import add_features, rmsle, train_encoders, apply_encoders import warnings warnings.filterwarnings('ignore') import optuna # globals and load train dataset FILE_TRAIN = "train.csv" # load train dataset data_orig = pd.read_csv(FILE_TRAIN) # # Data preparation, feature engineering # # add features (hour, year) extracted from timestamp data_extended = add_features(data_orig) # ok, we will treat as categorical: holiday, hour, season, weather, workingday, year all_columns = data_extended.columns # cols to be ignored # atemp and temp are strongly correlated (0.98) we're taking only one del_columns = ['datetime', 'casual', 'registered', 'temp'] TARGET = "count" cat_cols = ['season', 'holiday','workingday', 'weather', 'hour', 'year'] num_cols = list(set(all_columns) - set([TARGET]) - set(del_columns) - set(cat_cols)) features = sorted(cat_cols + num_cols) # drop ignored columns data_used = data_extended.drop(del_columns, axis=1) # Code categorical columns (only season, weather, year) le_list = train_encoders(data_used) # coding data_used = apply_encoders(data_used, le_list) # define indexes for cat_cols # catboost wants indexes cat_columns_idxs = [i for i, col in enumerate(features) if col in cat_cols] # finally we have the train dataset X = data_used[features].values y = data_used[TARGET].values # general FOLDS = 5 SEED = 4321 N_TRIALS = 5 STUDY_NAME = "gbm3" # # Here we define what we do using Optuna # def objective(trial): # tuning on max_depth, n_estimators for the example dict_params = { "num_iterations": trial.suggest_categorical("num_iterations", [3000, 4000, 5000]), "learning_rate": trial.suggest_loguniform("learning_rate", low=1e-4, high=1e-2), "metrics" : ["rmse"], "verbose" : -1, } max_depth = trial.suggest_int("max_depth", 4, 10) num_leaves = trial.suggest_int("num_leaves", 2**(max_depth), 2**(max_depth)) dict_params['max_depth'] = max_depth dict_params['num_leaves'] = num_leaves regr = lgb.LGBMRegressor(**dict_params) # using rmsle for scoring scorer = make_scorer(rmsle, greater_is_better=False) scores = cross_validate(regr, X, y, cv=FOLDS, scoring=scorer) avg_test_score = round(np.mean(scores['test_score']), 4) return avg_test_score # launch Optuna Study study = optuna.create_study(study_name=STUDY_NAME, direction="maximize") study.optimize(objective, n_trials=N_TRIALS) study.best_params # visualize trials as an ordered Pandas df df = study.trials_dataframe() result_df = df[df['state'] == 'COMPLETE'].sort_values(by=['value'], ascending=False) # best on top result_df.head()
_____no_output_____
MIT
.ipynb_checkpoints/lgbm-optuna-cross-validate-checkpoint.ipynb
luigisaetta/bike-sharing-forecast
Train the model on the entire train set and save it
%%time # maybe I should add saving the best model (see num_iteration in cell below) model = lgb.LGBMRegressor(**study.best_params) model.fit(X, y) model_file = "lgboost.txt" model.booster_.save_model(model_file, num_iteration=study.best_params['num_iterations'])
_____no_output_____
MIT
.ipynb_checkpoints/lgbm-optuna-cross-validate-checkpoint.ipynb
luigisaetta/bike-sharing-forecast
TensorFlow Tutorial 01 Simple Linear Modelby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) IntroductionThis tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification. Imports
%matplotlib inline import matplotlib.pyplot as plt from matplotlib.colors import LogNorm import tensorflow as tf import numpy as np from sklearn.metrics import confusion_matrix
/home/magnus/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
tf.__version__
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from mnist import MNIST data = MNIST(data_dir="data/MNIST/")
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
The MNIST data-set has now been loaded and consists of 70,000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
print("Size of:") print("- Training-set:\t\t{}".format(data.num_train)) print("- Validation-set:\t{}".format(data.num_val)) print("- Test-set:\t\t{}".format(data.num_test))
Size of: - Training-set: 55000 - Validation-set: 5000 - Test-set: 10000
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
Copy some of the data-dimensions for convenience.
# The images are stored in one-dimensional arrays of this length. img_size_flat = data.img_size_flat # Tuple with height and width of images used to reshape arrays. img_shape = data.img_shape # Number of classes, one class for each of 10 digits. num_classes = data.num_classes
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
One-Hot Encoding The output-data is loaded as both integer class-numbers and so-called One-Hot encoded arrays. This means the class-numbers have been converted from a single integer to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is 1 and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are:
data.y_test[0:5, :]
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
We also need the classes as integers for various comparisons and performance measures. These can be found from the One-Hot encoded arrays by taking the index of the highest element using the `np.argmax()` function. But this has already been done for us when the data-set was loaded, so we can see the class-number for the first five images in the test-set. Compare these to the One-Hot encoded arrays above.
data.y_test_cls[0:5]
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
Asciotti/TensorFlow-Tutorials
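As an aside, a minimal NumPy sketch of my own (not from the tutorial) shows how One-Hot encoding and np.argmax() are inverses of each other; the class-numbers used here are purely illustrative.

import numpy as np
cls = np.array([7, 2, 1, 0, 4])            # illustrative class-numbers
num_classes = 10
one_hot = np.eye(num_classes)[cls]         # row i has a single 1 in column cls[i]
print(one_hot)
recovered = np.argmax(one_hot, axis=1)     # back to integer class-numbers
print(recovered)                           # [7 2 1 0 4]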