hexsha: 7b4673d53ea9b763e74276f8685ed5a123f53a4f
ext: py
lang: python
max_stars_repo_path: notebooks_ru/ch-demos/chsh.ipynb
max_stars_repo_name: gitlocalize/platypus
max_stars_repo_licenses: ['Apache-2.0']
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] tags=["remove_cell"]
# # Local Reality and the CHSH Inequality
# -

# We have seen in a previous module how quantum entanglement leads to strong correlations in a multipartite system. In fact, these correlations seem to be stronger than anything that could be explained by classical physics.
#
# The historical development of quantum mechanics is filled with vigorous debates about the true nature of reality and the extent to which quantum mechanics can explain it. Given the impressive empirical success of quantum mechanics, it should have been clear that people would not abandon it simply because some of its aspects are hard to reconcile with intuition.
#
# At the root of these different points of view was the question of the nature of measurement. We know there is an element of randomness in quantum measurements, but is that really the case? Is there some clever way in which the Universe has already decided in advance which value a given measurement will yield in the future? This hypothesis was the basis of different *hidden-variable* theories. But these theories did not only need to explain randomness at the level of single particles. They also needed to explain what happens when different observers measure different parts of a multipartite entangled system! This went beyond what a hidden-variable theory alone could do. Now a local hidden-variable theory was needed in order to reconcile the observations of quantum mechanics with a Universe in which local reality holds.
#
# What is local reality? In a Universe where locality holds, it should be possible to separate two systems so far apart in space that they cannot interact with each other. The notion of reality is related to whether a measurable quantity holds a definite value *in the absence of any future measurement*.
#
# In 1963, John Stewart Bell published what could be argued to be one of the most profound discoveries in the history of science. Bell stated that any theory invoking local hidden variables could be experimentally ruled out. In this section we will see how, and we will run a real experiment that demonstrates it! (with some remaining loopholes to close...)

# ### The CHSH Inequality
#
# Imagine Alice and Bob are each given one part of a bipartite entangled system. Each of them then performs two measurements on their part in two different bases. Let's call Alice's bases *A* and *a* and Bob's bases *B* and *b*. What is the expectation value of the quantity $\langle CHSH \rangle = \langle AB \rangle - \langle Ab \rangle + \langle aB \rangle + \langle ab \rangle$?
#
# Now, Alice and Bob have one qubit each, so any measurement they perform on their system (qubit) can only yield one of two possible outcomes: +1 or -1. Note that whereas we typically refer to the two qubit states as $|0\rangle$ and $|1\rangle$, these are *eigenstates*, and a projective measurement will yield their *eigenvalues*, +1 and -1, respectively.
#
# Consequently, if any measurement of *A*, *a*, *B* and *b* can only yield $\pm 1$, the quantities $(B-b)$ and $(B+b)$ can only take the values $0$ or $\pm 2$. Thus, the quantity $A(B-b) + a(B+b)$ can only be either +2 or -2, which means that there must be a bound for the expectation value of the quantity we have called $|\langle CHSH \rangle| = |\langle AB \rangle - \langle Ab \rangle + \langle aB \rangle + \langle ab \rangle| \leq 2$.
#
# The above discussion is oversimplified, because we could consider that the outcome of any set of measurements by Alice and Bob could depend on a set of local hidden variables, but it can be shown with some math that, even in this case, the expectation value of the quantity $CHSH$ must be bounded by 2 if local realism holds.
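# As a quick sanity check of this classical bound (a small addition, not part of the original text): we can enumerate every deterministic assignment of $\pm 1$ outcomes to $A$, $a$, $B$ and $b$ and confirm that $A(B-b) + a(B+b)$ only ever takes the values $\pm 2$, so no probabilistic mixture of such assignments (i.e. no local hidden-variable strategy) can make $|\langle CHSH \rangle|$ exceed 2.

# +
from itertools import product

# Every deterministic local strategy assigns +1 or -1 to each of A, a, B, b
lhv_values = {A*(B - b) + a*(B + b) for A, a, B, b in product([+1, -1], repeat=4)}
print(lhv_values)  # {2, -2}: averaging over any distribution of these values stays within [-2, 2]
# -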
# But what happens when we perform these experiments with an entangled system? Let's try it!

# +
#import qiskit tools
import qiskit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, transpile, Aer, IBMQ
from qiskit.tools.visualization import circuit_drawer
from qiskit.tools.monitor import job_monitor, backend_monitor, backend_overview
from qiskit.providers.aer import noise

#import python stuff
import matplotlib.pyplot as plt
import numpy as np
import time

# + tags=["uses-hardware"]
# Set devices, if using a real device
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
lima = provider.get_backend('ibmq_lima')
# -

sv_sim = Aer.get_backend('statevector_simulator')
qasm_sim = Aer.get_backend('qasm_simulator')

# First we are going to define a function to create our CHSH circuits. We choose, without loss of generality, that Bob always uses the computational ($Z$) basis and the $X$ basis for his $B$ and $b$ measurements, respectively, whereas Alice also chooses orthogonal bases, but whose angle we will vary between $0$ and $2\pi$ with respect to Bob's bases. This set of angles will be the input argument to our $CHSH$ circuit-building function.

def make_chsh_circuit(theta_vec):
    """Return a list of QuantumCircuits for use in a CHSH experiment
    (one for each value of theta in theta_vec)

    Args:
        theta_vec (list): list of values of angles between the bases of Alice and Bob

    Returns:
        List[QuantumCircuit]: CHSH QuantumCircuits for each value of theta
    """
    chsh_circuits = []

    for theta in theta_vec:
        obs_vec = ['00', '01', '10', '11']
        for el in obs_vec:
            qc = QuantumCircuit(2,2)
            qc.h(0)
            qc.cx(0, 1)
            qc.ry(theta, 0)
            for a in range(2):
                if el[a] == '1':
                    qc.h(a)
            qc.measure(range(2),range(2))
            chsh_circuits.append(qc)

    return chsh_circuits

# Next, we are going to define a function to estimate the quantity $\langle CHSH \rangle$. One can actually define two such quantities: $\langle CHSH1 \rangle = \langle AB \rangle - \langle Ab \rangle + \langle aB \rangle + \langle ab \rangle$ and $\langle CHSH2 \rangle = \langle AB \rangle + \langle Ab \rangle - \langle aB \rangle + \langle ab \rangle$. Once the corresponding measurement axes for both parties are chosen, each expectation value can be simply estimated by adding the counts from the output bit strings with the appropriate sign (plus for the even terms $00$ and $11$ and minus for the odd terms $01$ and $10$).
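# As a concrete, purely illustrative example of this sign rule (the counts below are made up, not taken from any run): for a single circuit measured over 1000 shots, the expectation value is the parity-weighted sum of the counts divided by the total number of shots.

# +
example_counts = {'00': 450, '11': 430, '01': 60, '10': 60}  # hypothetical counts, for illustration only
example_shots = sum(example_counts.values())
example_expectation = sum((-1)**(int(b[0]) + int(b[1])) * n for b, n in example_counts.items()) / example_shots
print(example_expectation)  # (450 + 430 - 60 - 60) / 1000 = 0.76
# -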
def compute_chsh_witness(counts):
    """Computes expectation values for the CHSH inequality, for each
    angle (theta) between measurement axis.

    Args:
        counts (list[dict]): dict of counts for each experiment
            (4 per value of theta)

    Returns:
        Tuple(List, List): Tuple of lists with the two CHSH witnesses
    """
    # Order is ZZ,ZX,XZ,XX
    CHSH1 = []
    CHSH2 = []
    # Divide the list of dictionaries in sets of 4
    for i in range(0, len(counts), 4):
        theta_dict = counts[i:i + 4]
        zz = theta_dict[0]
        zx = theta_dict[1]
        xz = theta_dict[2]
        xx = theta_dict[3]

        no_shots = sum(xx[y] for y in xx)

        chsh1 = 0
        chsh2 = 0

        for element in zz:
            parity = (-1)**(int(element[0])+int(element[1]))
            chsh1+= parity*zz[element]
            chsh2+= parity*zz[element]

        for element in zx:
            parity = (-1)**(int(element[0])+int(element[1]))
            chsh1+= parity*zx[element]
            chsh2-= parity*zx[element]

        for element in xz:
            parity = (-1)**(int(element[0])+int(element[1]))
            chsh1-= parity*xz[element]
            chsh2+= parity*xz[element]

        for element in xx:
            parity = (-1)**(int(element[0])+int(element[1]))
            chsh1+= parity*xx[element]
            chsh2+= parity*xx[element]

        CHSH1.append(chsh1/no_shots)
        CHSH2.append(chsh2/no_shots)

    return CHSH1, CHSH2

# Finally, we split the interval $[0, 2\pi)$ into 15 angles and build the corresponding set of $CHSH$ circuits.

number_of_thetas = 15
theta_vec = np.linspace(0,2*np.pi,number_of_thetas)
my_chsh_circuits = make_chsh_circuit(theta_vec)

# Now, let's take a quick look at how four of these circuits appear for a given $\theta$.

my_chsh_circuits[4].draw('mpl')

my_chsh_circuits[5].draw('mpl')

my_chsh_circuits[6].draw('mpl')

my_chsh_circuits[7].draw('mpl')

# These circuits simply create a Bell pair and then measure each party in a different basis. While Bob ($q_1$) always measures either in the computational basis or in the $X$ basis, Alice's measurement basis is rotated by an angle $\theta$ with respect to Bob's bases.

# + tags=["uses-hardware"]
# Execute and get counts
result_ideal = execute(my_chsh_circuits, qasm_sim).result()

tic = time.time()
job_real = execute(my_chsh_circuits, backend=lima, shots=8192)
job_monitor(job_real)
result_real = job_real.result()
toc = time.time()

print(toc-tic)

# + tags=["uses-hardware"]
CHSH1_ideal, CHSH2_ideal = compute_chsh_witness(result_ideal.get_counts())
CHSH1_real, CHSH2_real = compute_chsh_witness(result_real.get_counts())
# -

# Now we plot the results.

# + tags=["uses-hardware"]
plt.figure(figsize=(12,8))
plt.rcParams.update({'font.size': 22})
plt.plot(theta_vec,CHSH1_ideal,'o-',label = 'CHSH1 Noiseless')
plt.plot(theta_vec,CHSH2_ideal,'o-',label = 'CHSH2 Noiseless')
plt.plot(theta_vec,CHSH1_real,'x-',label = 'CHSH1 Lima')
plt.plot(theta_vec,CHSH2_real,'x-',label = 'CHSH2 Lima')

plt.grid(which='major',axis='both')
plt.rcParams.update({'font.size': 16})
plt.legend()
plt.axhline(y=2, color='r', linestyle='-')
plt.axhline(y=-2, color='r', linestyle='-')
plt.axhline(y=np.sqrt(2)*2, color='k', linestyle='-.')
plt.axhline(y=-np.sqrt(2)*2, color='k', linestyle='-.')
plt.xlabel('Theta')
plt.ylabel('CHSH witness')
# -

# Note what happened! There are certain combinations of measurement bases for which $|CHSH| \geq 2$. How is this possible? Let's look at our entangled bipartite system. It is easy to show that if $|\psi \rangle = 1/\sqrt{2} (|00\rangle + |11\rangle)$, then the expectation value $\langle AB \rangle = \langle \psi | A \otimes B | \psi \rangle = -\cos \theta_{AB}$, where $\theta_{AB}$ is the angle between the measurement bases $A$ and $B$. Therefore, for the particular choice of bases $A = 1/\sqrt{2}(\sigma_z - \sigma_x)$ and $a = 1/\sqrt{2}(\sigma_z + \sigma_x)$, letting Bob measure with $B=\sigma_z$ and $b=\sigma_x$, we see that $|\langle CHSH1 \rangle| = 2\sqrt{2} > 2$.
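# For reference, the noiseless curves above can also be written in closed form. The short sketch below is an addition (derived by hand for this particular circuit family, so treat the exact sign conventions as approximate): it plots $2(\cos\theta - \sin\theta)$ and $2(\cos\theta + \sin\theta)$, whose extrema are $\pm 2\sqrt{2}$.

# +
theta_fine = np.linspace(0, 2*np.pi, 200)
chsh1_theory = 2*(np.cos(theta_fine) - np.sin(theta_fine))  # = 2*sqrt(2)*cos(theta + pi/4)
chsh2_theory = 2*(np.cos(theta_fine) + np.sin(theta_fine))  # = 2*sqrt(2)*cos(theta - pi/4)

plt.plot(theta_fine, chsh1_theory, label='CHSH1 theory')
plt.plot(theta_fine, chsh2_theory, label='CHSH2 theory')
plt.axhline(y=2, color='r', linestyle='-')
plt.axhline(y=-2, color='r', linestyle='-')
plt.axhline(y=np.sqrt(2)*2, color='k', linestyle='-.')
plt.axhline(y=-np.sqrt(2)*2, color='k', linestyle='-.')
plt.legend()
plt.xlabel('Theta')
plt.ylabel('CHSH witness')
# -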
# It can also be shown that $2\sqrt{2}$ is the maximum possible value attainable, even in the quantum case (the dash-dotted line in the plot above).
#
# The inequality above is named CHSH after Clauser, Horne, Shimony and Holt, and it is the most popular way of presenting the original inequality derived by Bell.
#
# The fact that we violated the CHSH inequality on our real device is significant. Just a decade ago such an experiment would have been of major importance. Nowadays quantum devices have become considerably better, and these results can be easily replicated on state-of-the-art hardware. However, there are a number of loopholes that have to be closed when violating the inequality in order to claim that either locality or realism has been disproven. These are the detection loophole (where our detector is faulty and fails to provide meaningful statistics) and the locality/causality loophole (where the two parts of the entangled system are separated by a distance smaller than the distance covered by light in the time it takes to perform a measurement). Given that we can generate entangled pairs with high fidelity and that every measurement yields a result (that is, no measured particle is "lost"), we have closed the detection loophole in the experiments above. However, given the distance between our qubits (a few millimeters) and the time it takes to perform a measurement (on the order of $\mu$s), we cannot claim that we closed the causality loophole.

# ### Exercise
#
# Consider a game in which Alice and Bob are placed in separate rooms and each is given a bit, $x$ and $y$ respectively. These bits are chosen at random and independently of each other. On receiving their bit, each of them replies with a bit of their own, $a$ and $b$. Now, Alice and Bob win the game if $a$ and $b$ are different whenever $x=y=1$, and equal otherwise. It is easy to see that the best possible classical strategy for Alice and Bob is to always output $a=b=0$ (or $1$). With this strategy, Alice and Bob can win the game at most 75% of the time.
#
# Imagine Alice and Bob are allowed to share an entangled two-qubit state. Is there a strategy they can use that gives them a better chance of winning than 75%? (Remember that they can agree on any strategy beforehand, but once they are given their random bits they can no longer communicate. Naturally, they can each take their respective part of the entangled pair with them at all times.)
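# One way to explore this exercise numerically is sketched below. This is an addition, not part of the original notebook, and the measurement angles are the usual textbook choice for the CHSH game rather than something specified here: Alice measures in the $Z$ or $X$ basis depending on her bit, Bob measures in bases rotated by $\pm\pi/4$, and the shared Bell pair lets them win with probability $\cos^2(\pi/8) \approx 0.85$.

# +
rng = np.random.default_rng(42)
n_rounds = 200
wins = 0

for _ in range(n_rounds):
    x, y = int(rng.integers(2)), int(rng.integers(2))
    game_qc = QuantumCircuit(2, 2)
    game_qc.h(0)
    game_qc.cx(0, 1)                                    # shared Bell pair
    game_qc.ry(-(0 if x == 0 else np.pi/2), 0)          # Alice: Z basis (x=0) or X basis (x=1)
    game_qc.ry(-(np.pi/4 if y == 0 else -np.pi/4), 1)   # Bob: bases rotated by +/- pi/4
    game_qc.measure(range(2), range(2))
    bitstring = list(execute(game_qc, qasm_sim, shots=1).result().get_counts())[0]
    a, b = int(bitstring[-1]), int(bitstring[0])        # Qiskit ordering: qubit 0 is the rightmost bit
    if (a ^ b) == (x & y):
        wins += 1

print(wins / n_rounds)  # should come out close to cos(pi/8)**2 ~ 0.85
# -

# Classically the best they can do is 0.75, so a winning rate consistently above that already requires the entangled pair.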
avg_line_length: 60.451613
max_line_length: 1174
hexsha: c795501e5f7b7dbf439f5c6386f1bf7f55105b7f
ext: py
lang: python
max_stars_repo_path: FinalProject_Taxis.ipynb
max_stars_repo_name: Snehlata25/DataMiningFinalProject
max_stars_repo_licenses: ['MIT']
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="8XHle-lLB3cO" # ## Check missing data # + [markdown] id="COobrKASudvs" colab_type="text" # # Import Data and APIs # + [markdown] id="59MnlDV1xmn5" colab_type="text" # ## Download Data from Kaggle API # + id="KbV90ksBqx0E" colab_type="code" outputId="45cf3918-bb27-47dd-ed10-eab816c8df84" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7Ci8vIE1heCBhbW91bnQgb2YgdGltZSB0byBibG9jayB3YWl0aW5nIGZvciB0aGUgdXNlci4KY29uc3QgRklMRV9DSEFOR0VfVElNRU9VVF9NUyA9IDMwICogMTAwMDsKCmZ1bmN0aW9uIF91cGxvYWRGaWxlcyhpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IHN0ZXBzID0gdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKTsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIC8vIENhY2hlIHN0ZXBzIG9uIHRoZSBvdXRwdXRFbGVtZW50IHRvIG1ha2UgaXQgYXZhaWxhYmxlIGZvciB0aGUgbmV4dCBjYWxsCiAgLy8gdG8gdXBsb2FkRmlsZXNDb250aW51ZSBmcm9tIFB5dGhvbi4KICBvdXRwdXRFbGVtZW50LnN0ZXBzID0gc3RlcHM7CgogIHJldHVybiBfdXBsb2FkRmlsZXNDb250aW51ZShvdXRwdXRJZCk7Cn0KCi8vIFRoaXMgaXMgcm91Z2hseSBhbiBhc3luYyBnZW5lcmF0b3IgKG5vdCBzdXBwb3J0ZWQgaW4gdGhlIGJyb3dzZXIgeWV0KSwKLy8gd2hlcmUgdGhlcmUgYXJlIG11bHRpcGxlIGFzeW5jaHJvbm91cyBzdGVwcyBhbmQgdGhlIFB5dGhvbiBzaWRlIGlzIGdvaW5nCi8vIHRvIHBvbGwgZm9yIGNvbXBsZXRpb24gb2YgZWFjaCBzdGVwLgovLyBUaGlzIHVzZXMgYSBQcm9taXNlIHRvIGJsb2NrIHRoZSBweXRob24gc2lkZSBvbiBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcCwKLy8gdGhlbiBwYXNzZXMgdGhlIHJlc3VsdCBvZiB0aGUgcHJldmlvdXMgc3RlcCBhcyB0aGUgaW5wdXQgdG8gdGhlIG5leHQgc3RlcC4KZnVuY3Rpb24gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpIHsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIGNvbnN0IHN0ZXBzID0gb3V0cHV0RWxlbWVudC5zdGVwczsKCiAgY29uc3QgbmV4dCA9IHN0ZXBzLm5leHQob3V0cHV0RWxlbWVudC5sYXN0UHJvbWlzZVZhbHVlKTsKICByZXR1cm4gUHJvbWlzZS5yZXNvbHZlKG5leHQudmFsdWUucHJvbWlzZSkudGhlbigodmFsdWUpID0+IHsKICAgIC8vIENhY2hlIHRoZSBsYXN0IHByb21pc2UgdmFsdWUgdG8gbWFrZSBpdCBhdmFpbGFibGUgdG8gdGhlIG5leHQKICAgIC8vIHN0ZXAgb2YgdGhlIGdlbmVyYXRvci4KICAgIG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSA9IHZhb
HVlOwogICAgcmV0dXJuIG5leHQudmFsdWUucmVzcG9uc2U7CiAgfSk7Cn0KCi8qKgogKiBHZW5lcmF0b3IgZnVuY3Rpb24gd2hpY2ggaXMgY2FsbGVkIGJldHdlZW4gZWFjaCBhc3luYyBzdGVwIG9mIHRoZSB1cGxvYWQKICogcHJvY2Vzcy4KICogQHBhcmFtIHtzdHJpbmd9IGlucHV0SWQgRWxlbWVudCBJRCBvZiB0aGUgaW5wdXQgZmlsZSBwaWNrZXIgZWxlbWVudC4KICogQHBhcmFtIHtzdHJpbmd9IG91dHB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIG91dHB1dCBkaXNwbGF5LgogKiBAcmV0dXJuIHshSXRlcmFibGU8IU9iamVjdD59IEl0ZXJhYmxlIG9mIG5leHQgc3RlcHMuCiAqLwpmdW5jdGlvbiogdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKSB7CiAgY29uc3QgaW5wdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQoaW5wdXRJZCk7CiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gZmFsc2U7CgogIGNvbnN0IG91dHB1dEVsZW1lbnQgPSBkb2N1bWVudC5nZXRFbGVtZW50QnlJZChvdXRwdXRJZCk7CiAgb3V0cHV0RWxlbWVudC5pbm5lckhUTUwgPSAnJzsKCiAgY29uc3QgcGlja2VkUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBpbnB1dEVsZW1lbnQuYWRkRXZlbnRMaXN0ZW5lcignY2hhbmdlJywgKGUpID0+IHsKICAgICAgcmVzb2x2ZShlLnRhcmdldC5maWxlcyk7CiAgICB9KTsKICB9KTsKCiAgY29uc3QgY2FuY2VsID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnYnV0dG9uJyk7CiAgaW5wdXRFbGVtZW50LnBhcmVudEVsZW1lbnQuYXBwZW5kQ2hpbGQoY2FuY2VsKTsKICBjYW5jZWwudGV4dENvbnRlbnQgPSAnQ2FuY2VsIHVwbG9hZCc7CiAgY29uc3QgY2FuY2VsUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBjYW5jZWwub25jbGljayA9ICgpID0+IHsKICAgICAgcmVzb2x2ZShudWxsKTsKICAgIH07CiAgfSk7CgogIC8vIENhbmNlbCB1cGxvYWQgaWYgdXNlciBoYXNuJ3QgcGlja2VkIGFueXRoaW5nIGluIHRpbWVvdXQuCiAgY29uc3QgdGltZW91dFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgc2V0VGltZW91dCgoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9LCBGSUxFX0NIQU5HRV9USU1FT1VUX01TKTsKICB9KTsKCiAgLy8gV2FpdCBmb3IgdGhlIHVzZXIgdG8gcGljayB0aGUgZmlsZXMuCiAgY29uc3QgZmlsZXMgPSB5aWVsZCB7CiAgICBwcm9taXNlOiBQcm9taXNlLnJhY2UoW3BpY2tlZFByb21pc2UsIHRpbWVvdXRQcm9taXNlLCBjYW5jZWxQcm9taXNlXSksCiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdzdGFydGluZycsCiAgICB9CiAgfTsKCiAgaWYgKCFmaWxlcykgewogICAgcmV0dXJuIHsKICAgICAgcmVzcG9uc2U6IHsKICAgICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICAgIH0KICAgIH07CiAgfQoKICBjYW5jZWwucmVtb3ZlKCk7CgogIC8vIERpc2FibGUgdGhlIGlucHV0IGVsZW1lbnQgc2luY2UgZnVydGhlciBwaWNrcyBhcmUgbm90IGFsbG93ZWQuCiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gdHJ1ZTsKCiAgZm9yIChjb25zdCBmaWxlIG9mIGZpbGVzKSB7CiAgICBjb25zdCBsaSA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2xpJyk7CiAgICBsaS5hcHBlbmQoc3BhbihmaWxlLm5hbWUsIHtmb250V2VpZ2h0OiAnYm9sZCd9KSk7CiAgICBsaS5hcHBlbmQoc3BhbigKICAgICAgICBgKCR7ZmlsZS50eXBlIHx8ICduL2EnfSkgLSAke2ZpbGUuc2l6ZX0gYnl0ZXMsIGAgKwogICAgICAgIGBsYXN0IG1vZGlmaWVkOiAkewogICAgICAgICAgICBmaWxlLmxhc3RNb2RpZmllZERhdGUgPyBmaWxlLmxhc3RNb2RpZmllZERhdGUudG9Mb2NhbGVEYXRlU3RyaW5nKCkgOgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAnbi9hJ30gLSBgKSk7CiAgICBjb25zdCBwZXJjZW50ID0gc3BhbignMCUgZG9uZScpOwogICAgbGkuYXBwZW5kQ2hpbGQocGVyY2VudCk7CgogICAgb3V0cHV0RWxlbWVudC5hcHBlbmRDaGlsZChsaSk7CgogICAgY29uc3QgZmlsZURhdGFQcm9taXNlID0gbmV3IFByb21pc2UoKHJlc29sdmUpID0+IHsKICAgICAgY29uc3QgcmVhZGVyID0gbmV3IEZpbGVSZWFkZXIoKTsKICAgICAgcmVhZGVyLm9ubG9hZCA9IChlKSA9PiB7CiAgICAgICAgcmVzb2x2ZShlLnRhcmdldC5yZXN1bHQpOwogICAgICB9OwogICAgICByZWFkZXIucmVhZEFzQXJyYXlCdWZmZXIoZmlsZSk7CiAgICB9KTsKICAgIC8vIFdhaXQgZm9yIHRoZSBkYXRhIHRvIGJlIHJlYWR5LgogICAgbGV0IGZpbGVEYXRhID0geWllbGQgewogICAgICBwcm9taXNlOiBmaWxlRGF0YVByb21pc2UsCiAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgYWN0aW9uOiAnY29udGludWUnLAogICAgICB9CiAgICB9OwoKICAgIC8vIFVzZSBhIGNodW5rZWQgc2VuZGluZyB0byBhdm9pZCBtZXNzYWdlIHNpemUgbGltaXRzLiBTZWUgYi82MjExNTY2MC4KICAgIGxldCBwb3NpdGlvbiA9IDA7CiAgICB3aGlsZSAocG9zaXRpb24gPCBmaWxlRGF0YS5ieXRlTGVuZ3RoKSB7CiAgICAgIGNvbnN0IGxlbmd0aCA9IE1hdGgubWluKGZpbGVEYXRhLmJ5dGVMZW5ndGggLSBwb3NpdGlvbiwgTUFYX1BBWUxPQURfU0laRSk7CiAgICAgIGNvbnN0IGNodW5rID0gbmV3IFVp
bnQ4QXJyYXkoZmlsZURhdGEsIHBvc2l0aW9uLCBsZW5ndGgpOwogICAgICBwb3NpdGlvbiArPSBsZW5ndGg7CgogICAgICBjb25zdCBiYXNlNjQgPSBidG9hKFN0cmluZy5mcm9tQ2hhckNvZGUuYXBwbHkobnVsbCwgY2h1bmspKTsKICAgICAgeWllbGQgewogICAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgICBhY3Rpb246ICdhcHBlbmQnLAogICAgICAgICAgZmlsZTogZmlsZS5uYW1lLAogICAgICAgICAgZGF0YTogYmFzZTY0LAogICAgICAgIH0sCiAgICAgIH07CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPQogICAgICAgICAgYCR7TWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCl9JSBkb25lYDsKICAgIH0KICB9CgogIC8vIEFsbCBkb25lLgogIHlpZWxkIHsKICAgIHJlc3BvbnNlOiB7CiAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgIH0KICB9Owp9CgpzY29wZS5nb29nbGUgPSBzY29wZS5nb29nbGUgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYiA9IHNjb3BlLmdvb2dsZS5jb2xhYiB8fCB7fTsKc2NvcGUuZ29vZ2xlLmNvbGFiLl9maWxlcyA9IHsKICBfdXBsb2FkRmlsZXMsCiAgX3VwbG9hZEZpbGVzQ29udGludWUsCn07Cn0pKHNlbGYpOwo=", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 41} # Import tools to access Kaggle identity keys. from google.colab import drive from google.colab import files uploaded = files.upload() # + id="h2kxUQzFrMUO" colab_type="code" outputId="fe4b0a36-95d6-4b66-bc7b-a4ff1f79aa3f" colab={"base_uri": "https://localhost:8080/", "height": 148} # !mkdir -p ~/.kaggle # Makes a directory in home folder named kaggle # !cp kaggle.json ~/.kaggle/ # Copies the contents of kaggle.jason into the home/kaggle folder # !apt-get install p7zip-full # Installs p7zip-full tool # + id="FkGG_fKrsCst" colab_type="code" colab={} # Use the Python pip install command to install the Kaggle library # !pip install -q kaggle # + id="fnEz8IVSr5H9" colab_type="code" outputId="3c799dbf-72f7-4157-e7e8-8341a4ea20b5" colab={"base_uri": "https://localhost:8080/", "height": 217} # This code downloads the dataset from kaggle # !kaggle competitions download -c nyc-taxi-trip-duration # + id="g8shQPkltkyM" colab_type="code" outputId="2a04d341-6f03-49d9-ef6a-a9f5676d348e" colab={"base_uri": "https://localhost:8080/", "height": 316} # This extracts the test data from kaggle download !7za e test.zip # + id="iJpp7SSVuAh-" colab_type="code" outputId="9f9c92b7-a900-4d35-a37a-70a56d3aa177" colab={"base_uri": "https://localhost:8080/", "height": 316} # This extracts the train data from kaggle download !7za e train.zip # + id="K7EFCs7euVUV" colab_type="code" outputId="c8b040eb-6757-4322-e9a1-7260b8becbe6" colab={"base_uri": "https://localhost:8080/", "height": 316} # This extracts the sample_submission from kaggle download !7za e sample_submission.zip # + [markdown] id="y_FZ1S7Yujbu" colab_type="text" # ## Import APIs # + id="j6m-sxTxuku9" colab_type="code" colab={} # import commands brings in the necessary libraries from 'library' # that are required to run this notebook into the notebook environment import os import time import json import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.ticker import StrMethodFormatter from sklearn import linear_model from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn import preprocessing from sklearn import metrics import datetime import seaborn as sns # Seaborn library for plotting # Some statistics tools from sklearn import datasets, linear_model from sklearn.linear_model import LinearRegression import statsmodels.api as sm from scipy import stats from xgboost import XGBRegressor # %matplotlib 
inline # Necessary in JupyterNotebooks # + [markdown] id="E6OHZt3F15To" colab_type="text" # ## Load trainning and testing data # + id="YPOeJ_85v6yt" colab_type="code" colab={} # Load train data train_df = pd.read_csv('./train.csv') # Load test data test_df = pd.read_csv('./test.csv') # + [markdown] id="EGmitIUUzN7z" colab_type="text" # # Problem Definition # # We are provided a data set with geographical, time, and passanger count data along with other features from a set of taxi rides in New York City. We are asked to predict the total trip duration based on the provided data. In other words, we are asked to predict a number from a set of labeled input feature values, this is a classic supervised learning problem, specifically a regression problem. # # ## Feature Details # # id - a unique identifier for each trip # vendor_id - a code indicating the provider associated with the trip record # pickup_datetime - date and time when the meter was engaged # dropoff_datetime - date and time when the meter was disengaged # passenger_count - the number of passengers in the vehicle (driver entered value) # pickup_longitude - the longitude where the meter was engaged # pickup_latitude - the latitude where the meter was engaged # dropoff_longitude - the longitude where the meter was disengaged # dropoff_latitude - the latitude where the meter was disengaged # store_and_fwd_flag - This flag indicates whether the trip record was held in vehicle memory before sending to the vendor because the vehicle did not have a connection to the server - Y=store and forward; N=not a store and forward trip. # # # ## Label Details # # trip_duration - duration of the trip in seconds # + [markdown] id="yiZJP1rTzVjn" colab_type="text" # # Data Cleaning # # In this section, we will run several, and similar data cleaning and data engineering procedures, we will look for Nan data points, outliers, legally unacceptable points, and ensure data is formatted as necessary. # + [markdown] id="fW2RN2SY11As" colab_type="text" # ## Quick Look at the datasets # + id="uBufTqWzwUDe" colab_type="code" outputId="027a0d42-a88a-4088-e731-76082376b911" colab={"base_uri": "https://localhost:8080/", "height": 32} train_df.shape # + id="12715ViTwZ_c" colab_type="code" outputId="cc0afc94-d50b-42cd-a5b9-a2faa36bfbd8" colab={"base_uri": "https://localhost:8080/", "height": 32} test_df.shape # + id="MVoE9pTS0_Uq" colab_type="code" outputId="3266b2c6-af81-4d19-f93b-9ffe22159763" colab={"base_uri": "https://localhost:8080/", "height": 303} train_df.head() # + id="59MFux4e1lR9" colab_type="code" outputId="37f33130-c115-4aff-dd64-3582739cd5f0" colab={"base_uri": "https://localhost:8080/", "height": 303} test_df.head() # + id="F4tcVRqUzled" colab_type="code" outputId="c7d37207-2bfa-40b7-d77e-300bb5d9cae8" colab={"base_uri": "https://localhost:8080/", "height": 214} # Check for Missing Data in training dataset using df.isna() command # This command iterates over the columns of a dataframe checking wether an entry # is Nan and counts the number of those such entries. 
train_df.isna().sum(axis=0) # + [markdown] id="d1UW3g8B2tnK" colab_type="text" # There is no missing data in training and testing dataset # + [markdown] id="ZSGfC-7GqJQa" colab_type="text" # ## Remove Outliers # + id="VRvHVwFlPoJo" colab_type="code" outputId="3be8905c-50c9-456a-adb1-46f88b6e7b7a" colab={"base_uri": "https://localhost:8080/", "height": 291} # Change the formatting of the numbers in order to help visualization pd.set_option('display.float_format',lambda x : '%.2f'% x) train_df.describe() # + [markdown] id="Hv83afP5Qg-5" colab_type="text" # The maximum trip duration is ~41 days which doesn't make sense. Also maximum number of passengers is 9, which is also strange. We may need to remove some outliers # + [markdown] id="UpcdTIX5PRCY" colab_type="text" # ### Duration # + id="JoAD_tqmV8RR" colab_type="code" outputId="34c7f470-db7b-4c8b-c374-1b257b3e5284" colab={"base_uri": "https://localhost:8080/", "height": 164} train_df.trip_duration.describe() # Provides simple statistic summary of the # Columns in the DataFrame. # + id="UmlT066z3hfj" colab_type="code" outputId="9293afd3-12b5-478d-d47c-a4eae4390a55" colab={"base_uri": "https://localhost:8080/", "height": 278} sns.boxplot(train_df.trip_duration) # Creates a boxplot of trip duration using # Seaborn library. plt.show() # + id="gG_FJvCs-YNo" colab_type="code" outputId="b057b8e8-bbfb-46f3-f550-b97e271708a0" colab={"base_uri": "https://localhost:8080/", "height": 32} print('there are', train_df[(train_df.trip_duration < 5)].trip_duration.count(), 'trips took less than 5 seconds, and', train_df[(train_df.trip_duration > 86400)].trip_duration.count(), 'trips took more than one day') # + id="rmDvEnNZEa2d" colab_type="code" colab={} # remove instances based on Duration in the testing set # remove these 849 train_df = train_df[train_df.trip_duration >= 5] train_df = train_df[train_df.trip_duration < 1000000] # + id="VNmAnV9hERzp" colab_type="code" outputId="e32eeee9-a9b7-42f9-a907-61459951c31c" colab={"base_uri": "https://localhost:8080/", "height": 32} train_df.shape # + id="ux6IGJ3P8O8o" colab_type="code" outputId="84e77249-5356-4fcf-90e9-39c21804e531" colab={"base_uri": "https://localhost:8080/", "height": 278} sns.boxplot(train_df.trip_duration) plt.show() # + id="zHhUJ8uVz4V4" colab_type="code" outputId="273bf75c-e658-4342-b70e-0a58bf8abd41" colab={"base_uri": "https://localhost:8080/", "height": 508} # %matplotlib inline # For visualization purposes, we will use the Seaborn Library sns.set(style="white", palette="muted", color_codes=True) f, axes = plt.subplots(1, 1, figsize=(11, 7), sharex=True) sns.despine(left=True) sns.distplot(np.log(train_df['trip_duration'].values+1), axlabel = 'Log(trip_duration)', label = 'log(trip_duration)', bins = 50, color="r") plt.setp(axes, yticks=[]) plt.tight_layout() plt.show() # + [markdown] id="wbYY2dC2Prvu" colab_type="text" # **Passenger Count** # + id="S-53odII5Inc" colab_type="code" outputId="45e31418-020a-4a5e-d762-3c8cce4f999e" colab={"base_uri": "https://localhost:8080/", "height": 197} # remove instances based on Number of Passengers in the testing set train_df.passenger_count.value_counts() # + [markdown] id="CWbkYlxz6-V8" colab_type="text" # By New York legislation, rides with more than 6 passengers are ilegal, therefore, we will remove all those datapoints in addition to those rides # with less than 1 passanger. 
# + id="HVqV2UTp6tdA" colab_type="code" colab={} # remove these 53 trips train_df = train_df[train_df.passenger_count <= 6] train_df = train_df[train_df.passenger_count > 0] # + id="L-Fyj7z-ORaD" colab_type="code" outputId="71c9a97f-5db2-42d2-d482-023f550b211f" colab={"base_uri": "https://localhost:8080/", "height": 32} train_df.shape # Shape of the DataFrame matrix. # + id="pRVtT8PasrHe" colab_type="code" outputId="b5bb1a4d-b536-4be4-afa9-89d3262395ca" colab={"base_uri": "https://localhost:8080/", "height": 283} # Passanger count histogram. sns.countplot(train_df.passenger_count) plt.show() # + [markdown] id="0X-qHSumRAia" colab_type="text" # ### Distance # + id="QwD5hmG4qN92" colab_type="code" outputId="786c87f2-41ab-4c16-9812-1501baa2545e" colab={"base_uri": "https://localhost:8080/", "height": 102} # Some useful libraries # !pip install haversine from haversine import haversine, Unit # + id="JEZ3gjy1TJac" colab_type="code" colab={} def calc_distance(df): pickup = (df['pickup_latitude'], df['pickup_longitude']) drop = (df['dropoff_latitude'], df['dropoff_longitude']) return haversine(pickup, drop) # + id="-taaVJAvTwp1" colab_type="code" colab={} train_df['distance'] = train_df.apply(lambda x: calc_distance(x), axis = 1) # + id="I1_9Gj7KVWK9" colab_type="code" outputId="e2d973de-18c3-487a-d697-fa13e282a38b" colab={"base_uri": "https://localhost:8080/", "height": 164} train_df.distance.describe() # + id="OfJFy4viVqs0" colab_type="code" outputId="4afd832c-c703-4fdf-ed98-71b8155b00c3" colab={"base_uri": "https://localhost:8080/", "height": 283} sns.boxplot(train_df.distance) plt.show() # + id="6VbDVVr6WLmS" colab_type="code" outputId="41bfdbbd-22ca-44ff-9cb0-1c0bc0255bbf" colab={"base_uri": "https://localhost:8080/", "height": 32} # remove instances based on Duration in the testing set train_df[(train_df.distance == 0)].distance.count() # + id="fXN6qKLFIMAR" colab_type="code" outputId="24021764-0e06-4830-82f1-a6cc02fc1f12" colab={"base_uri": "https://localhost:8080/", "height": 164} train_df.distance.describe() # + id="TGvOYwtrXNFh" colab_type="code" outputId="3a2d145c-179c-4cd0-bdad-7ba9a0acbe58" colab={"base_uri": "https://localhost:8080/", "height": 303} train_df.nlargest(5,['distance']) # + [markdown] id="NY4QrV3jYunv" colab_type="text" # There are trips with 0 distance, and as shown in the chart above, there are some points look like outliers # + id="IV31jk87GKeE" colab_type="code" colab={} # Remove instance with distance = 0 train_df = train_df[train_df.distance != 0] # + id="YgK8tuoCIqFZ" colab_type="code" outputId="970cd259-ae3e-4eb8-cdfe-4a6a04e3acbb" colab={"base_uri": "https://localhost:8080/", "height": 267} train_df.distance.groupby(pd.cut(train_df.distance, np.arange(0,100,10))).count().plot(kind='barh') plt.show() # + [markdown] id="DOvraW1eI_9-" colab_type="text" # As shown above, most of the rides are completed between 1-10 kms with some of the rides with distances between 10-30 kms # + [markdown] id="WNGIYA6S3n0W" colab_type="text" # ### Speed # + id="N3YmYm3s3qyM" colab_type="code" colab={} train_df['speed'] = (train_df.distance/(train_df.trip_duration/3600)) # + id="62SpVP_6C3N6" colab_type="code" outputId="709c2dcb-c9c6-4632-f484-da819cbd6b4d" colab={"base_uri": "https://localhost:8080/", "height": 164} train_df.speed.describe() # + [markdown] id="x8cqPcOBDN65" colab_type="text" # Some trips have speed more than 2,000 meter/hour, which is unrealistic. We will need to remove these instances. 
# + id="xmfC9JcbFIwR" colab_type="code" colab={} train_df = train_df[train_df.speed <= 110] # + id="H2wAz1UHEaJh" colab_type="code" outputId="8539dafa-38b9-4dda-a0fb-8d3607432d31" colab={"base_uri": "https://localhost:8080/", "height": 357} plt.figure(figsize = (20,5)) sns.boxplot(train_df.speed) plt.show() # + [markdown] id="H-yoCWVMKKwt" colab_type="text" # # Feature Engineering # + [markdown] id="5yqW8ExAKSA9" colab_type="text" # ## Time and Date # + id="gKpfP7XpvPop" colab_type="code" colab={} #Calculate and assign new columns to the dataframe such as weekday, #month and pickup_hour which will help us to gain more insights from the data. def convert_datetime(df): df['pickup_datetime'] = pd.to_datetime(df['pickup_datetime']) df['weekday'] = df.pickup_datetime.dt.weekday_name df['month'] = df.pickup_datetime.dt.month df['weekday_number'] = df.pickup_datetime.dt.weekday df['pickup_hour'] = df.pickup_datetime.dt.hour # + id="RDNgzAw11D5F" colab_type="code" outputId="47a06989-ff68-4e68-e298-360e51a1c89a" colab={"base_uri": "https://localhost:8080/", "height": 303} convert_datetime(train_df) train_df.head() # + [markdown] id="7uqa8kMKeXHn" colab_type="text" # ## Creating Dummy Variables # + [markdown] id="w6eI4MPqeqM2" colab_type="text" # We can start training our model at this point. However, to add the model accuracy, we can convert our categorical data into dummy variables. We will use the function in Pandas library to make the change. # # Alternatively, we could have converted the categorical data into numerical data manually or by using some Scikit Learn tools such as # + id="qTkJ8_wT1WSO" colab_type="code" colab={} def create_dummy(df): dummy = pd.get_dummies(df.store_and_fwd_flag, prefix='flag') df = pd.concat([df,dummy],axis=1) dummy = pd.get_dummies(df.vendor_id, prefix='vendor_id') df = pd.concat([df,dummy],axis=1) dummy = pd.get_dummies(df.passenger_count, prefix='passenger_count') df = pd.concat([df,dummy],axis=1) dummy = pd.get_dummies(df.month, prefix='month') df = pd.concat([df,dummy],axis=1) dummy = pd.get_dummies(df.weekday_number, prefix='weekday_number') df = pd.concat([df,dummy],axis=1) dummy = pd.get_dummies(df.pickup_hour, prefix='pickup_hour') df = pd.concat([df,dummy],axis=1) return df # + id="PZGo4Bc4u1na" colab_type="code" colab={} train_df = create_dummy(train_df) # + id="CpnCuKs01hRQ" colab_type="code" outputId="608317cd-be01-4042-c27c-6c4873153494" colab={"base_uri": "https://localhost:8080/", "height": 32} train_df.shape # + id="lyTTza6IZ8CJ" colab_type="code" outputId="055987c2-7dc2-4f32-b2b6-1470dba63c8c" colab={"base_uri": "https://localhost:8080/", "height": 1000} # get the index of the features and label list(zip(range(0,len(train_df.columns)),train_df.columns)) # + id="bDwPW9fcZg1E" colab_type="code" outputId="44dd474b-b204-4617-cb44-a43584598cdc" colab={"base_uri": "https://localhost:8080/", "height": 85} # drop all the redundant columns such as pickup_datetime, weekday, month etc. 
# and drop unneeded features such as id and speed (which is derived from duration) # also separate the features from the labels X_train_set = train_df.iloc[:,np.r_[11,17:64]] y_train_set = train_df["trip_duration"].copy() # The general equation for multiple linear regression usually includes a constant term, # so we will add "1" to each instance first X_train_set = sm.add_constant(X_train_set) print(X_train_set.shape) # + [markdown] id="KTAiD_xqIo5x" colab_type="text" # ## Backward Feature Selection # + [markdown] id="fuS1xFgZLRCh" colab_type="text" # # We will run the linear regression multiple times with different combinations of features and check the p-value of each feature in every iteration until all remaining p-values are below the 5% level. If a feature's p-value is greater than 5%, we reject that feature from the feature array and continue with the next iteration until we reach the optimal combination of features. # + id="l0Bx7kZxQPUw" colab_type="code" colab={} X_train_opt = X_train_set est = sm.OLS(y_train_set, X_train_opt) est2 = est.fit() # + id="SpILBV169n03" colab_type="code" outputId="b7064638-09f2-4184-cb3b-0725a0366a22" colab={"base_uri": "https://localhost:8080/", "height": 32} X_train_opt.shape # + id="Bqke6O2qSgWh" colab_type="code" colab={} # fetch the p-values p_Vals = est2.pvalues print(p_Vals) # + id="1orhsLDzOKo_" colab_type="code" colab={} # Define the significance level for accepting a feature. sig_Level = 0.05 # Loop over the features, removing the feature with the largest p-value as long as that p-value exceeds the 5% level while max(p_Vals) > sig_Level: X_train_opt = X_train_opt.drop(X_train_opt.columns[np.argmax(np.array(p_Vals))],axis=1) print("\n") print("Feature at index {} is removed \n".format(str(np.argmax(np.array(p_Vals))))) print(str(X_train_opt.shape[1]-1) + " dimensions remaining now... 
\n") est = sm.OLS(y_train_set, X_train_opt) est2 = est.fit() p_Vals = est2.pvalues print("=================================================================\n") # + id="4Jv-4wFf-C4B" colab_type="code" colab={} #Print final summary print("Final stat summary with optimal {} features".format(str(X_train_opt.shape[1]-1))) print(est2.pvalues) # + [markdown] id="j-Y9LSxEmCDW" colab_type="text" # # Modelling # + [markdown] id="FFMZCBXSARZ2" colab_type="text" # ## Linear Regression # + [markdown] id="YI6uIrjCAZ8U" colab_type="text" # ### Using all features # + id="KP4gGNVkqhbC" colab_type="code" colab={} # Split data from the all features X_train_all, X_test_all, y_train_all, y_test_all = train_test_split(X_train_set,y_train_set, random_state=4, test_size=0.2) # + id="7eAUenr0WdvD" colab_type="code" outputId="169c66a4-e3a1-4c14-f24b-7a2bd5ae6d99" colab={"base_uri": "https://localhost:8080/", "height": 32} # Linear regressor for all features regressor0 = LinearRegression() regressor0.fit(X_train_all,y_train_all) # + id="NX9cDLCzXBZR" colab_type="code" colab={} # Predict from the test features of Feature Selection group y_pred_all = regressor0.predict(X_test_all) # + [markdown] id="-moWpbosAewC" colab_type="text" # ### Using the selected features # + id="QxDuRHD7_ByT" colab_type="code" colab={} # Split data from the feature selection group X_train_fs, X_test_fs, y_train_fs, y_test_fs = train_test_split(X_train_opt,y_train_set, random_state=4, test_size=0.2) # + id="89S9nMgR_Vb4" colab_type="code" outputId="3dc5f691-c11a-4f6a-974b-4ff174fded19" colab={"base_uri": "https://localhost:8080/", "height": 32} # Linear regressor for the Feature selection group regressor1 = LinearRegression() regressor1.fit(X_train_fs,y_train_fs) # + id="A_wictRz_eVL" colab_type="code" colab={} # Predict from the test features of Feature Selection group y_pred_fs = regressor1.predict(X_test_fs) # + id="wsKguu29_j2C" colab_type="code" outputId="8084d57c-2d19-472c-bcf4-72302cbfed9d" colab={"base_uri": "https://localhost:8080/", "height": 148} # Evaluate the models print('RMSE score for the Multiple LR using all features is : {}'.format(np.sqrt(metrics.mean_squared_error(y_test_all,y_pred_all)))) print('Variance score for the Multiple LR is : %.2f' % regressor0.score(X_test_all, y_test_all)) print("\n") print('RMSE score for the Multiple LR FS is : {}'.format(np.sqrt(metrics.mean_squared_error(y_test_fs,y_pred_fs)))) print('Variance score for the Multiple LR FS is : %.2f' % regressor1.score(X_test_fs, y_test_fs)) print("\n") # + id="inbZaIlG4tNZ" colab_type="code" colab={} corr_matrix = train_df.corr() corr_matrix["trip_duration"].sort_values(ascending=False) # + [markdown] id="eQWvg093bINM" colab_type="text" # ## Random Forest Regression # + id="SkA-UV2hUQ4c" colab_type="code" outputId="9dcc6cf3-2048-4993-be3d-85d590c5db40" colab={"base_uri": "https://localhost:8080/", "height": 217} # Tnstantiate() the object for the Random Forest Regressor with default params from raw data regressor_rf_full = RandomForestRegressor(n_jobs=-1) # Instantiate() the object for the Random Forest Regressor with default params for Feature Selection Group regressor_rf_fs = RandomForestRegressor(n_jobs=-1) # Train the object with default params for raw data regressor_rf_full.fit(X_train_all,y_train_all) # Train the object with default params for Feature Selection Group regressor_rf_fs.fit(X_train_fs,y_train_fs) # + id="GLlcGgrdlY8y" colab_type="code" colab={} #Predict the output with object of default params for Feature Selection Group y_pred_rf_full = 
regressor_rf_full.predict(X_test_all) #Predict the output with object of default params for Feature Selection Group y_pred_rf_fs = regressor_rf_fs.predict(X_test_fs) # + id="pKcfco1Nrqxo" colab_type="code" outputId="e8fdf5d3-f3fa-45a1-e19d-513b1102fb3f" colab={"base_uri": "https://localhost:8080/", "height": 32} type(regressor_rf_fs) # + id="BO75Nlrglk6m" colab_type="code" outputId="865a5072-6067-47ec-9f47-675eedea9ff1" colab={"base_uri": "https://localhost:8080/", "height": 49} print(np.sqrt(metrics.mean_squared_error(y_test_all,y_pred_rf_full))) print(np.sqrt(metrics.mean_squared_error(y_test_fs,y_pred_rf_fs)))
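# A natural follow-up (an added sketch, not part of the original notebook) is to look at which features the random forest actually relied on. scikit-learn exposes this through the fitted regressor's `feature_importances_` attribute; the cell below assumes `regressor_rf_full` and `X_train_all` from the cells above are still in memory.

# +
importances = pd.Series(regressor_rf_full.feature_importances_, index=X_train_all.columns)
importances.sort_values(ascending=False).head(15).plot(kind='barh', figsize=(10, 6))
plt.xlabel('Feature importance')
plt.show()
# -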
avg_line_length: 57.942801
max_line_length: 7663
hexsha: 00d41f63eafd11724836cb5e4cbb90f1be8e02ab
ext: py
lang: python
max_stars_repo_path: chapter_8/8_5_NMT/8_5_NMT_scheduled_sampling.ipynb
max_stars_repo_name: tedpark/nlp-with-pytorch
max_stars_repo_licenses: ['Apache-2.0']
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="nazLNtCMK8bf" # *아래 링크를 통해 이 노트북을 주피터 노트북 뷰어(nbviewer.jupyter.org)로 보거나 구글 코랩(colab.research.google.com)에서 실행할 수 있습니다.* # # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/nlp-with-pytorch/blob/master/chapter_8/8_5_NMT/8_5_NMT_scheduled_sampling.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />주피터 노트북 뷰어로 보기</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/rickiepark/nlp-with-pytorch/blob/master/chapter_8/8_5_NMT/8_5_NMT_scheduled_sampling.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a> # </td> # </table> # + id="7exiWxE3K8bj" import os from argparse import Namespace from collections import Counter import json import re import string import numpy as np import pandas as pd import torch import torch.nn as nn from torch.nn import functional as F from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence import torch.optim as optim from torch.utils.data import Dataset, DataLoader import tqdm # + [markdown] id="LxglotKoK8bk" # ### Vocabulary # + id="XsGcntBwK8bk" class Vocabulary(object): """매핑을 위해 텍스트를 처리하고 어휘 사전을 만드는 클래스 """ def __init__(self, token_to_idx=None): """ 매개변수: token_to_idx (dict): 기존 토큰-인덱스 매핑 딕셔너리 """ if token_to_idx is None: token_to_idx = {} self._token_to_idx = token_to_idx self._idx_to_token = {idx: token for token, idx in self._token_to_idx.items()} def to_serializable(self): """ 직렬화할 수 있는 딕셔너리를 반환합니다 """ return {'token_to_idx': self._token_to_idx} @classmethod def from_serializable(cls, contents): """ 직렬화된 딕셔너리에서 Vocabulary 객체를 만듭니다 """ return cls(**contents) def add_token(self, token): """ 토큰을 기반으로 매핑 딕셔너리를 업데이트합니다 매개변수: token (str): Vocabulary에 추가할 토큰 반환값: index (int): 토큰에 상응하는 정수 """ if token in self._token_to_idx: index = self._token_to_idx[token] else: index = len(self._token_to_idx) self._token_to_idx[token] = index self._idx_to_token[index] = token return index def add_many(self, tokens): """토큰 리스트를 Vocabulary에 추가합니다. 매개변수: tokens (list): 문자열 토큰 리스트 반환값: indices (list): 토큰 리스트에 상응되는 인덱스 리스트 """ return [self.add_token(token) for token in tokens] def lookup_token(self, token): """토큰에 대응하는 인덱스를 추출합니다. 매개변수: token (str): 찾을 토큰 반환값: index (int): 토큰에 해당하는 인덱스 """ return self._token_to_idx[token] def lookup_index(self, index): """ 인덱스에 해당하는 토큰을 반환합니다. 매개변수: index (int): 찾을 인덱스 반환값: token (str): 인텍스에 해당하는 토큰 에러: KeyError: 인덱스가 Vocabulary에 없을 때 발생합니다. 
""" if index not in self._idx_to_token: raise KeyError("the index (%d) is not in the Vocabulary" % index) return self._idx_to_token[index] def __str__(self): return "<Vocabulary(size=%d)>" % len(self) def __len__(self): return len(self._token_to_idx) # + id="C00MoCZiK8bk" class SequenceVocabulary(Vocabulary): def __init__(self, token_to_idx=None, unk_token="<UNK>", mask_token="<MASK>", begin_seq_token="<BEGIN>", end_seq_token="<END>"): super(SequenceVocabulary, self).__init__(token_to_idx) self._mask_token = mask_token self._unk_token = unk_token self._begin_seq_token = begin_seq_token self._end_seq_token = end_seq_token self.mask_index = self.add_token(self._mask_token) self.unk_index = self.add_token(self._unk_token) self.begin_seq_index = self.add_token(self._begin_seq_token) self.end_seq_index = self.add_token(self._end_seq_token) def to_serializable(self): contents = super(SequenceVocabulary, self).to_serializable() contents.update({'unk_token': self._unk_token, 'mask_token': self._mask_token, 'begin_seq_token': self._begin_seq_token, 'end_seq_token': self._end_seq_token}) return contents def lookup_token(self, token): """ 토큰에 대응하는 인덱스를 추출합니다. 토큰이 없으면 UNK 인덱스를 반환합니다. 매개변수: token (str): 찾을 토큰 반환값: index (int): 토큰에 해당하는 인덱스 노트: UNK 토큰을 사용하려면 (Vocabulary에 추가하기 위해) `unk_index`가 0보다 커야 합니다. """ if self.unk_index >= 0: return self._token_to_idx.get(token, self.unk_index) else: return self._token_to_idx[token] # + [markdown] id="Shr3OnTZK8bl" # ### Vectorizer # + id="GCEpJxkeK8bl" class NMTVectorizer(object): """ 어휘 사전을 생성하고 관리합니다 """ def __init__(self, source_vocab, target_vocab, max_source_length, max_target_length): """ 매개변수: source_vocab (SequenceVocabulary): 소스 단어를 정수에 매핑합니다 target_vocab (SequenceVocabulary): 타깃 단어를 정수에 매핑합니다 max_source_length (int): 소스 데이터셋에서 가장 긴 시퀀스 길이 max_target_length (int): 타깃 데이터셋에서 가장 긴 시퀀스 길이 """ self.source_vocab = source_vocab self.target_vocab = target_vocab self.max_source_length = max_source_length self.max_target_length = max_target_length def _vectorize(self, indices, vector_length=-1, mask_index=0): """인덱스를 벡터로 변환합니다 매개변수: indices (list): 시퀀스를 나타내는 정수 리스트 vector_length (int): 인덱스 벡터의 길이 mask_index (int): 사용할 마스크 인덱스; 거의 항상 0 """ if vector_length < 0: vector_length = len(indices) vector = np.zeros(vector_length, dtype=np.int64) vector[:len(indices)] = indices vector[len(indices):] = mask_index return vector def _get_source_indices(self, text): """ 벡터로 변환된 소스 텍스트를 반환합니다 매개변수: text (str): 소스 텍스트; 토큰은 공백으로 구분되어야 합니다 반환값: indices (list): 텍스트를 표현하는 정수 리스트 """ indices = [self.source_vocab.begin_seq_index] indices.extend(self.source_vocab.lookup_token(token) for token in text.split(" ")) indices.append(self.source_vocab.end_seq_index) return indices def _get_target_indices(self, text): """ 벡터로 변환된 타깃 텍스트를 반환합니다 매개변수: text (str): 타깃 텍스트; 토큰은 공백으로 구분되어야 합니다 반환값: 튜플: (x_indices, y_indices) x_indices (list): 디코더에서 샘플을 나타내는 정수 리스트 y_indices (list): 디코더에서 예측을 나타내는 정수 리스트 """ indices = [self.target_vocab.lookup_token(token) for token in text.split(" ")] x_indices = [self.target_vocab.begin_seq_index] + indices y_indices = indices + [self.target_vocab.end_seq_index] return x_indices, y_indices def vectorize(self, source_text, target_text, use_dataset_max_lengths=True): """ 벡터화된 소스 텍스트와 타깃 텍스트를 반환합니다 벡터화된 소스 텍슽트는 하나의 벡터입니다. 벡터화된 타깃 텍스트는 7장의 성씨 모델링과 비슷한 스타일로 두 개의 벡터로 나뉩니다. 각 타임 스텝에서 첫 번째 벡터가 샘플이고 두 번째 벡터가 타깃이 됩니다. 
매개변수: source_text (str): 소스 언어의 텍스트 target_text (str): 타깃 언어의 텍스트 use_dataset_max_lengths (bool): 최대 벡터 길이를 사용할지 여부 반환값: 다음과 같은 키에 벡터화된 데이터를 담은 딕셔너리: source_vector, target_x_vector, target_y_vector, source_length """ source_vector_length = -1 target_vector_length = -1 if use_dataset_max_lengths: source_vector_length = self.max_source_length + 2 target_vector_length = self.max_target_length + 1 source_indices = self._get_source_indices(source_text) source_vector = self._vectorize(source_indices, vector_length=source_vector_length, mask_index=self.source_vocab.mask_index) target_x_indices, target_y_indices = self._get_target_indices(target_text) target_x_vector = self._vectorize(target_x_indices, vector_length=target_vector_length, mask_index=self.target_vocab.mask_index) target_y_vector = self._vectorize(target_y_indices, vector_length=target_vector_length, mask_index=self.target_vocab.mask_index) return {"source_vector": source_vector, "target_x_vector": target_x_vector, "target_y_vector": target_y_vector, "source_length": len(source_indices)} @classmethod def from_dataframe(cls, bitext_df): """ 데이터셋 데이터프레임으로 NMTVectorizer를 초기화합니다 매개변수: bitext_df (pandas.DataFrame): 텍스트 데이터셋 반환값 : NMTVectorizer 객체 """ source_vocab = SequenceVocabulary() target_vocab = SequenceVocabulary() max_source_length = 0 max_target_length = 0 for _, row in bitext_df.iterrows(): source_tokens = row["source_language"].split(" ") if len(source_tokens) > max_source_length: max_source_length = len(source_tokens) for token in source_tokens: source_vocab.add_token(token) target_tokens = row["target_language"].split(" ") if len(target_tokens) > max_target_length: max_target_length = len(target_tokens) for token in target_tokens: target_vocab.add_token(token) return cls(source_vocab, target_vocab, max_source_length, max_target_length) @classmethod def from_serializable(cls, contents): source_vocab = SequenceVocabulary.from_serializable(contents["source_vocab"]) target_vocab = SequenceVocabulary.from_serializable(contents["target_vocab"]) return cls(source_vocab=source_vocab, target_vocab=target_vocab, max_source_length=contents["max_source_length"], max_target_length=contents["max_target_length"]) def to_serializable(self): return {"source_vocab": self.source_vocab.to_serializable(), "target_vocab": self.target_vocab.to_serializable(), "max_source_length": self.max_source_length, "max_target_length": self.max_target_length} # + [markdown] id="UAzWNsUSK8bn" # ### Dataset # + id="paSQ8rP4K8bp" class NMTDataset(Dataset): def __init__(self, text_df, vectorizer): """ 매개변수: text_df (pandas.DataFrame): 데이터셋 vectorizer (SurnameVectorizer): 데이터셋에서 만든 Vectorizer 객체 """ self.text_df = text_df self._vectorizer = vectorizer self.train_df = self.text_df[self.text_df.split=='train'] self.train_size = len(self.train_df) self.val_df = self.text_df[self.text_df.split=='val'] self.validation_size = len(self.val_df) self.test_df = self.text_df[self.text_df.split=='test'] self.test_size = len(self.test_df) self._lookup_dict = {'train': (self.train_df, self.train_size), 'val': (self.val_df, self.validation_size), 'test': (self.test_df, self.test_size)} self.set_split('train') @classmethod def load_dataset_and_make_vectorizer(cls, dataset_csv): """데이터셋을 로드하고 새로운 Vectorizer를 만듭니다 매개변수: dataset_csv (str): 데이터셋의 위치 반환값: NMTDataset의 객체 """ text_df = pd.read_csv(dataset_csv) train_subset = text_df[text_df.split=='train'] return cls(text_df, NMTVectorizer.from_dataframe(train_subset)) @classmethod def load_dataset_and_load_vectorizer(cls, dataset_csv, 
vectorizer_filepath): """데이터셋과 새로운 Vectorizer 객체를 로드합니다. 캐싱된 Vectorizer 객체를 재사용할 때 사용합니다. 매개변수: dataset_csv (str): 데이터셋의 위치 vectorizer_filepath (str): Vectorizer 객체의 저장 위치 반환값: NMTDataset의 객체 """ text_df = pd.read_csv(dataset_csv) vectorizer = cls.load_vectorizer_only(vectorizer_filepath) return cls(text_df, vectorizer) @staticmethod def load_vectorizer_only(vectorizer_filepath): """파일에서 Vectorizer 객체를 로드하는 정적 메서드 매개변수: vectorizer_filepath (str): 직렬화된 Vectorizer 객체의 위치 반환값: NMTVectorizer의 인스턴스 """ with open(vectorizer_filepath) as fp: return NMTVectorizer.from_serializable(json.load(fp)) def save_vectorizer(self, vectorizer_filepath): """Vectorizer 객체를 json 형태로 디스크에 저장합니다 매개변수: vectorizer_filepath (str): Vectorizer 객체의 저장 위치 """ with open(vectorizer_filepath, "w") as fp: json.dump(self._vectorizer.to_serializable(), fp) def get_vectorizer(self): """ 벡터 변환 객체를 반환합니다 """ return self._vectorizer def set_split(self, split="train"): self._target_split = split self._target_df, self._target_size = self._lookup_dict[split] def __len__(self): return self._target_size def __getitem__(self, index): """파이토치 데이터셋의 주요 진입 메서드 매개변수: index (int): 데이터 포인트에 대한 인덱스 반환값: 데이터 포인트(x_source, x_target, y_target, x_source_length)를 담고 있는 딕셔너리 """ row = self._target_df.iloc[index] vector_dict = self._vectorizer.vectorize(row.source_language, row.target_language) return {"x_source": vector_dict["source_vector"], "x_target": vector_dict["target_x_vector"], "y_target": vector_dict["target_y_vector"], "x_source_length": vector_dict["source_length"]} def get_num_batches(self, batch_size): """배치 크기가 주어지면 데이터셋으로 만들 수 있는 배치 개수를 반환합니다 매개변수: batch_size (int) 반환값: 배치 개수 """ return len(self) // batch_size # + id="zUXyb3O7K8bq" def generate_nmt_batches(dataset, batch_size, shuffle=True, drop_last=True, device="cpu"): """ 파이토치 DataLoader를 감싸고 있는 제너레이터 함수; NMT 버전 """ dataloader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last) for data_dict in dataloader: lengths = data_dict['x_source_length'].numpy() sorted_length_indices = lengths.argsort()[::-1].tolist() out_data_dict = {} for name, tensor in data_dict.items(): out_data_dict[name] = data_dict[name][sorted_length_indices].to(device) yield out_data_dict # + [markdown] id="FUU7soeFK8bq" # ## 신경망 기계 번역 모델 # # 구성 요소: # # 1. NMTEncoder # - 소스 시퀀스를 입력으로 받아 임베딩하여 양방향 GRU에 주입합니다. # 2. NMTDecoder # - 인코더 상태와 어텐션을 사용해 디코더가 새로운 시퀀스를 생성합니다. # - 타임 스텝마다 정답 타깃 시퀀스를 입력으로 사용합니다. # - 또는 디코더가 선택한 시퀀스를 입력으로 사용할 수도 있습니다. # - 이를 커리큘럼 학습(curriculum learning), 탐색 학습(learning to search)이라 부릅니다. # 3. NMTModel # - 인코더와 디코더를 하나의 클래스로 구성합니다. # + id="MPE37HiTK8bq" class NMTEncoder(nn.Module): def __init__(self, num_embeddings, embedding_size, rnn_hidden_size): """ 매개변수: num_embeddings (int): 임베딩 개수는 소스 어휘 사전의 크기입니다 embedding_size (int): 임베딩 벡터의 크기 rnn_hidden_size (int): RNN 은닉 상태 벡터의 크기 """ super(NMTEncoder, self).__init__() self.source_embedding = nn.Embedding(num_embeddings, embedding_size, padding_idx=0) self.birnn = nn.GRU(embedding_size, rnn_hidden_size, bidirectional=True, batch_first=True) def forward(self, x_source, x_lengths): """ 모델의 정방향 계산 매개변수: x_source (torch.Tensor): 입력 데이터 텐서 x_source.shape는 (batch, seq_size)이다. 
x_lengths (torch.Tensor): 배치에 있는 아이템의 길이 벡터 반환값: 튜플: x_unpacked (torch.Tensor), x_birnn_h (torch.Tensor) x_unpacked.shape = (batch, seq_size, rnn_hidden_size * 2) x_birnn_h.shape = (batch, rnn_hidden_size * 2) """ x_embedded = self.source_embedding(x_source) # PackedSequence 생성; x_packed.data.shape=(number_items, embeddign_size) x_packed = pack_padded_sequence(x_embedded, x_lengths.detach().cpu().numpy(), batch_first=True) # x_birnn_h.shape = (num_rnn, batch_size, feature_size) x_birnn_out, x_birnn_h = self.birnn(x_packed) # (batch_size, num_rnn, feature_size)로 변환 x_birnn_h = x_birnn_h.permute(1, 0, 2) # 특성 펼침; (batch_size, num_rnn * feature_size)로 바꾸기 # (참고: -1은 남은 차원에 해당합니다, # 두 개의 RNN 은닉 벡터를 1로 펼칩니다) x_birnn_h = x_birnn_h.contiguous().view(x_birnn_h.size(0), -1) x_unpacked, _ = pad_packed_sequence(x_birnn_out, batch_first=True) return x_unpacked, x_birnn_h def verbose_attention(encoder_state_vectors, query_vector): """ 원소별 연산을 사용하는 어텐션 메커니즘 버전 매개변수: encoder_state_vectors (torch.Tensor): 인코더의 양방향 GRU에서 출력된 3차원 텐서 query_vector (torch.Tensor): 디코더 GRU의 은닉 상태 """ batch_size, num_vectors, vector_size = encoder_state_vectors.size() vector_scores = torch.sum(encoder_state_vectors * query_vector.view(batch_size, 1, vector_size), dim=2) vector_probabilities = F.softmax(vector_scores, dim=1) weighted_vectors = encoder_state_vectors * vector_probabilities.view(batch_size, num_vectors, 1) context_vectors = torch.sum(weighted_vectors, dim=1) return context_vectors, vector_probabilities, vector_scores def terse_attention(encoder_state_vectors, query_vector): """ 점곱을 사용하는 어텐션 메커니즘 버전 매개변수: encoder_state_vectors (torch.Tensor): 인코더의 양방향 GRU에서 출력된 3차원 텐서 query_vector (torch.Tensor): 디코더 GRU의 은닉 상태 """ vector_scores = torch.matmul(encoder_state_vectors, query_vector.unsqueeze(dim=2)).squeeze() vector_probabilities = F.softmax(vector_scores, dim=-1) context_vectors = torch.matmul(encoder_state_vectors.transpose(-2, -1), vector_probabilities.unsqueeze(dim=2)).squeeze() return context_vectors, vector_probabilities class NMTDecoder(nn.Module): def __init__(self, num_embeddings, embedding_size, rnn_hidden_size, bos_index): """ 매개변수: num_embeddings (int): 임베딩 개수는 타깃 어휘 사전에 있는 고유한 단어의 개수이다 embedding_size (int): 임베딩 벡터 크기 rnn_hidden_size (int): RNN 은닉 상태 크기 bos_index(int): begin-of-sequence 인덱스 """ super(NMTDecoder, self).__init__() self._rnn_hidden_size = rnn_hidden_size self.target_embedding = nn.Embedding(num_embeddings=num_embeddings, embedding_dim=embedding_size, padding_idx=0) self.gru_cell = nn.GRUCell(embedding_size + rnn_hidden_size, rnn_hidden_size) self.hidden_map = nn.Linear(rnn_hidden_size, rnn_hidden_size) self.classifier = nn.Linear(rnn_hidden_size * 2, num_embeddings) self.bos_index = bos_index self._sampling_temperature = 3 def _init_indices(self, batch_size): """ BEGIN-OF-SEQUENCE 인덱스 벡터를 반환합니다 """ return torch.ones(batch_size, dtype=torch.int64) * self.bos_index def _init_context_vectors(self, batch_size): """ 문맥 벡터를 초기화하기 위한 0 벡터를 반환합니다 """ return torch.zeros(batch_size, self._rnn_hidden_size) def forward(self, encoder_state, initial_hidden_state, target_sequence, sample_probability=0.0): """ 모델의 정방향 계산 매개변수: encoder_state (torch.Tensor): NMTEncoder의 출력 initial_hidden_state (torch.Tensor): NMTEncoder의 마지막 은닉 상태 target_sequence (torch.Tensor): 타깃 텍스트 데이터 텐서 sample_probability (float): 스케줄링된 샘플링 파라미터 디코더 타임 스텝마다 모델 예측에 사용할 확률 반환값: output_vectors (torch.Tensor): 각 타임 스텝의 예측 벡터 """ if target_sequence is None: sample_probability = 1.0 else: # 가정: 첫 번째 차원은 배치 차원입니다 # 즉 입력은 (Batch, Seq) # 시퀀스에 대해 
반복해야 하므로 (Seq, Batch)로 차원을 바꿉니다 target_sequence = target_sequence.permute(1, 0) output_sequence_size = target_sequence.size(0) # 주어진 인코더의 은닉 상태를 초기 은닉 상태로 사용합니다 h_t = self.hidden_map(initial_hidden_state) batch_size = encoder_state.size(0) # 문맥 벡터를 0으로 초기화합니다 context_vectors = self._init_context_vectors(batch_size) # 첫 단어 y_t를 BOS로 초기화합니다 y_t_index = self._init_indices(batch_size) h_t = h_t.to(encoder_state.device) y_t_index = y_t_index.to(encoder_state.device) context_vectors = context_vectors.to(encoder_state.device) output_vectors = [] self._cached_p_attn = [] self._cached_ht = [] self._cached_decoder_state = encoder_state.cpu().detach().numpy() for i in range(output_sequence_size): # 스케줄링된 샘플링 사용 여부 use_sample = np.random.random() < sample_probability if not use_sample: y_t_index = target_sequence[i] # 단계 1: 단어를 임베딩하고 이전 문맥과 연결합니다 y_input_vector = self.target_embedding(y_t_index) rnn_input = torch.cat([y_input_vector, context_vectors], dim=1) # 단계 2: GRU를 적용하고 새로운 은닉 벡터를 얻습니다 h_t = self.gru_cell(rnn_input, h_t) self._cached_ht.append(h_t.cpu().detach().numpy()) # 단계 3: 현재 은닉 상태를 사용해 인코더의 상태를 주목합니다 context_vectors, p_attn, _ = verbose_attention(encoder_state_vectors=encoder_state, query_vector=h_t) # 부가 작업: 시각화를 위해 어텐션 확률을 저장합니다 self._cached_p_attn.append(p_attn.cpu().detach().numpy()) # 단게 4: 현재 은닉 상태와 문맥 벡터를 사용해 다음 단어를 예측합니다 prediction_vector = torch.cat((context_vectors, h_t), dim=1) score_for_y_t_index = self.classifier(F.dropout(prediction_vector, 0.3)) if use_sample: p_y_t_index = F.softmax(score_for_y_t_index * self._sampling_temperature, dim=1) # _, y_t_index = torch.max(p_y_t_index, 1) y_t_index = torch.multinomial(p_y_t_index, 1).squeeze() # 부가 작업: 예측 성능 점수를 기록합니다 output_vectors.append(score_for_y_t_index) output_vectors = torch.stack(output_vectors).permute(1, 0, 2) return output_vectors class NMTModel(nn.Module): """ 신경망 기계 번역 모델 """ def __init__(self, source_vocab_size, source_embedding_size, target_vocab_size, target_embedding_size, encoding_size, target_bos_index): """ 매개변수: source_vocab_size (int): 소스 언어에 있는 고유한 단어 개수 source_embedding_size (int): 소스 임베딩 벡터의 크기 target_vocab_size (int): 타깃 언어에 있는 고유한 단어 개수 target_embedding_size (int): 타깃 임베딩 벡터의 크기 encoding_size (int): 인코더 RNN의 크기 target_bos_index (int): BEGIN-OF-SEQUENCE 토큰 인덱스 """ super(NMTModel, self).__init__() self.encoder = NMTEncoder(num_embeddings=source_vocab_size, embedding_size=source_embedding_size, rnn_hidden_size=encoding_size) decoding_size = encoding_size * 2 self.decoder = NMTDecoder(num_embeddings=target_vocab_size, embedding_size=target_embedding_size, rnn_hidden_size=decoding_size, bos_index=target_bos_index) def forward(self, x_source, x_source_lengths, target_sequence, sample_probability=0.0): """ 모델의 정방향 계산 매개변수: x_source (torch.Tensor): 소스 텍스트 데이터 텐서 x_source.shape는 (batch, vectorizer.max_source_length)입니다. 
x_source_lengths torch.Tensor): x_source에 있는 시퀀스 길이 target_sequence (torch.Tensor): 타깃 텍스트 데이터 텐서 sample_probability (float): 스케줄링된 샘플링 파라미터 디코더 타임 스텝마다 모델 예측에 사용할 확률 반환값: decoded_states (torch.Tensor): 각 출력 타임 스텝의 예측 벡터 """ encoder_state, final_hidden_states = self.encoder(x_source, x_source_lengths) decoded_states = self.decoder(encoder_state=encoder_state, initial_hidden_state=final_hidden_states, target_sequence=target_sequence, sample_probability=sample_probability) return decoded_states # + [markdown] id="sMpnJ-RKK8bq" # ## 모델 훈련과 상태 기록 함수 # + id="_NdbRrN8K8bq" def set_seed_everywhere(seed, cuda): np.random.seed(seed) torch.manual_seed(seed) if cuda: torch.cuda.manual_seed_all(seed) def handle_dirs(dirpath): if not os.path.exists(dirpath): os.makedirs(dirpath) def make_train_state(args): return {'stop_early': False, 'early_stopping_step': 0, 'early_stopping_best_val': 1e8, 'learning_rate': args.learning_rate, 'epoch_index': 0, 'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': [], 'test_loss': -1, 'test_acc': -1, 'model_filename': args.model_state_file} def update_train_state(args, model, train_state): """훈련 상태 업데이트합니다. 콤포넌트: - 조기 종료: 과대 적합 방지 - 모델 체크포인트: 더 나은 모델을 저장합니다 :param args: 메인 매개변수 :param model: 훈련할 모델 :param train_state: 훈련 상태를 담은 딕셔너리 :returns: 새로운 훈련 상태 """ # 적어도 한 번 모델을 저장합니다 if train_state['epoch_index'] == 0: torch.save(model.state_dict(), train_state['model_filename']) train_state['stop_early'] = False # 성능이 향상되면 모델을 저장합니다 elif train_state['epoch_index'] >= 1: loss_tm1, loss_t = train_state['val_loss'][-2:] # 손실이 나빠지면 if loss_t >= loss_tm1: # 조기 종료 단계 업데이트 train_state['early_stopping_step'] += 1 # 손실이 감소하면 else: # 최상의 모델 저장 if loss_t < train_state['early_stopping_best_val']: torch.save(model.state_dict(), train_state['model_filename']) train_state['early_stopping_best_val'] = loss_t # 조기 종료 단계 재설정 train_state['early_stopping_step'] = 0 # 조기 종료 여부 확인 train_state['stop_early'] = \ train_state['early_stopping_step'] >= args.early_stopping_criteria return train_state def normalize_sizes(y_pred, y_true): """텐서 크기 정규화 매개변수: y_pred (torch.Tensor): 모델의 출력 3차원 텐서이면 행렬로 변환합니다. y_true (torch.Tensor): 타깃 예측 행렬이면 벡터로 변환합니다. 
""" if len(y_pred.size()) == 3: y_pred = y_pred.contiguous().view(-1, y_pred.size(2)) if len(y_true.size()) == 2: y_true = y_true.contiguous().view(-1) return y_pred, y_true def compute_accuracy(y_pred, y_true, mask_index): y_pred, y_true = normalize_sizes(y_pred, y_true) _, y_pred_indices = y_pred.max(dim=1) correct_indices = torch.eq(y_pred_indices, y_true).float() valid_indices = torch.ne(y_true, mask_index).float() n_correct = (correct_indices * valid_indices).sum().item() n_valid = valid_indices.sum().item() return n_correct / n_valid * 100 def sequence_loss(y_pred, y_true, mask_index): y_pred, y_true = normalize_sizes(y_pred, y_true) return F.cross_entropy(y_pred, y_true, ignore_index=mask_index) # + [markdown] id="9k6Di7pLK8br" # ### 설정 # + id="uwxzx0WuK8br" outputId="01d48fa6-6f61-4a19-f74b-3f33d410a127" colab={"base_uri": "https://localhost:8080/"} args = Namespace(dataset_csv="data/nmt/simplest_eng_fra.csv", vectorizer_file="vectorizer.json", model_state_file="model.pth", save_dir="model_storage/ch8/nmt_luong_sampling", reload_from_files=False, expand_filepaths_to_save_dir=True, cuda=True, seed=1337, learning_rate=5e-4, batch_size=32, num_epochs=100, early_stopping_criteria=5, source_embedding_size=24, target_embedding_size=24, encoding_size=32, catch_keyboard_interrupt=True) if args.expand_filepaths_to_save_dir: args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file) args.model_state_file = os.path.join(args.save_dir, args.model_state_file) print("파일 경로: ") print("\t{}".format(args.vectorizer_file)) print("\t{}".format(args.model_state_file)) # CUDA 체크 if not torch.cuda.is_available(): args.cuda = False args.device = torch.device("cuda" if args.cuda else "cpu") print("CUDA 사용 여부: {}".format(args.cuda)) # 재현성을 위해 시드 설정 set_seed_everywhere(args.seed, args.cuda) # 디렉토리 처리 handle_dirs(args.save_dir) # + id="P549WmvQK8br" outputId="967b1a3a-468a-4e96-f09b-5147347ea593" colab={"base_uri": "https://localhost:8080/"} # 만약 코랩에서 실행하는 경우 아래 코드를 실행하여 전처리된 데이터를 다운로드하세요. # !mkdir data # !wget https://git.io/JqQBE -O data/download.py # !wget https://git.io/JqQB7 -O data/get-all-data.sh # !chmod 755 data/get-all-data.sh # %cd data # !./get-all-data.sh # %cd .. # + id="4CR02VV8K8bs" if args.reload_from_files and os.path.exists(args.vectorizer_file): # 체크포인트를 로드합니다. dataset = NMTDataset.load_dataset_and_load_vectorizer(args.dataset_csv, args.vectorizer_file) else: # 데이터셋과 Vectorizer를 만듭니다. 
dataset = NMTDataset.load_dataset_and_make_vectorizer(args.dataset_csv) dataset.save_vectorizer(args.vectorizer_file) vectorizer = dataset.get_vectorizer() # + id="KSU79tHDK8bs" outputId="5a7af29b-b4b4-43b2-afa1-25a338de6a4f" colab={"base_uri": "https://localhost:8080/"} model = NMTModel(source_vocab_size=len(vectorizer.source_vocab), source_embedding_size=args.source_embedding_size, target_vocab_size=len(vectorizer.target_vocab), target_embedding_size=args.target_embedding_size, encoding_size=args.encoding_size, target_bos_index=vectorizer.target_vocab.begin_seq_index) if args.reload_from_files and os.path.exists(args.model_state_file): model.load_state_dict(torch.load(args.model_state_file)) print("로드한 모델") else: print("새로운 모델") # + [markdown] id="j4uOlY7_K8bs" # ### 모델 훈련 # + id="QhvY2gmlK8bs" outputId="5aa9f988-aaed-4a4c-e21a-5fe522c2b070" colab={"base_uri": "https://localhost:8080/", "height": 113, "referenced_widgets": ["97c629a82cbd498e9f740ae56b3cf20c", "2609cbe60e4449dbaf04982b16483e30", "0872251cc85a4a8cb6be3ae5f53497b5", "8069d21bfcb64b8ea42c5f368f01dd83", "0cc657f692564bd1805e4eb0e62dce7a", "7010e2a735894ef48e71603f0e0fbcd9", "7ce283a711bb45aeb9c90b88f5b9ddcf", "d6514fa364864045863efedb5eaeb5f0", "597d5973b4a64b31b8ed078f4a2506f8", "b8565cfc580a4bbebf8361b4825275a2", "abe5a3f0186e46078ebc7641f0eda962", "152b8f692e424bc682c39bb560c841c7", "ee745736c7f74345913916c53846e0e9", "7f3cff1c0d5540af900da27bbf96c41b", "20f6c0d0e14041c3bd2ed0aba2dc4d12", "29caa56514724c73ad89f3e62daad395", "1760d6878a8f4da89c1806548324ea5c", "7762b72d4707476ba0b4e187c7cb066a", "e9b8e7d86c8940b1a324a56e117cfe43", "6d83aaf20703462caf3c6f105dcd561a", "d467e7a06d384b1ea5bf435f4a9d3767", "d53bcfeb92ab48d2a5f7f46500fbcd73", "45089a2cf8b64bd1b07145cdf81eac81", "9262a46414084c628af18ed7fcb22cbd"]} model = model.to(args.device) optimizer = optim.Adam(model.parameters(), lr=args.learning_rate) scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer, mode='min', factor=0.5, patience=1) mask_index = vectorizer.target_vocab.mask_index train_state = make_train_state(args) epoch_bar = tqdm.notebook.tqdm(desc='training routine', total=args.num_epochs, position=0) dataset.set_split('train') train_bar = tqdm.notebook.tqdm(desc='split=train', total=dataset.get_num_batches(args.batch_size), position=1, leave=True) dataset.set_split('val') val_bar = tqdm.notebook.tqdm(desc='split=val', total=dataset.get_num_batches(args.batch_size), position=1, leave=True) try: for epoch_index in range(args.num_epochs): sample_probability = (20 + epoch_index) / args.num_epochs train_state['epoch_index'] = epoch_index # 훈련 세트에 대한 순회 # 훈련 세트와 배치 제너레이터 준비, 손실과 정확도를 0으로 설정 dataset.set_split('train') batch_generator = generate_nmt_batches(dataset, batch_size=args.batch_size, device=args.device) running_loss = 0.0 running_acc = 0.0 model.train() for batch_index, batch_dict in enumerate(batch_generator): # 훈련 과정은 5단계로 이루어집니다 # -------------------------------------- # 단계 1. 그레이디언트를 0으로 초기화합니다 optimizer.zero_grad() # 단계 2. 출력을 계산합니다 y_pred = model(batch_dict['x_source'], batch_dict['x_source_length'], batch_dict['x_target'], sample_probability=sample_probability) # 단계 3. 손실을 계산합니다 loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index) # 단계 4. 손실을 사용해 그레이디언트를 계산합니다 loss.backward() # 단계 5. 
옵티마이저로 가중치를 업데이트합니다 optimizer.step() # ----------------------------------------- # 이동 손실과 이동 정확도를 계산합니다 running_loss += (loss.item() - running_loss) / (batch_index + 1) acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index) running_acc += (acc_t - running_acc) / (batch_index + 1) # 진행 상태 막대 업데이트 train_bar.set_postfix(loss=running_loss, acc=running_acc, epoch=epoch_index) train_bar.update() train_state['train_loss'].append(running_loss) train_state['train_acc'].append(running_acc) # 검증 세트에 대한 순회 # 검증 세트와 배치 제너레이터 준비, 손실과 정확도를 0으로 설정 dataset.set_split('val') batch_generator = generate_nmt_batches(dataset, batch_size=args.batch_size, device=args.device) running_loss = 0. running_acc = 0. model.eval() for batch_index, batch_dict in enumerate(batch_generator): # 단계 1. 출력을 계산합니다 y_pred = model(batch_dict['x_source'], batch_dict['x_source_length'], batch_dict['x_target'], sample_probability=sample_probability) # 단계 2. 손실을 계산합니다 loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index) # 단계 3. 이동 손실과 이동 정확도를 계산합니다 running_loss += (loss.item() - running_loss) / (batch_index + 1) acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index) running_acc += (acc_t - running_acc) / (batch_index + 1) # 진행 상태 막대 업데이트 val_bar.set_postfix(loss=running_loss, acc=running_acc, epoch=epoch_index) val_bar.update() train_state['val_loss'].append(running_loss) train_state['val_acc'].append(running_acc) train_state = update_train_state(args=args, model=model, train_state=train_state) scheduler.step(train_state['val_loss'][-1]) if train_state['stop_early']: break train_bar.n = 0 val_bar.n = 0 epoch_bar.set_postfix(best_val=train_state['early_stopping_best_val']) epoch_bar.update() except KeyboardInterrupt: print("반복 중지") # + id="B7ZzwuJeK8bs" from nltk.translate import bleu_score import seaborn as sns import matplotlib.pyplot as plt chencherry = bleu_score.SmoothingFunction() # + id="jJs5T5J8K8bs" def sentence_from_indices(indices, vocab, strict=True, return_string=True): ignore_indices = set([vocab.mask_index, vocab.begin_seq_index, vocab.end_seq_index]) out = [] for index in indices: if index == vocab.begin_seq_index and strict: continue elif index == vocab.end_seq_index and strict: break else: out.append(vocab.lookup_index(index)) if return_string: return " ".join(out) else: return out class NMTSampler: def __init__(self, vectorizer, model): self.vectorizer = vectorizer self.model = model def apply_to_batch(self, batch_dict): self._last_batch = batch_dict y_pred = self.model(x_source=batch_dict['x_source'], x_source_lengths=batch_dict['x_source_length'], target_sequence=batch_dict['x_target']) self._last_batch['y_pred'] = y_pred attention_batched = np.stack(self.model.decoder._cached_p_attn).transpose(1, 0, 2) self._last_batch['attention'] = attention_batched def _get_source_sentence(self, index, return_string=True): indices = self._last_batch['x_source'][index].cpu().detach().numpy() vocab = self.vectorizer.source_vocab return sentence_from_indices(indices, vocab, return_string=return_string) def _get_reference_sentence(self, index, return_string=True): indices = self._last_batch['y_target'][index].cpu().detach().numpy() vocab = self.vectorizer.target_vocab return sentence_from_indices(indices, vocab, return_string=return_string) def _get_sampled_sentence(self, index, return_string=True): _, all_indices = torch.max(self._last_batch['y_pred'], dim=2) sentence_indices = all_indices[index].cpu().detach().numpy() vocab = self.vectorizer.target_vocab return 
sentence_from_indices(sentence_indices, vocab, return_string=return_string) def get_ith_item(self, index, return_string=True): output = {"source": self._get_source_sentence(index, return_string=return_string), "reference": self._get_reference_sentence(index, return_string=return_string), "sampled": self._get_sampled_sentence(index, return_string=return_string), "attention": self._last_batch['attention'][index]} reference = output['reference'] hypothesis = output['sampled'] if not return_string: reference = " ".join(reference) hypothesis = " ".join(hypothesis) output['bleu-4'] = bleu_score.sentence_bleu(references=[reference], hypothesis=hypothesis, smoothing_function=chencherry.method1) return output # + id="Xg-jlqiqK8bt" model = model.eval().to(args.device) sampler = NMTSampler(vectorizer, model) dataset.set_split('test') batch_generator = generate_nmt_batches(dataset, batch_size=args.batch_size, device=args.device) test_results = [] for batch_dict in batch_generator: sampler.apply_to_batch(batch_dict) for i in range(args.batch_size): test_results.append(sampler.get_ith_item(i, False)) # + id="JrZt4gm6K8bt" outputId="b60fb63c-f309-4310-9543-656d9ae25731" colab={"base_uri": "https://localhost:8080/", "height": 282} plt.hist([r['bleu-4'] for r in test_results], bins=100); np.mean([r['bleu-4'] for r in test_results]), np.median([r['bleu-4'] for r in test_results]) # + id="mB4SzIapK8bt" dataset.set_split('val') batch_generator = generate_nmt_batches(dataset, batch_size=args.batch_size, device=args.device) batch_dict = next(batch_generator) model = model.eval().to(args.device) sampler = NMTSampler(vectorizer, model) sampler.apply_to_batch(batch_dict) # + id="25CpKy7OK8bt" all_results = [] for i in range(args.batch_size): all_results.append(sampler.get_ith_item(i, False)) # + id="2nA6TKFHK8bt" outputId="1d055bff-646d-4cd8-ef97-553a5cb6800a" colab={"base_uri": "https://localhost:8080/"} top_results = [x for x in all_results if x['bleu-4']>0.5] len(top_results) # + id="8qs5HatZK8bt" outputId="3d1523e5-6fea-45ff-9897-35063fb5e2e6" colab={"base_uri": "https://localhost:8080/", "height": 1000} for sample in top_results: plt.figure() target_len = len(sample['sampled']) source_len = len(sample['source']) attention_matrix = sample['attention'][:target_len, :source_len+2].transpose()#[::-1] ax = sns.heatmap(attention_matrix, center=0.0) ylabs = ["<BOS>"]+sample['source']+["<EOS>"] #ylabs = sample['source'] #ylabs = ylabs[::-1] ax.set_yticklabels(ylabs, rotation=0) ax.set_xticklabels(sample['sampled'], rotation=90) ax.set_xlabel("Target Sentence") ax.set_ylabel("Source Sentence\n\n") # + id="Y4jcpvQsK8bu" outputId="cebf7ad6-2694-408c-927c-98f3ff0f0176" colab={"base_uri": "https://localhost:8080/"} def get_source_sentence(vectorizer, batch_dict, index): indices = batch_dict['x_source'][index].cpu().data.numpy() vocab = vectorizer.source_vocab return sentence_from_indices(indices, vocab) def get_true_sentence(vectorizer, batch_dict, index): return sentence_from_indices(batch_dict['y_target'].cpu().data.numpy()[index], vectorizer.target_vocab) def get_sampled_sentence(vectorizer, batch_dict, index): y_pred = model(x_source=batch_dict['x_source'], x_source_lengths=batch_dict['x_source_length'], target_sequence=batch_dict['x_target'], sample_probability=1.0) return sentence_from_indices(torch.max(y_pred, dim=2)[1].cpu().data.numpy()[index], vectorizer.target_vocab) def get_all_sentences(vectorizer, batch_dict, index): return {"source": get_source_sentence(vectorizer, batch_dict, index), "truth": 
get_true_sentence(vectorizer, batch_dict, index), "sampled": get_sampled_sentence(vectorizer, batch_dict, index)} def sentence_from_indices(indices, vocab, strict=True): ignore_indices = set([vocab.mask_index, vocab.begin_seq_index, vocab.end_seq_index]) out = [] for index in indices: if index == vocab.begin_seq_index and strict: continue elif index == vocab.end_seq_index and strict: return " ".join(out) else: out.append(vocab.lookup_index(index)) return " ".join(out) results = get_all_sentences(vectorizer, batch_dict, 1) results
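# + [markdown]
# As a quick usage sketch of the helpers defined above, the loop below prints the source sentence, the reference translation, and the model's sampled translation for the first few items of the current validation batch.

# +
for example_index in range(3):
    sentences = get_all_sentences(vectorizer, batch_dict, example_index)
    print("source :", sentences["source"])
    print("truth  :", sentences["truth"])
    print("sampled:", sentences["sampled"])
    print("-" * 40)
# -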
36.527731
1,018
7437d4f11b505d2c033e02b907509c9e520882a8
py
python
ml_workflows/ml.ipynb
ronaldokun/datacamp
['MIT']
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: datacamp # language: python # name: datacamp # --- import os import pandas as pd from sklearn.ensemble import AdaBoostClassifier from sklearn.model_selection import cross_val_score, train_test_split from sklearn.metrics import accuracy_score, make_scorer import numpy as np from pprint import pprint as pp import warnings warnings.filterwarnings('ignore') # ## Human in the Loop # In the previous chapter, you perfected your knowledge of the standard supervised learning workflows. In this chapter, you will critically examine the ways in which expert knowledge is incorporated in supervised learning. This is done through the identification of the appropriate unit of analysis which might require feature engineering across multiple data sources, through the sometimes imperfect process of labeling examples, and through the specification of a loss function that captures the true business value of errors made by your machine learning model. class PDF(object): def __init__(self, pdf, size=(1080,720)): self.pdf = pdf self.size = size def _repr_html_(self): return f'<iframe src={self.pdf} width={self.size[0]} height={self.size[1]}></iframe>' def _repr_latex_(self): return fr'\includegraphics[width=1.0\textwidth]{{{self.pdf}}}' PDF('pdf/chapter2.pdf',size=(1080, 720)) # # "Expert Knowledge" flows = pd.read_csv('data/lanl_flows.csv') # <h1 class="exercise--title">Is the source or the destination bad?</h1><div class=""><p>In the previous lesson, you used the <em>destination</em> computer as your entity of interest. However, your cybersecurity analyst just told you that it is the infected machines that generate the bad traffic, and will therefore appear as a <em>source</em>, not a destination, in the <code>flows</code> dataset. </p> # <p>The data <code>flows</code> has been preloaded, as well as the list <code>bad</code> of infected IDs and the feature extractor <code>featurizer()</code> from the previous lesson. 
You also have <code>numpy</code> available as <code>np</code>, <code>AdaBoostClassifier()</code>, and <code>cross_val_score()</code>.</p></div> def featurize(df): return { 'unique_ports': len(set(df['destination_port'])), 'average_packet': np.mean(df['packet_count']), 'average_duration': np.mean(df['duration']) } bads = {'C1', 'C10', 'C10005', 'C1003', 'C1006', 'C1014', 'C1015', 'C102', 'C1022', 'C1028', 'C10405', 'C1042', 'C1046', 'C10577', 'C1065', 'C108', 'C10817', 'C1085', 'C1089', 'C1096', 'C11039', 'C11178', 'C1119', 'C11194', 'C1124', 'C1125', 'C113', 'C115', 'C11727', 'C1173', 'C1183', 'C1191', 'C12116', 'C1215', 'C1222', 'C1224', 'C12320', 'C12448', 'C12512', 'C126', 'C1268', 'C12682', 'C1269', 'C1275', 'C1302', 'C1319', 'C13713', 'C1382', 'C1415', 'C143', 'C1432', 'C1438', 'C1448', 'C1461', 'C1477', 'C1479', 'C148', 'C1482', 'C1484', 'C1493', 'C15', 'C1500', 'C1503', 'C1506', 'C1509', 'C15197', 'C152', 'C15232', 'C1549', 'C155', 'C1555', 'C1567', 'C1570', 'C1581', 'C16088', 'C1610', 'C1611', 'C1616', 'C1626', 'C1632', 'C16401', 'C16467', 'C16563', 'C1710', 'C1732', 'C1737', 'C17425', 'C17600', 'C17636', 'C17640', 'C17693', 'C177', 'C1776', 'C17776', 'C17806', 'C1784', 'C17860', 'C1797', 'C18025', 'C1810', 'C18113', 'C18190', 'C1823', 'C18464', 'C18626', 'C1887', 'C18872', 'C19038', 'C1906', 'C19156', 'C19356', 'C1936', 'C1944', 'C19444', 'C1952', 'C1961', 'C1964', 'C1966', 'C1980', 'C19803', 'C19932', 'C2012', 'C2013', 'C20203', 'C20455', 'C2057', 'C2058', 'C20677', 'C2079', 'C20819', 'C2085', 'C2091', 'C20966', 'C21349', 'C21664', 'C21814', 'C21919', 'C21946', 'C2196', 'C21963', 'C22174', 'C22176', 'C22275', 'C22409', 'C2254', 'C22766', 'C231', 'C2341', 'C2378', 'C2388', 'C243', 'C246', 'C2519', 'C2578', 'C2597', 'C2604', 'C2609', 'C2648', 'C2669', 'C2725', 'C2816', 'C2844', 'C2846', 'C2849', 'C2877', 'C2914', 'C294', 'C2944', 'C3019', 'C302', 'C3037', 'C305', 'C306', 'C307', 'C313', 'C3153', 'C3170', 'C3173', 'C3199', 'C3249', 'C3288', 'C3292', 'C3303', 'C3305', 'C332', 'C338', 'C3380', 'C3388', 'C3422', 'C3435', 'C3437', 'C3455', 'C346', 'C3491', 'C3521', 'C353', 'C3586', 'C359', 'C3597', 'C3601', 'C3610', 'C3629', 'C3635', 'C366', 'C368', 'C3699', 'C370', 'C3755', 'C3758', 'C3813', 'C385', 'C3888', 'C395', 'C398', 'C400', 'C4106', 'C4159', 'C4161', 'C42', 'C423', 'C4280', 'C429', 'C430', 'C4403', 'C452', 'C4554', 'C457', 'C458', 'C46', 'C4610', 'C464', 'C467', 'C477', 'C4773', 'C4845', 'C486', 'C492', 'C4934', 'C5030', 'C504', 'C506', 'C5111', 'C513', 'C52', 'C528', 'C529', 'C5343', 'C5439', 'C5453', 'C553', 'C5618', 'C5653', 'C5693', 'C583', 'C586', 'C61', 'C612', 'C625', 'C626', 'C633', 'C636', 'C6487', 'C6513', 'C685', 'C687', 'C706', 'C7131', 'C721', 'C728', 'C742', 'C7464', 'C7503', 'C754', 'C7597', 'C765', 'C7782', 'C779', 'C78', 'C791', 'C798', 'C801', 'C8172', 'C8209', 'C828', 'C849', 'C8490', 'C853', 'C8585', 'C8751', 'C881', 'C882', 'C883', 'C886', 'C89', 'C90', 'C9006', 'C917', 'C92', 'C923', 'C96', 'C965', 'C9692', 'C9723', 'C977', 'C9945'} # + # Group by source computer, and apply the feature extractor out = flows.groupby('source_computer').apply(featurize) # Convert the iterator to a dataframe by calling list on it X = pd.DataFrame(list(out), index=out.index) # Check which sources in X.index are bad to create labels y = [x in bads for x in X.index] # - X.head() # + # Report the average accuracy of Adaboost over 3-fold CV print(np.mean(cross_val_score(AdaBoostClassifier(), X, y))) # - # <h1 class="exercise--title">Feature engineering on grouped data</h1><div 
class=""><p>You will now build on the previous exercise, by considering one additional feature: the number of unique protocols used by each source computer. Note that with grouped data, it is always possible to construct features in this manner: you can take the number of unique elements of all categorical columns, and the mean of all numeric columns as your starting point. As before, you have <code>flows</code> preloaded, <code>cross_val_score()</code> for measuring accuracy, <code>AdaBoostClassifier()</code>, <code>pandas</code> as <code>pd</code> and <code>numpy</code> as <code>np</code>.</p></div> # + # Create a feature counting unique protocols per source protocols = flows.groupby('source_computer').apply(lambda df: len(set(df.protocol))) # Convert this feature into a dataframe, naming the column protocols_DF = pd.DataFrame(protocols, index=protocols.index, columns=['protocol']) # Now concatenate this feature with the previous dataset, X X_more = pd.concat([X, protocols_DF], axis=1) # Refit the classifier and report its accuracy print(np.mean(cross_val_score(AdaBoostClassifier(), X_more, y))) # - # <h1 class="exercise--title">Turning a heuristic into a classifier</h1><div class=""><p>You are surprised by the fact that heuristics can be so helpful. So you decide to treat the heuristic that "too many unique ports is suspicious" as a classifier in its own right. You achieve that by thresholding the number of unique ports per source by the average number used in bad source computers -- these are computers for which the label is <code>True</code>. The dataset is preloaded and split into training and test, so you have objects <code>X_train</code>, <code>X_test</code>, <code>y_train</code> and <code>y_test</code> in memory. Your imports include <code>accuracy_score()</code>, and <code>numpy</code> as <code>np</code>. To clarify: you won't be fitting a classifier from scikit-learn in this exercise, but instead you will define your own classification rule explicitly!</p></div> X_train, X_test, y_train, y_test = train_test_split(X,y) # + #Create a new dataset `X_train_bad` by subselecting bad hosts X_train_bad = X_train[y_train] #Calculate the average of `unique_ports` in bad examples avg_bad_ports = np.mean(X_train_bad.unique_ports) #Label as positive sources that use more ports than that pred_port = X_test['unique_ports'] > avg_bad_ports #Print the `accuracy_score` of the heuristic print(accuracy_score(y_test, pred_port)) # - # <h1 class="exercise--title">Combining heuristics</h1><div class=""><p>A different cyber analyst tells you that during certain types of attack, the infected source computer sends small bits of traffic, to avoid detection. This makes you wonder whether it would be better to create a combined heuristic that simultaneously looks for large numbers of ports and small packet sizes. Does this improve performance over the simple port heuristic? As with the last exercise, you have <code>X_train</code>, <code>X_test</code>, <code>y_train</code> and <code>y_test</code> in memory. The sample code also helps you reproduce the outcome of the port heuristic, <code>pred_port</code>. 
You also have <code>numpy</code> as <code>np</code> and <code>accuracy_score()</code> preloaded.</p></div> # + # Compute the mean of average_packet for bad sources avg_bad_packet = np.mean(X_train[y_train]['average_packet']) # Label as positive if average_packet is lower than that pred_packet = X_test['average_packet'] < avg_bad_packet # Find indices where pred_port and pred_packet both True pred_port = X_test['unique_ports'] > avg_bad_ports pred_both = pred_packet & pred_port # Ports only produced an accuracy of 0.919. Is this better? print(accuracy_score(y_test, pred_both)) # - # <h1 class="exercise--title">Dealing with label noise</h1><div class=""><p>One of your cyber analysts informs you that many of the labels for the first 100 source computers in your training data might be wrong because of a database error. She hopes you can still use the data because most of the labels are still correct, but asks you to treat these 100 labels as "noisy". Thankfully you know how to do that, using weighted learning. The contaminated data is available in your workspace as <code>X_train</code>, <code>X_test</code>, <code>y_train_noisy</code>, <code>y_test</code>. You want to see if you can improve the performance of a <code>GaussianNB()</code> classifier using weighted learning. You can use the optional parameter <code>sample_weight</code>, which is supported by the <code>.fit()</code> methods of most popular classifiers. The function <code>accuracy_score()</code> is preloaded. You can consult the image below for guidance. </p> # <p><img src="https://assets.datacamp.com/production/repositories/3554/datasets/ea99ce2b5baa3cb9f3d9085b7387f2ea7d3bdfc8/wsl_noisy_labels.png" alt=""></p></div> y_train_noisy = y_train.copy() for i in range(100): y_train_noisy[i] = True # + from sklearn.naive_bayes import GaussianNB # Fit a Gaussian Naive Bayes classifier to the training data clf = GaussianNB().fit(X_train, y_train_noisy) # Report its accuracy on the test data print(accuracy_score(y_test, clf.predict(X_test))) # Assign half the weight to the first 100 noisy examples weights = [0.5]*100 + [1.0]*(len(X_train)-100) # Refit using weights and report accuracy. Has it improved? clf_weights = GaussianNB().fit(X_train, y_train_noisy, sample_weight=weights) print(accuracy_score(y_test, clf_weights.predict(X_test))) # - # # 3. Model Lifecycle Management # In the previous chapter, you employed different ways of incorporating feedback from experts in your workflow, and evaluating it in ways that are aligned with business value. Now it is time for you to practice the skills needed to productize your model and ensure it continues to perform well thereafter by iteratively improving it. You will also learn to diagnose dataset shift and mitigate the effect that a changing environment can have on your model's accuracy. PDF('pdf/chapter3.pdf') df = pd.read_csv('data/arrh.csv') X, y = df.iloc[:, :-1], df.iloc[:, -1] X_train, X_test, y_train, y_test = train_test_split(X, y) # <h1 class="exercise--title">Your first pipeline - again!</h1><div class=""><p>Back in the arrhythmia startup, your monthly review is coming up, and as part of that an expert Python programmer will be reviewing your code. You decide to tidy up by following best practices and replace your script for feature selection and random forest classification, with a pipeline. 
You are using a training dataset available as <code>X_train</code> and <code>y_train</code>, and a number of modules: <code>RandomForestClassifier</code>, <code>SelectKBest()</code> and <code>f_classif()</code> for feature selection, as well as <code>GridSearchCV</code> and <code>Pipeline</code>.</p></div>

# +
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer

# Short alias for the classifier; this cell and the pickling exercise further down refer to it as `rf`
rf = RandomForestClassifier

# Create pipeline with feature selector and classifier
pipe = Pipeline([
    ('feature_selection', SelectKBest(f_classif)),
    ('clf', rf(random_state=2))])

# Create a parameter grid
params = {
    'feature_selection__k': [10, 20],
    'clf__n_estimators': [2, 5]}

# Initialize the grid search object
grid_search = GridSearchCV(pipe, param_grid=params)

# Fit it to the data and print the best value combination
print(grid_search.fit(X_train, y_train).best_params_)
# -

# <h1 class="exercise--title">Custom scorers in pipelines</h1><div class=""><p>You are proud of the improvement in your code quality, but just remembered that previously you had to use a custom scoring metric in order to account for the fact that false positives are costlier to your startup than false negatives. You hence want to equip your pipeline with scorers other than accuracy, including <code>roc_auc_score()</code>, <code>f1_score()</code>, and your own custom scoring function. The pipeline from the previous lesson is available as <code>pipe</code>, as is the parameter grid as <code>params</code> and the training data as <code>X_train</code>, <code>y_train</code>. You also have <code>confusion_matrix()</code> for the purpose of writing your own metric.</p></div>

# +
from sklearn.metrics import make_scorer, roc_auc_score

# Create a custom scorer
scorer = make_scorer(roc_auc_score)

# Initialize the CV object
gs = GridSearchCV(pipe, param_grid=params, scoring=scorer)

# Fit it to the data and print the winning combination
print(gs.fit(X_train, y_train).best_params_)

# +
from sklearn.metrics import f1_score

# Create a custom scorer
scorer = make_scorer(f1_score)

# Initialize the CV object
gs = GridSearchCV(pipe, param_grid=params, scoring=scorer)

# Fit it to the data and print the winning combination
print(gs.fit(X_train, y_train).best_params_)
# -

from sklearn.metrics import confusion_matrix
def my_metric(y_test, y_est, cost_fp=10.0, cost_fn=1.0):
    tn, fp, fn, tp = confusion_matrix(y_test, y_est).ravel()
    return cost_fp * fp + cost_fn * fn

# +
# Create a custom scorer from the cost-sensitive metric defined above
scorer = make_scorer(my_metric)

# Initialize the CV object
gs = GridSearchCV(pipe, param_grid=params, scoring=scorer)

# Fit it to the data and print the winning combination
print(gs.fit(X_train, y_train).best_params_)
# -

# <h1 class="exercise--title">Pickles</h1><div class=""><p>Finally, it is time for you to push your first model to production. It is a random forest classifier which you will use as a baseline, while you are still working to develop a better alternative.
You have access to the data split in training test with their usual names, <code>X_train</code>, <code>X_test</code>, <code>y_train</code> and <code>y_test</code>, as well as to the modules <code>RandomForestClassifier()</code> and <code>pickle</code>, whose methods <code>.load()</code> and <code>.dump()</code> you will need for this exercise.</p></div> # + import pickle # Fit a random forest to the training set clf = rf(random_state=42).fit( X_train, y_train) # Save it to a file, to be pushed to production with open('model.pkl', 'wb') as file: pickle.dump(clf, file=file) # Now load the model from file in the production environment with open('model.pkl', 'rb') as file: clf_from_file = pickle.load(file) # Predict the labels of the test dataset preds = clf_from_file.predict(X_test) # - # <h1 class="exercise--title">Custom function transformers in pipelines</h1><div class=""><p>At some point, you were told that the sensors might be performing poorly for obese individuals. Previously you had dealt with that using weights, but now you are thinking that this information might also be useful for feature engineering, so you decide to replace the recorded weight of an individual with an indicator of whether they are obese. You want to do this using pipelines. You have <code>numpy</code> available as <code>np</code>, <code>RandomForestClassifier()</code>, <code>FunctionTransformer()</code>, and <code>GridSearchCV()</code>.</p></div> # + from sklearn.preprocessing import FunctionTransformer # Define a feature extractor to flag very large values def more_than_average(X, multiplier=1.0): Z = X.copy() Z[:,1] = Z[:,1] > multiplier*np.mean(Z[:,1]) return Z # Convert your function so that it can be used in a pipeline pipe = Pipeline([ ('ft', FunctionTransformer(more_than_average)), ('clf', RandomForestClassifier(random_state=2))]) # Optimize the parameter multiplier using GridSearchCV params = {'ft__multiplier': [1,2,3]} gs = GridSearchCV(pipe, param_grid=params) print(gs.fit(X_train, y_train).best_params_) # -
61.675
2,706
d838e8f974e7e4cf27308d4d31c3e02c87f4a820
py
python
Phase_1/ds-sql2-main/sql.ipynb
clareadunne/ds-east-042621-lectures
['MIT']
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"><li><span><a href="#Objectives" data-toc-modified-id="Objectives-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Objectives</a></span></li><li><span><a href="#Aggregating-Functions" data-toc-modified-id="Aggregating-Functions-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Aggregating Functions</a></span><ul class="toc-item"><li><span><a href="#Example-Simple-Aggregations" data-toc-modified-id="Example-Simple-Aggregations-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Example Simple Aggregations</a></span></li></ul></li><li><span><a href="#Grouping-in-SQL" data-toc-modified-id="Grouping-in-SQL-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Grouping in SQL</a></span><ul class="toc-item"><li><span><a href="#Example-GROUP-BY--Statements" data-toc-modified-id="Example-GROUP-BY--Statements-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Example <code>GROUP BY</code> Statements</a></span><ul class="toc-item"><li><span><a href="#Without-GROUP-BY" data-toc-modified-id="Without-GROUP-BY-3.1.1"><span class="toc-item-num">3.1.1&nbsp;&nbsp;</span>Without <code>GROUP BY</code></a></span></li><li><span><a href="#With-GROUP-BY" data-toc-modified-id="With-GROUP-BY-3.1.2"><span class="toc-item-num">3.1.2&nbsp;&nbsp;</span>With <code>GROUP BY</code></a></span></li></ul></li><li><span><a href="#Group-Task" data-toc-modified-id="Group-Task-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Group Task</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Possible-Solution" data-toc-modified-id="Possible-Solution-3.2.0.1"><span class="toc-item-num">3.2.0.1&nbsp;&nbsp;</span>Possible Solution</a></span></li></ul></li></ul></li><li><span><a href="#Exercises:-Grouping" data-toc-modified-id="Exercises:-Grouping-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Exercises: Grouping</a></span><ul class="toc-item"><li><span><a href="#Grouping-Exercise-1" data-toc-modified-id="Grouping-Exercise-1-3.3.1"><span class="toc-item-num">3.3.1&nbsp;&nbsp;</span>Grouping Exercise 1</a></span></li><li><span><a href="#Grouping-Exercise-2" data-toc-modified-id="Grouping-Exercise-2-3.3.2"><span class="toc-item-num">3.3.2&nbsp;&nbsp;</span>Grouping Exercise 2</a></span></li></ul></li></ul></li><li><span><a href="#Filtering-Groups-with-HAVING" data-toc-modified-id="Filtering-Groups-with-HAVING-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Filtering Groups with <code>HAVING</code></a></span><ul class="toc-item"><li><span><a href="#Examples-of-Using-HAVING" data-toc-modified-id="Examples-of-Using-HAVING-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Examples of Using <code>HAVING</code></a></span><ul class="toc-item"><li><span><a href="#Simple-Filtering---Number-of-Airports-in-a-Country" data-toc-modified-id="Simple-Filtering---Number-of-Airports-in-a-Country-4.1.1"><span class="toc-item-num">4.1.1&nbsp;&nbsp;</span>Simple Filtering - Number of Airports in a Country</a></span></li></ul></li><li><span><a href="#Filtering-Different-Aggregation---Airport-Altitudes" data-toc-modified-id="Filtering-Different-Aggregation---Airport-Altitudes-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Filtering Different Aggregation - Airport 
Altitudes</a></span><ul class="toc-item"><li><span><a href="#Looking-at-the-airports-Table" data-toc-modified-id="Looking-at-the-airports-Table-4.2.1"><span class="toc-item-num">4.2.1&nbsp;&nbsp;</span>Looking at the <code>airports</code> Table</a></span></li><li><span><a href="#Looking-at-the-Highest-Airport" data-toc-modified-id="Looking-at-the-Highest-Airport-4.2.2"><span class="toc-item-num">4.2.2&nbsp;&nbsp;</span>Looking at the Highest Airport</a></span></li><li><span><a href="#Looking-at-the-Number-of-Airports-Too" data-toc-modified-id="Looking-at-the-Number-of-Airports-Too-4.2.3"><span class="toc-item-num">4.2.3&nbsp;&nbsp;</span>Looking at the Number of Airports Too</a></span></li><li><span><a href="#Finally-Filter-Aggregation" data-toc-modified-id="Finally-Filter-Aggregation-4.2.4"><span class="toc-item-num">4.2.4&nbsp;&nbsp;</span>Finally Filter Aggregation</a></span></li></ul></li></ul></li><li><span><a href="#Joins" data-toc-modified-id="Joins-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Joins</a></span><ul class="toc-item"><li><span><a href="#INNER-JOIN" data-toc-modified-id="INNER-JOIN-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span><code>INNER JOIN</code></a></span><ul class="toc-item"><li><span><a href="#Code-Example-for-Inner-Joins" data-toc-modified-id="Code-Example-for-Inner-Joins-5.1.1"><span class="toc-item-num">5.1.1&nbsp;&nbsp;</span>Code Example for Inner Joins</a></span><ul class="toc-item"><li><span><a href="#Inner-Join-Routes-&amp;-Airline-Data" data-toc-modified-id="Inner-Join-Routes-&amp;-Airline-Data-5.1.1.1"><span class="toc-item-num">5.1.1.1&nbsp;&nbsp;</span>Inner Join Routes &amp; Airline Data</a></span></li><li><span><a href="#Note:-Losing-Data-with-Inner-Joins" data-toc-modified-id="Note:-Losing-Data-with-Inner-Joins-5.1.1.2"><span class="toc-item-num">5.1.1.2&nbsp;&nbsp;</span>Note: Losing Data with Inner Joins</a></span></li></ul></li></ul></li><li><span><a href="#LEFT-JOIN" data-toc-modified-id="LEFT-JOIN-5.2"><span class="toc-item-num">5.2&nbsp;&nbsp;</span><code>LEFT JOIN</code></a></span><ul class="toc-item"><li><span><a href="#Code-Example-for-Left-Join" data-toc-modified-id="Code-Example-for-Left-Join-5.2.1"><span class="toc-item-num">5.2.1&nbsp;&nbsp;</span>Code Example for Left Join</a></span></li></ul></li><li><span><a href="#Exercise:-Joins" data-toc-modified-id="Exercise:-Joins-5.3"><span class="toc-item-num">5.3&nbsp;&nbsp;</span>Exercise: Joins</a></span><ul class="toc-item"><li><span><a href="#Possible-Solution" data-toc-modified-id="Possible-Solution-5.3.1"><span class="toc-item-num">5.3.1&nbsp;&nbsp;</span>Possible Solution</a></span></li></ul></li></ul></li><li><span><a href="#Level-Up:-Execution-Order" data-toc-modified-id="Level-Up:-Execution-Order-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Level Up: Execution Order</a></span></li></ul></div> # - # ![sql](img/sql-logo.jpg) # + import pandas as pd import sqlite3 import pandasql conn = sqlite3.connect("flights.db") cur = conn.cursor() # + [markdown] heading_collapsed=true # # Objectives # + [markdown] hidden=true # - Use SQL aggregation functions with GROUP BY # - Use HAVING for group filtering # - Use SQL JOIN to combine tables using keys # + [markdown] heading_collapsed=true # # Aggregating Functions # + [markdown] hidden=true # > A SQL **aggregating function** takes in many values and returns one value. # + [markdown] hidden=true # We might've already seen some SQL aggregating functions like `COUNT()`. There's also others like SUM(), AVG(), MIN(), and MAX(). 
# + [markdown] heading_collapsed=true hidden=true # ## Example Simple Aggregations # + hidden=true # Max value for longitude pd.read_sql(''' SELECT -- Note we have to cast to a numerical value first MAX( CAST(airports.longitude AS REAL) ) FROM airports ''', conn) # + hidden=true # Max value for id in table pd.read_sql(''' SELECT * FROM airports ''', conn) # + hidden=true # Effectively counts all the not active airlines pd.read_sql(''' SELECT COUNT() FROM airlines WHERE active='N' ''', conn) # + [markdown] hidden=true # We can also give aliases to our aggregations: # + hidden=true # Effectively counts all the active airlines pd.read_sql(''' SELECT COUNT() as number_of_active_airlines FROM airlines WHERE active='Y' ''', conn) # + [markdown] heading_collapsed=true # # Grouping in SQL # + [markdown] hidden=true # We can go deeper and use aggregation functions on _groups_ using the `GROUP BY` clause. # + [markdown] hidden=true # The `GROUP BY` clause will group one or more columns together with the same values as one group to perform aggregation functions on. # + [markdown] heading_collapsed=true hidden=true # ## Example `GROUP BY` Statements # + [markdown] hidden=true # Let's say we want to know how many active and non-active airlines there are. # + [markdown] heading_collapsed=true hidden=true # ### Without `GROUP BY` # + [markdown] hidden=true # Let's first start with just seeing how many airlines there are: # + hidden=true df_results = pd.read_sql(''' SELECT -- Reminde that this counts the number of rows before the SELECT COUNT() AS number_of_airlines FROM airlines ''', conn) df_results # + [markdown] hidden=true # One way for us to get the counts for each is to create two queries that will filter each kind of airline (active vs non-active) count these values: # + hidden=true df_active = pd.read_sql(''' SELECT COUNT() AS number_of_active_airlines FROM airlines WHERE active='Y' ''', conn) df_not_active = pd.read_sql(''' SELECT COUNT() AS number_of_not_active_airlines FROM airlines WHERE active='N' ''', conn) display(df_active) display(df_not_active) # + [markdown] hidden=true # This technically works but you can see it's probably a bit inefficient and not as clean. # + [markdown] heading_collapsed=true hidden=true # ### With `GROUP BY` # + [markdown] hidden=true # Instead, we can tell the SQL server to do the work for us by grouping values we care about for us! # + hidden=true df_results = pd.read_sql(''' SELECT COUNT() AS number_of_airlines FROM airlines GROUP BY airlines.active ''', conn) df_results # + [markdown] hidden=true # This is great! And if you look closely, you can observe we have _three_ different groups instead of our expected two! # + [markdown] hidden=true # Let's also print out the `airlines.active` value for each group/aggregation so we know what we're looking at: # + hidden=true df_results = pd.read_sql(''' SELECT airlines.active, COUNT() AS number_of_airlines FROM airlines GROUP BY airlines.active ''', conn) df_results # + [markdown] heading_collapsed=true hidden=true # ## Group Task # + [markdown] hidden=true # - Which countries have the highest numbers of active airlines? Return the top 10. 
# + hidden=true pd.read_sql(''' SELECT COUNT() AS number_of_airlines, airlines.country FROM airlines WHERE active='Y' GROUP BY airlines.country ORDER BY number_of_airlines DESC LIMIT 10 ''', conn) # + [markdown] heading_collapsed=true hidden=true # #### Possible Solution # + hidden=true pd.read_sql(''' SELECT COUNT() AS num, country FROM airlines WHERE active='Y' GROUP BY country ORDER BY num DESC LIMIT 10 ''', conn) # + [markdown] hidden=true # > Note that the `GROUP BY` clause is considered _before_ the `ORDER BY` and `LIMIT` clauses # + [markdown] heading_collapsed=true hidden=true # ## Exercises: Grouping # + [markdown] heading_collapsed=true hidden=true # ### Grouping Exercise 1 # + [markdown] hidden=true # - Which countries have the highest numbers of inactive airlines? Return all the countries that have more than 10. # + hidden=true inactive_airports = pd.read_sql(''' SELECT COUNT() AS number_of_airlines, airlines.country FROM airlines WHERE (active='N' OR active = 'n') GROUP BY airlines.country ORDER BY number_of_airlines DESC ''', conn) inactive_airports inactive_airports_morethanten = inactive_airports[inactive_airports['number_of_airlines'] > 10] inactive_airports_morethanten # + [markdown] heading_collapsed=true hidden=true # ### Grouping Exercise 2 # + [markdown] hidden=true # - Run a query that will return the number of airports by time zone. Each row should have a number of airports and a time zone. # + hidden=true pd.read_sql(''' SELECT COUNT() AS number_of_airlines, airlines.country FROM airlines WHERE active='N' GROUP BY airlines.country ORDER BY number_of_airlines DESC LIMIT 10 ''', conn) # + [markdown] heading_collapsed=true # # Filtering Groups with `HAVING` # + [markdown] hidden=true # We showed that you can filter tables with `WHERE`. We can similarly filter _groups/aggregations_ using `HAVING` clauses. # + [markdown] heading_collapsed=true hidden=true # ## Examples of Using `HAVING` # + [markdown] heading_collapsed=true hidden=true # ### Simple Filtering - Number of Airports in a Country # + [markdown] hidden=true # Let's come back to the aggregation of active airports: # + hidden=true pd.read_sql(''' SELECT COUNT() AS num, country FROM airlines WHERE active='Y' GROUP BY country ORDER BY num DESC ''', conn) # + [markdown] hidden=true # We can see have a lot of results. But maybe we only want to keep the countries that have more than $30$ active airports: # + hidden=true pd.read_sql(''' SELECT COUNT() AS num, country FROM airlines WHERE active='Y' GROUP BY country HAVING COUNT() > 30 ORDER BY num DESC ''', conn) # + [markdown] heading_collapsed=true hidden=true # ## Filtering Different Aggregation - Airport Altitudes # + [markdown] hidden=true # We can also filter on other aggregations. For example, let's say we want to investigate the `airports` table. # + [markdown] hidden=true # Specifically, we want to know the height of the _highest airport_ in a country given that it has _at least $100$ airports_. 
# + [markdown] heading_collapsed=true hidden=true # ### Looking at the `airports` Table # + hidden=true df_airports = pd.read_sql(''' SELECT COUNT() AS num, country FROM airlines WHERE active='Y' GROUP BY country HAVING COUNT() > 30 ORDER BY num DESC ''', conn) df_airports.head() # + [markdown] heading_collapsed=true hidden=true # ### Looking at the Highest Airport # + [markdown] hidden=true # Let's first get the highest altitude for each airport: # + hidden=true pd.read_sql(''' SELECT airports.country ,MAX( CAST(airports.altitude AS REAL) ) AS highest_airport_in_country FROM airports GROUP BY airports.country ORDER BY airports.country ''', conn) # + [markdown] heading_collapsed=true hidden=true # ### Looking at the Number of Airports Too # + [markdown] hidden=true # We can also get the number of airports for each country. # + hidden=true pd.read_sql(''' SELECT airports.country ,MAX( CAST(airports.altitude AS REAL) ) AS highest_airport_in_country ,COUNT() AS number_of_airports_in_country FROM airports GROUP BY airports.country ORDER BY airports.country ''', conn) # + [markdown] heading_collapsed=true hidden=true # ### Finally Filter Aggregation # + [markdown] hidden=true # > Recall: # > # > We want to know the height of the _highest airport_ in a country given that it has _at least $100$ airports_. # + hidden=true pd.read_sql(''' SELECT airports.country ,MAX( CAST(airports.altitude AS REAL) ) AS highest_airport_in_country -- Note we don't have to include this in our SELECT --,COUNT() AS number_of_airports_in_country FROM airports GROUP BY airports.country HAVING COUNT() > 100 ORDER BY airports.country ''', conn) # + [markdown] heading_collapsed=true # # Joins # + [markdown] hidden=true # The biggest advantage in using a relational database (like we've been with SQL) is that you can create **joins**. # + [markdown] hidden=true # > By using **`JOIN`** in our query, we can connect different tables using their _relationships_ to other tables. # > # > Usually we use a key (_foriegn_key_) to tell us how the two tables are related. # + [markdown] hidden=true # There are different types of joins and each has their different use case. # + [markdown] heading_collapsed=true hidden=true # ## `INNER JOIN` # + [markdown] hidden=true # > An **inner join** will join two tables together and only keep rows if the _key is in both tables_ # + [markdown] hidden=true # ![](img/inner_join.png) # + [markdown] hidden=true # Example of an inner join: # # ```sql # SELECT # table1.column_name, # table2.different_column_name # FROM # table1 # INNER JOIN table2 # ON table1.shared_column_name = table2.shared_column_name # ``` # + [markdown] heading_collapsed=true hidden=true # ### Code Example for Inner Joins # + [markdown] hidden=true # Let's say we want to look at the different airplane routes # + hidden=true pd.read_sql(''' SELECT * FROM routes ''', conn) # - pd.read_sql(''' SELECT * FROM airlines ''', conn) # + [markdown] hidden=true # This is great but notice `airline_id`. It'd be nice to have some information about the airline for that route. # + [markdown] hidden=true # We can do an **inner join** to get this information! 
# + [markdown] heading_collapsed=true hidden=true # #### Inner Join Routes & Airline Data # + hidden=true pd.read_sql(''' SELECT * FROM routes INNER JOIN airlines ON routes.airline_id = airlines.id ''', conn) # + [markdown] hidden=true # We can also specify to only retain certain columns in the `SELECT` clause: # + hidden=true pd.read_sql(''' SELECT routes.source AS departing ,routes.dest AS destination ,routes.stops AS stops_before_destination ,airlines.name AS airline FROM routes INNER JOIN airlines ON routes.airline_id = airlines.id ''', conn) # + [markdown] heading_collapsed=true hidden=true # #### Note: Losing Data with Inner Joins # + [markdown] hidden=true # Since data rows are kept if _both_ tables have the key, some data can be lost # + hidden=true df_all_routes = pd.read_sql(''' SELECT * FROM routes ''', conn) df_routes_after_join = pd.read_sql(''' SELECT * FROM routes INNER JOIN airlines ON routes.airline_id = airlines.id ''', conn) # + hidden=true # Look at how the number of rows are different df_all_routes.shape, df_routes_after_join.shape # + [markdown] hidden=true # If you want to keep your data from at least one of your tables, you should use a left or right join instead of an inner join. # + [markdown] heading_collapsed=true hidden=true # ## `LEFT JOIN` # + [markdown] hidden=true # > A **left join** will join two tables together and but will keep all data from the first (left) table using the key provided. # + [markdown] hidden=true # ![](img/left_join.png) # + [markdown] hidden=true # Example of a left and right join: # # ```sql # SELECT # table1.column_name, # table2.different_column_name # FROM # table1 # LEFT JOIN table2 # ON table1.shared_column_name = table2.shared_column_name # ``` # + [markdown] heading_collapsed=true hidden=true # ### Code Example for Left Join # + [markdown] hidden=true # Recall our example using an inner join and how it lost some data since the key wasn't in both the `routes` _and_ `airlines` tables. # + hidden=true df_all_routes = pd.read_sql(''' SELECT * FROM routes ''', conn) # This will lose some data (some routes not included) df_routes_after_inner_join = pd.read_sql(''' SELECT * FROM routes INNER JOIN airlines ON routes.airline_id = airlines.id ''', conn) # The number of rows are different df_all_routes.shape, df_routes_after_inner_join.shape # + [markdown] hidden=true # If wanted to ensure we always had every route even if the key in `airlines` was not found, we could replace our `INNER JOIN` with a `LEFT JOIN`: # + hidden=true # This will include all the data from routes df_routes_after_left_join = pd.read_sql(''' SELECT * FROM routes LEFT JOIN airlines ON routes.airline_id = airlines.id ''', conn) df_routes_after_left_join.shape # + [markdown] heading_collapsed=true hidden=true # ## Exercise: Joins # + [markdown] hidden=true # Which airline has the most routes listed in our database? 
# + hidden=true pd.read_sql(''' SELECT airlines.name, COUNT() AS num_routes FROM routes LEFT JOIN airlines ON routes.airline_id = airlines.id GROUP BY airlines.id ORDER BY num_routes DESC ''', conn) # + [markdown] heading_collapsed=true hidden=true # ### Possible Solution # + [markdown] hidden=true # ```sql # SELECT # airlines.name AS airline, # COUNT() AS number_of_routes # -- We first need to get all the relevant info via a join # FROM # routes # -- LEFT JOIN since we want all routes (even if airline id is unknown) # LEFT JOIN airlines # ON routes.airline_id = airlines.id # -- We need to group by airline's ID # GROUP BY # airlines.id # ORDER BY # number_of_routes DESC # ``` # + [markdown] heading_collapsed=true # # Level Up: Execution Order # + [markdown] hidden=true # ```SQL # SELECT # COUNT(table2.col2) AS my_new_count # ,table1.col2 # FROM # table1 # JOIN table2 # ON table1.col1 = table2.col2 # WHERE # table1.col1 > 0 # GROUP BY # table2.col1 # ``` # + [markdown] hidden=true # 1. `From` # 2. `Where` # 3. `Group By` # 4. `Having` # 5. `Select` # 6. `Order By` # 7. `Limit`
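# + [markdown] hidden=true
# As a quick illustration (using the same `airlines` table as above), the query below touches every one of these clauses; the comment next to each clause gives its position in the logical execution order listed above.

# + hidden=true
pd.read_sql('''
SELECT                                   -- 5. pick and alias the output columns
    country
    ,COUNT(*) AS num_active_airlines
FROM                                     -- 1. choose the source table(s), applying any JOINs
    airlines
WHERE                                    -- 2. filter individual rows
    active = 'Y'
GROUP BY                                 -- 3. collapse the remaining rows into groups
    country
HAVING                                   -- 4. filter whole groups
    COUNT(*) > 30
ORDER BY                                 -- 6. sort the result
    num_active_airlines DESC
LIMIT 5                                  -- 7. keep only the first rows
''', conn)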
30.830703
6,108
15f7fc75453fa38073e96255ab3b94ea5ddb8f41
py
python
prediction/multitask/fine-tuning/function documentation generation/ruby/small_model.ipynb
victory-hash/CodeTrans
['MIT']
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/small_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="c9eStCoLX0pZ" # **<h3>Predict the documentation for ruby code using codeTrans multitask finetuning model</h3>** # <h4>You can make free prediction online through this # <a href="https://huggingface.co/SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune">Link</a></h4> (When using the prediction online, you need to parse and tokenize the code first.) # + [markdown] id="6YPrvwDIHdBe" # **1. Load necessry libraries including huggingface transformers** # + colab={"base_uri": "https://localhost:8080/"} id="6FAVWAN1UOJ4" outputId="10f60b60-81c5-414a-c5c1-be581ecfb187" # !pip install -q transformers sentencepiece # + id="53TAO7mmUOyI" from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline # + [markdown] id="xq9v-guFWXHy" # **2. Load the token classification pipeline and load it into the GPU if avilabile** # + colab={"base_uri": "https://localhost:8080/", "height": 316, "referenced_widgets": ["27e4e833f0ec45b883a68da8542f7fc3", "6c828235f68d4573a4a8f37e1d6ee731", "966f8e4a950d4bbda93adb6a78a079b0", "9291f747c8dc4f018bdad464e6d17ca6", "2a75c040ad434206898f503f26cc3396", "d59cf261aa394ee89494a26f77e856a9", "353222d84eab4e04861a7536acc588cb", "a6b36903519a4cb6baf3c6c15ec3ba15", "b14914fa47e74467af6964b96959231b", "a12eb3a80b3e432783cdd52be04b6ad2", "8c8a97d2539943f69ecd95de62285ffb", "7b467cd8294c4241a9dbafb521297b37", "98d463047b0946e28534b95dcdad4180", "687ea22f4316463ab5c40d13d12f85aa", "7abbf9f894394df98bc392be39c83f5d", "ed733a7e0e054ff2b1aea4f9485f13f0", "789d02a170644ac493853a302fcde884", "72b49e17a8aa40818136118cf2dc83a8", "5f7321e9ec254262b15b52b3084db0e0", "f45b5f9164364a4e94f025f5aabd66f0", "4b47350ebc914c7a8c6af0d0ad1435b1", "169b4d25ce764376839a0beb0716a91c", "8b79bac8fa2145ac89c30c0ac3cfbed4", "38e5ce91d4c94c3e862e2c6d16b4fb64", "ecebdbd791f84b6faf718851258357bc", "dd8efa263b5e464cb05f5e8b854fafc5", "cb0be8ce685f43efa88085b222a487c1", "301062c186e44b61974fb21c242bb256", "c5f08805f0394d459487f2c7eff13d53", "70a2200001554d68af35364019d3d18a", "2fdc1d85a295479e8585b3ee3c92ed03", "e61585c2cf2145aabfffa5e7b4ef20b8", "e10c6c66557c4b0686980802057617e2", "69e9d790314f499bb5951af040e26a58", "833dde6448084727b1a316ff71688aaf", "648fac6dd65142f9adeecabf5e57380f", "eb0faa40fb9548bfbcc6b60f645019fe", "678b1cb2aff04681b17cde2abc8abaf0", "9fb49c0a6b9a40b39726bdadf1900f98", "8333817df92046878389d8357e9dbcda"]} id="5ybX8hZ3UcK2" outputId="432cd8a1-2022-4502-e654-cbe6e65ea702" pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True), device=0 ) # + [markdown] id="hkynwKIcEvHh" # **3 Give the code for summarization, parse and tokenize it** # + id="nld-UUmII-2e" code = "def add(severity, progname, &block)\n return true if io.nil? 
|| severity < level\n message = format_message(severity, progname, yield)\n MUTEX.synchronize { io.write(message) }\n true\n end" #@param {type:"raw"} # + id="cJLeTZ0JtsB5" colab={"base_uri": "https://localhost:8080/"} outputId="75d2f532-d203-4e48-cdcf-3675963bb558" # !pip install tree_sitter # !git clone https://github.com/tree-sitter/tree-sitter-ruby # + id="hqACvTcjtwYK" from tree_sitter import Language, Parser Language.build_library( 'build/my-languages.so', ['tree-sitter-ruby'] ) RUBY_LANGUAGE = Language('build/my-languages.so', 'ruby') parser = Parser() parser.set_language(RUBY_LANGUAGE) # + id="LLCv2Yb8t_PP" def get_string_from_code(node, lines): line_start = node.start_point[0] line_end = node.end_point[0] char_start = node.start_point[1] char_end = node.end_point[1] if line_start != line_end: code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]])) else: code_list.append(lines[line_start][char_start:char_end]) def my_traverse(node, code_list): lines = code.split('\n') if node.child_count == 0: get_string_from_code(node, lines) elif node.type == 'string': get_string_from_code(node, lines) else: for n in node.children: my_traverse(n, code_list) return ' '.join(code_list) # + id="BhF9MWu1uCIS" colab={"base_uri": "https://localhost:8080/"} outputId="5abc5d42-25a0-4c97-c39e-b07c2978a89d" tree = parser.parse(bytes(code, "utf8")) code_list=[] tokenized_code = my_traverse(tree.root_node, code_list) print("Output after tokenization: " + tokenized_code) # + [markdown] id="sVBz9jHNW1PI" # **4. Make Prediction** # + colab={"base_uri": "https://localhost:8080/"} id="KAItQ9U9UwqW" outputId="a696ab61-6bc3-4d60-bb97-ca442a5fdf2d" pipeline([tokenized_code])
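# + [markdown]
# The pipeline call above displays the raw prediction object. A minimal sketch of pulling out the
# generated documentation string, assuming the standard transformers SummarizationPipeline output
# format (a list of dicts with a 'summary_text' key):
# +
result = pipeline([tokenized_code])
print(result[0]["summary_text"])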
54.367347
1,594
00516cd54e93f4aef4084eeaaba783a5a94f79ec
py
python
quant_finance_lectures/Lecture28-Market-Impact-Models.ipynb
jonrtaylor/quant-finance-lectures
['CC-BY-4.0']
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png"> # # © Copyright Quantopian Inc.<br> # © Modifications Copyright QuantRocket LLC<br> # Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode). # # <a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a> # # Market Impact Models # # By Dr. Michele Goe # # In this lecture we seek to clarify transaction costs and how they impact algorithm performance. By the end of this lecture you should be able to: # 1. Understand the attributes that influence transaction costs based on published market impact model research and our own experience # 2. Understand the impact of turnover rate, transaction costs, and leverage on your strategy performance # 3. Become familiar with how institutional quant trading teams think about and measure transaction cost. import numpy as np import matplotlib.pyplot as plt import pandas as pd import time # ## Intro to Transaction Costs # # # Transaction costs fall into two categories # * Direct (commissions and fees): explicit, easily measured, and in institutional trading, relatively "small" # * Indirect (market impact and spread costs): **the purpose of this lecture** # # Slippage is when the price 'slips' before the trade is fully executed, leading to the fill price being different from the price at the time of the order. # The attributes of a trade that our research shows have the most influence on slippage are: # 1. **Volatility** # 2. **Liquidity** # 3. **Relative order size** # 4. **Bid - ask spread** # ## Transaction Cost Impact on Portfolio Performance # # Let’s consider a hypothetical mid-frequency statistical arbitrage portfolio. (A mid-freqency strategy refers roughly to a daily turnover between $0.05$ - $0.67$. This represents a holding period between a day and a week. Statistical arbitrage refers to the use of computational algorithms to simultaneously buy and sell stocks according to a statistical model.) # # Algo Attribute| Qty # ---|---- # Holding Period (weeks) |1 # Leverage | 2 # AUM (million) | 100 # Trading Days per year | 252 # Fraction of AUM traded per day | 0.4 # # # This means we trade in and out of a new portfolio roughly 50 times a year. At 2 times leverage, on 100 million in AUM, we trade 20 billion dollars per year. # # **Q: For this level of churn what is the impact of 1 bps of execution cost to the fund’s returns?** # # This means for every basis point ($0.01\%$) of transaction cost we lose $2\%$ off algo performance. # # + jupyter={"outputs_hidden": false} def perf_impact(leverage, turnover , trading_days, txn_cost_bps): p = leverage * turnover * trading_days * txn_cost_bps/10000. return p # + jupyter={"outputs_hidden": false} print(perf_impact(leverage=2, turnover=0.4, trading_days=252, txn_cost_bps=1)) # - # ## How do institutional quant trading teams evaluate transaction cost ? # # Quantitiative institutional trading teams typically utilize execution tactics that aim to complete parent orders fully, while minimizing the cost of execution. To achieve this goal, parent orders are often split into a number of child orders, which are routed to different execution venues, with the goal to capture all the available liquidity and minimize the bid-ask spread. 
The parent-level execution price can be expressed as the volume-weighted average price of all child orders. # # **Q: What benchmark(s) should we compare our execution price to ?** # # *Example Benchmarks :* # * **Arrival Price** - the "decision" price of the algo, defined as the mid-quote at the time the algo placed the parent order (mid is the half-way point between the best bid and ask quotes) # * **Interval VWAP** - volume-weighted average price during the life of the order # * **T + 10 min** - reversion benchmark, price 10 min after the last fill vs execution price # * **T + 30 min** - reversion benchmark, price 30 min after the last fill vs execution price # * **Close** - reversion benchmark, price at close vs the execution price # * **Open** - momentum benchmark, price at open vs the execution price # * **Previous close** - momentum benchmark, price at previous close vs execution price # *Other metrics and definitions # # # $$ Metric = \frac{Side * (Benchmark - Execution\thinspace Price )* 100 * 100}{ Benchmark }$$ # # *Key Ideas* # * **Execution Price** - volume-weighted average price of all fills or child orders # * **Cost vs Arrival Price** - difference between the arrival price and the execution price, expressed in basis points. The idea with this benchmark is to compare the execution price against the decision price of the strategy. This cost is sometimes called "slippage" or "implementation shortfall." # # The reversion metrics give us an indication of our temporary impact after the order has been executed. Generally, we'd expect the stock price to revert a bit, upon our order completion, as our contribution to the buy-sell imbalance is reflected in the market. The momentum metrics give us an indication of the direction of price drift prior to execution. Often, trading with significant momentum can affect our ability to minimize the bid-ask spread costs. # # When executing an order, one of the primary tradeoffs to consider is timing risk vs. market impact: # * **Timing Risk** - risk of price drift and information leakage as interval between arrival mid quote and last fill increases. # * **Market Impact** - (high urgency) risk of moving the market by shortening the interval between arrival mid quote and last fill. # # Within this framework, neutral urgency of execution occurs at the intersection of market risk and market impact - in this case, each contributes the same to execution costs. # + jupyter={"outputs_hidden": false} x = np.linspace(0,1,101) risk = np.cos(x*np.pi) impact = np.cos(x* np.pi+ np.pi) fig,ax = plt.subplots(1) # Make your plot, set your axes labels ax.plot(x,risk) ax.plot(x,impact) ax.set_ylabel('Transaction Cost in bps', fontsize=15) ax.set_xlabel('Order Interval', fontsize=15) ax.set_yticklabels([]) ax.set_xticklabels([]) ax.grid(False) ax.text(0.09, -0.6, 'Timing Risk', fontsize=15, fontname="serif") ax.text(0.08, 0.6, 'Market Impact', fontsize=15, fontname="serif") plt.title('Timing Risk vs Market Impact Affect on Transaction Cost', fontsize=15) plt.show() # - # ## **Liquidity** # # Liquidity can be viewed through several lenses. Within the context of execution management, we can think of it as activity, measured in shares and USD traded, as well as frequency and size of trades executed in the market. "Good" liquidity is also achieved through a diverse number of market participants on both sides of the market. 
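# + [markdown]
# Before assessing liquidity in detail, here is a minimal sketch of the cost-vs-arrival ("slippage")
# metric defined earlier in this section. The prices below are made-up illustration values, not data
# from any source; the sign convention is side = +1 for buys and -1 for sells.
# +
def cost_vs_arrival_bps(side, benchmark, execution_price):
    # Implements the metric above: positive values mean the fill beat the benchmark,
    # negative values mean slippage (a cost) relative to it.
    return side * (benchmark - execution_price) * 100 * 100 / benchmark

# Buying at 100.05 against an arrival mid-quote of 100.00 shows up as -5 bps (5 bps of slippage).
print(round(cost_vs_arrival_bps(1, 100.00, 100.05), 2))
# -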
# # Assess Liquidity by: # * intraday volume curve # * percent of day's volume # * percent of average daily dollar volume in an interval # * cummulative intraday volume curve # * relative order size # # In general, liquidity is highest as we approach the close, and second highest at the open. Mid day has the lowest liquidity. Liquidity should also be viewed relative to your order size and other securities in the same sector and class. # + jupyter={"outputs_hidden": false} from quantrocket.master import get_securities from quantrocket import get_prices securities = get_securities(symbols='AAPL', vendors='usstock') AAPL = securities.index[0] data = get_prices('usstock-free-1min', sids=AAPL, data_frequency='minute', start_date='2016-01-01', end_date='2016-07-01', fields='Volume') dat = data.loc['Volume'][AAPL] # Combine separate Date and Time in index into datetime dat.index = pd.to_datetime(dat.index.get_level_values('Date').astype(str) + ' ' + dat.index.get_level_values('Time')) plt.subplot(211) dat['2016-04-14'].plot(title='Intraday Volume Profile') # intraday volume profile plot plt.subplot(212) (dat['2016-04-14'].resample('10t', closed='right').sum()/\ dat['2016-04-14'].sum()).plot(); # percent volume plot plt.title('Intraday Volume Profile, % Total Day'); # + jupyter={"outputs_hidden": false} df = pd.DataFrame(dat) # Apple minutely volume data df.columns = ['interval_vlm'] df_daysum = df.resample('d').sum() # take sum of each day df_daysum.columns = ['day_vlm'] df_daysum['day'] = df_daysum.index.date # add date index as column df['min_of_day']=(df.index.hour-9)*60 + (df.index.minute-30) # calculate minutes from open df['time']=df.index.time # add time index as column conversion = {'interval_vlm':'sum', 'min_of_day':'last', 'time':'last'} df = df.resample('10t', closed='right').apply(conversion) # apply conversions to columns at 10 min intervals df['day'] = df.index.date df = df.merge(df_daysum, how='left', on='day') # merge df and df_daysum dataframes df['interval_pct'] = df['interval_vlm'] / df['day_vlm'] # calculate percent of days volume for each row df.head() # + jupyter={"outputs_hidden": false} plt.scatter(df.min_of_day, df.interval_pct) plt.xlim(0,400) plt.xlabel('Time from the Open (minutes)') plt.ylabel('Percent Days Volume') # + jupyter={"outputs_hidden": false} grouped = df.groupby(df.min_of_day) grouped = df.groupby(df.time) # group by 10 minute interval times m = grouped.median() # get median values of groupby x = m.index y = m['interval_pct'] ax1 = (100*y).plot(kind='bar', alpha=0.75) # plot percent daily volume grouped by 10 minute interval times ax1.set_ylim(0,10); plt.title('Intraday Volume Profile'); ax1.set_ylabel('% of Day\'s Volume in Bucket'); # - # ## Relative Order Size # # As we increase relative order size at a specified participation rate, the time to complete the order increases. Let's assume we execute an order using VWAP, a scheduling strategy, which executes orders over a pre-specified time window, according to the projections of volume distribution throughout that time window: At 3% participation rate for VWAP execution, we require the entire day to trade if our order represents 3% of average daily volume. # # If we expect our algo to have high relative order sizes then we may want to switch to a liquidity management execution strategy when trading to ensure order completion by the end of the day. 
Liquidity management execution strategies have specific constraints for the urgency of execution, choice of execution venues and spread capture with the objective of order completion. Going back to our risk curves, we expect higher transaction costs the longer we trade. Therefore, the higher percent ADV of an order the more expensive to trade. # + jupyter={"outputs_hidden": false} data = get_prices('usstock-free-1min', sids=AAPL, data_frequency='minute', start_date='2016-01-01', end_date='2018-01-02', fields='Volume') dat = data.loc['Volume'][AAPL] # Combine separate Date and Time in index into datetime dat.index = pd.to_datetime(dat.index.get_level_values('Date').astype(str) + ' ' + dat.index.get_level_values('Time')) # + def relative_order_size(participation_rate, pct_ADV): fill_start = dat['2017-10-02'].index[0] # start order at 9:31 ADV20 = int(dat.resample("1d").sum()[-20:].mean()) # calculate 20 day ADV order_size = int(pct_ADV * ADV20) try : ftime = dat['2017-10-02'][(order_size * 1.0 / participation_rate)<=dat['2017-10-02'].cumsum().values].index[0] except: ftime = dat['2017-10-02'].index[-1] # set fill time to 4p fill_time = max(1,int((ftime - fill_start).total_seconds()/60.0)) return fill_time def create_plots(participation_rate, ax): df_pr = pd.DataFrame(data=np.linspace(0.0,0.1,100), columns = ['adv'] ) # create dataframe with intervals of ADV df_pr['pr'] = participation_rate # add participation rate column df_pr['fill_time'] = df_pr.apply(lambda row: relative_order_size(row['pr'],row['adv']), axis = 1) # get fill time ax.plot(df_pr['adv'],df_pr['fill_time'], label=participation_rate) # generate plot line with ADV and fill time fig, ax = plt.subplots() for i in [0.01,0.02,0.03,0.04,0.05,0.06,0.07]: # for participation rate values create_plots(i,ax) # generate plot line plt.ylabel('Time from Open (minutes)') plt.xlabel('Percent Average Daily Volume') plt.title('Trade Completion Time as Function of Relative Order Size and Participation Rate') plt.xlim(0.,0.04) ax.legend() # - # # ## Volatility # # Volatilty is a statistical measure of dispersion of returns for a security, calculated as the standard deviation of returns. The volatility of any given stock typically peaks at the open and therafter decreases until mid-day.The higher the volatility the more uncertainty in the returns. This uncertainty is an artifact of larger bid-ask spreads during the price discovery process at the start of the trading day. In contrast to liquidity, where we would prefer to trade at the open to take advantage of high volumes, to take advantage of low volatility we would trade at the close. # # We use two methods to calculate volatility for demonstration purposes, OHLC and, the most common, close-to-close. OHLC uses the Garman-Klass Yang-Zhang volatilty estimate that employs open, high, low, and close data. # # OHLC VOLATILITY ESTIMATION METHOD # # $$\sigma^2 = \frac{Z}{n} \sum \left[\left(\ln \frac{O_i}{C_{i-1}} \right)^2 + \frac{1}{2} \left( \ln \frac{H_i}{L_i} \right)^2 - (2 \ln 2 -1) \left( \ln \frac{C_i}{O_i} \right)^2 \right]$$ # # # # CLOSE TO CLOSE HISTORICAL VOLATILITY ESTIMATION METHOD # # Volatility is calculated as the annualised standard deviation of log returns as detailed in the equation below. 
# # $$ Log \thinspace return = x_1 = \ln (\frac{c_i + d_i}{c_i-1} ) $$ # where d_i = ordinary(not adjusted) dividend and ci is close price # $$ Volatilty = \sigma_x \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2 }$$ # # See end of notebook for references # + jupyter={"outputs_hidden": false} data = get_prices('usstock-free-1min', sids=AAPL, data_frequency='minute', start_date='2016-01-01', end_date='2016-07-01') df = data[AAPL].unstack(level='Field') # Combine separate Date and Time in index into datetime df.index = pd.to_datetime(df.index.get_level_values('Date').astype(str) + ' ' + df.index.get_level_values('Time')) df.head() # + jupyter={"outputs_hidden": false} def gkyz_var(open, high, low, close, close_tm1): # Garman Klass Yang Zhang extension OHLC volatility estimate return np.log(open/close_tm1)**2 + 0.5*(np.log(high/low)**2) \ - (2*np.log(2)-1)*(np.log(close/open)**2) def historical_vol(close_ret, mean_ret): # close to close volatility estimate return np.sqrt(np.sum((close_ret-mean_ret)**2)/390) # + jupyter={"outputs_hidden": false} df['min_of_day'] = (df.index.hour-9)*60 + (df.index.minute-30) # calculate minute from the open df['time'] = df.index.time # add column time index df['day'] = df.index.date # add column date index df.head() # + jupyter={"outputs_hidden": false} df['close_tm1'] = df.groupby('day')['Close'].shift(1) # shift close value down one row df.close_tm1 = df.close_tm1.fillna(df.Open) df['min_close_ret'] = np.log( df['Close'] /df['close_tm1']) # log of close to close close_returns = df.groupby('day')['min_close_ret'].mean() # daily mean of log of close to close new_df = df.merge(pd.DataFrame(close_returns), left_on ='day', right_index = True) # handle when index goes from 16:00 to 9:31: new_df['variance'] = new_df.apply( lambda row: historical_vol(row.min_close_ret_x, row.min_close_ret_y), axis=1) new_df.head() # + jupyter={"outputs_hidden": false} df_daysum = pd.DataFrame(new_df['variance'].resample('d').sum()) # get sum of intraday variances daily df_daysum.columns = ['day_variance'] df_daysum['day'] = df_daysum.index.date df_daysum.head() # + jupyter={"outputs_hidden": false} conversion = {'variance':'sum', 'min_of_day':'last', 'time':'last'} df = new_df.resample('10t', closed='right').apply(conversion) df['day'] = df.index.date df['time'] = df.index.time df.head() # + jupyter={"outputs_hidden": false} df = df.merge(df_daysum, how='left', on='day') # merge daily and intraday volatilty dataframes df['interval_pct'] = df['variance'] / df['day_variance'] # calculate percent of days volatility for each row df.head() # + jupyter={"outputs_hidden": false} plt.scatter(df.min_of_day, df.interval_pct) plt.xlim(0,400) plt.ylim(0,) plt.xlabel('Time from Open (minutes)') plt.ylabel('Interval Contribution of Daily Volatility') plt.title('Probabilty Distribution of Daily Volatility ') # + jupyter={"outputs_hidden": false} import datetime grouped = df.groupby(df.min_of_day) grouped = df.groupby(df.time) # groupby time m = grouped.median() # get median x = m.index y = m['interval_pct'][datetime.time(9,30):datetime.time(15,59)] (100*y).plot(kind='bar', alpha=0.75);# plot interval percent of median daily volatility plt.title('Intraday Volatility Profile') ax1.set_ylabel('% of Day\'s Variance in Bucket'); # - # ## Bid-Ask Spread # # # The following relationships between bid-ask spread and order attributes are seen in our live trading data: # # * As **market cap ** increases we expect spreads to decrease. Larger companies tend to exhibit lower bid-ask spreads. 
# # * As **volatility ** increases we expect spreads to increase. Greater price uncertainty results in wider bid-ask spreads. # # * As **average daily dollar volume ** increases, we expect spreads to decrease. Liquidity tends to be inversely proportional to spreads, due to larger number of participants and more frequent updates to quotes. # # * As **price ** increases, we expect spreads to decrease (similar to market cap), although this relationship is not as strong. # # * As **time of day ** progresses we expect spreads to decrease. During early stages of a trading day, price discovery takes place. in contrast, at market close order completion is the priority of most participants and activity is led by liquidity management, rather than price discovery. # # The Trading Team developed a log-linear model fit to our live data that predicts the spread for a security with which we have the above listed attributes. # + def model_spread(time, vol, mcap = 1.67 * 10 ** 10, adv = 84.5, px = 91.0159): time_bins = np.array([0.0, 960.0, 2760.0, 5460.0, 21660.0]) #seconds from market open time_coefs = pd.Series([0.0, -0.289, -0.487, -0.685, -0.952]) vol_bins = np.array([0.0, .1, .15, .2, .3, .4]) vol_coefs = pd.Series([0.0, 0.251, 0.426, 0.542, 0.642, 0.812]) mcap_bins = np.array([0.0, 2.0, 5.0, 10.0, 25.0, 50.0]) * 10 ** 9 mcap_coefs = pd.Series([0.291, 0.305, 0.0, -0.161, -0.287, -0.499]) adv_bins = np.array([0.0, 50.0, 100.0, 150.0, 250.0, 500.0]) * 10 ** 6 adv_coefs = pd.Series([0.303, 0.0, -0.054, -0.109, -0.242, -0.454]) px_bins = np.array([0.0, 28.0, 45.0, 62.0, 82.0, 132.0]) px_coefs = pd.Series([-0.077, -0.187, -0.272, -0.186, 0.0, 0.380]) return np.exp(1.736 +\ time_coefs[np.digitize(time, time_bins) - 1] +\ vol_coefs[np.digitize(vol, vol_bins) - 1] +\ mcap_coefs[np.digitize(mcap, mcap_bins) - 1] +\ adv_coefs[np.digitize(adv, adv_bins) - 1] +\ px_coefs[np.digitize(px, px_bins) - 1]) # - # ### Predict the spread for the following order : # * Stock: DPS # * Qty: 425 shares # * Time of day : 9:41 am July 19, 2017, 600 seconds from open # * Market Cap : 1.67e10 # * Volatility: 18.8% # * ADV : 929k shares ; 84.5M dollars # * Avg Price : 91.0159 # + jupyter={"outputs_hidden": false} t = 10 * 60 vlty = 0.188 mcap = 1.67 * 10 ** 10 adv = 84.5 *10 price = 91.0159 print(model_spread(t, vlty, mcap, adv, price), 'bps') # + jupyter={"outputs_hidden": false} x = np.linspace(0,390*60) # seconds from open shape (50,) y = np.linspace(.01,.7) # volatility shape(50,) mcap = 1.67 * 10 ** 10 adv = 84.5 px = 91.0159 vlty_coefs = pd.Series([0.0, 0.251, 0.426, 0.542, 0.642, 0.812]) vlty_bins = np.array([0.0, .1, .15, .2, .3, .4]) time_bins = np.array([0.0, 960.0, 2760.0, 5460.0, 21660.0]) #seconds from market open time_coefs = pd.Series([0.0, -0.289, -0.487, -0.685, -0.952]) mcap_bins = np.array([0.0, 2.0, 5.0, 10.0, 25.0, 50.0]) * 10 ** 9 mcap_coefs = pd.Series([0.291, 0.305, 0.0, -0.161, -0.287, -0.499]) adv_bins = np.array([0.0, 50.0, 100.0, 150.0, 250.0, 500.0]) * 10 ** 6 adv_coefs = pd.Series([0.303, 0.0, -0.054, -0.109, -0.242, -0.454]) px_bins = np.array([0.0, 28.0, 45.0, 62.0, 82.0, 132.0]) px_coefs = pd.Series([-0.077, -0.187, -0.272, -0.186, 0.0, 0.380]) # shape (1, 50) time_contrib = np.take(time_coefs, np.digitize(x, time_bins) - 1).values.reshape((1, len(x))) # shape (50, 1) vlty_contrib = np.take(vlty_coefs, np.digitize(y, vlty_bins) - 1).values.reshape((len(y), 1)) # scalar mcap_contrib = mcap_coefs[np.digitize((mcap,), mcap_bins)[0] - 1] # scalar adv_contrib = adv_coefs[np.digitize((adv,), 
adv_bins)[0] - 1] # scalar px_contrib = px_coefs[np.digitize((px,), px_bins)[0] - 1] z_scalar_contrib = 1.736 + mcap_contrib + adv_contrib + px_contrib Z = np.exp(z_scalar_contrib + time_contrib + vlty_contrib) cmap=plt.get_cmap('jet') X, Y = np.meshgrid(x,y) CS = plt.contour(X/60,Y,Z, linewidths=3, cmap=cmap, alpha=0.8); plt.clabel(CS) plt.xlabel('Time from the Open (Minutes)') plt.ylabel('Volatility') plt.title('Spreads for varying Volatility and Trading Times (mcap = 16.7B, px = 91, adv = 84.5M)') plt.show() # - # ## **Quantifying Market Impact** # # Theoritical Market Impact models attempt to estimate transaction costs of trading by utilizing order attributes. There are many published market impact models. Here are some examples: # # 1. Zipline Volume Slippage Model # 2. Almgren et al 2005 # 3. Kissell et al. 2004 # 4. J.P. Morgan Model 2010 # # # The models have a few commonalities such as the inclusion of relative order size, volatility as well as custom parameters calculated from observed trades.There are also notable differences in the models such as (1) JPM explictly calls out spread impact, (2) Almgren considers fraction of outstanding shares traded daily, (3) Q Slipplage Model does not consider volatility, and (4) Kissel explicit parameter to proportion temporary and permenant impact, to name a few. # # The academic models have notions of temporary and permanant impact. **Temporary Impact** captures the impact on transaction costs due to urgency or aggressiveness of the trade. While **Permanant Impact** estimates with respect to information or short term alpha in a trade. # # ### Almgren et al. model (2005) # # This model assumes the initial order, X, is completed at a uniform rate of trading over a volume time # interval T. That is, the trade rate in volume units is v = X/T, and is held # constant until the trade is completed. Constant rate in these units is # equivalent to VWAP execution during the time of execution. # # # Almgren et al. model these two terms as # # # # $$\text{tcost} = 0.5 \overbrace{\gamma \sigma \frac{X}{V}\left(\frac{\Theta}{V}\right)^{1/4}}^{\text{permanent}} + \overbrace{\eta \sigma \left| \frac{X}{VT} \right|^{3/5}}^{\text{temporary}} $$ # # # where $\gamma$ and $\eta$ are the "universal coefficients of market impact" and estimated by the authors using a large sample of institutional trades; $\sigma$ is the daily volatility of the stock; $\Theta$ is the total shares outstanding of the stock; $X$ is the number of shares you would like to trade (unsigned); $T$ is the time width in % of trading time over which you slice the trade; and $V$ is the average daily volume ("ADV") in shares of the stock. The interpretation of $\frac{\Theta}{V}$ is the inverse of daily "turnover", the fraction of the company's value traded each day. # # For reference, FB has 2.3B shares outstanding, its average daily volume over 20 days is 18.8M therefore its inverse turnover is approximately 122, put another way, it trades less than 4% of outstanding shares daily. # # # # ### Potential Limitations # # Note that the Almgren et al (2005) and Kissell, Glantz and Malamut (2004) papers were released prior to the adoption and phased implementation of [__Reg NMS__](https://www.sec.gov/rules/final/34-51808.pdf), prior to the "quant meltdown" of August 2007, prior to the financial crisis hitting markets in Q4 2008, and other numerous developments in market microstructure. 
# # + jupyter={"outputs_hidden": false} def perm_impact(pct_adv, annual_vol_pct = 0.25, inv_turnover = 200): gamma = 0.314 return 10000 * gamma * (annual_vol_pct / 16) * pct_adv * (inv_turnover)**0.25 def temp_impact(pct_adv, minutes, annual_vol_pct = 0.25, minutes_in_day = 60*6.5): eta = 0.142 day_frac = minutes / minutes_in_day return 10000 * eta * (annual_vol_pct / 16) * abs(pct_adv/day_frac)**0.6 def tc_bps(pct_adv, minutes, annual_vol_pct = 0.25, inv_turnover = 200, minutes_in_day = 60*6.5): perm = perm_impact(pct_adv, annual_vol_pct=annual_vol_pct, inv_turnover=inv_turnover) temp = temp_impact(pct_adv, minutes, annual_vol_pct=annual_vol_pct, minutes_in_day=minutes_in_day) return 0.5 * perm + temp # - # So if we are trading 10% of ADV of a stock with a daily vol of 1.57% and we plan to do this over half the day, we would expect 8bps of TC (which is the Almgren estimate of temporary impact cost in this scenario). From the paper, this is a sliver of the output at various trading speeds: # # Variable | IBM # ------------- | ------------- # Inverse turnover ($\Theta/V$) | 263 # Daily vol ($\sigma$) | 1.57% # Trade % ADV (X/V) | 10% # # Item | Fast | Medium | Slow # -----|------|--------|------- # Permanent Impact (bps) | 20 | 20 | 20 # Trade duration (day fraction %) | 10% | 20% | 50% # Temporary Impact (bps) | 22 | 15 | 8 # Total Impact (bps) | 32 | 25 | 18 # # + jupyter={"outputs_hidden": false} print('Cost to trade Fast (First 40 mins):', round(tc_bps(pct_adv=0.1, annual_vol_pct=16*0.0157, inv_turnover=263, minutes=0.1*60*6.5),2), 'bps') print('Cost to trade Medium (First 90 mins):', round(tc_bps(pct_adv=0.1, annual_vol_pct=16*0.0157, inv_turnover=263, minutes=0.2*60*6.5),2), 'bps' ) print('Cost to trade Slow by Noon:', round(tc_bps(pct_adv=0.1, annual_vol_pct=16*0.0157, inv_turnover=263, minutes=0.5*60*6.5),2), 'bps') # - # Trading 0.50% of ADV of a stock with a daily vol of 1.57% and we plan to do this over 30 minutes... # + jupyter={"outputs_hidden": false} print(round(tc_bps(pct_adv=0.005, minutes=30, annual_vol_pct=16*0.0157),2)) # - # Let's say we wanted to trade \$2M notional of Facebook, and we are going to send the trade to an execution algo (e.g., VWAP) to be sliced over 15 minutes. # + jupyter={"outputs_hidden": false} trade_notional = 2000000 # 2M notional stock_price = 110.89 # dollars per share shares_to_trade = trade_notional/stock_price stock_adv_shares = 30e6 # 30 M stock_shares_outstanding = 275e9/110.89 expected_tc = tc_bps(shares_to_trade/stock_adv_shares, minutes=15, annual_vol_pct=0.22) print("Expected tc in bps: %0.2f" % expected_tc) print("Expected tc in $ per share: %0.2f" % (expected_tc*stock_price / 10000)) # - # And to motivate some intuition, at the total expected cost varies as a function of how much % ADV we want to trade in 30 minutes. # + jupyter={"outputs_hidden": false} x = np.linspace(0.0001,0.03) plt.plot(x*100,tc_bps(x,30,0.25), label=r"$\sigma$ = 25%"); plt.plot(x*100,tc_bps(x,30,0.40), label=r"$\sigma$ = 40%"); plt.ylabel('tcost in bps') plt.xlabel('Trade as % of ADV') plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 25% and 40%; time = 15 minutes'); plt.legend(); # - # And let's look a tcost as a function of trading time and % ADV. 
# + jupyter={"outputs_hidden": false} x = np.linspace(0.001,0.03) y = np.linspace(5,30) X, Y = np.meshgrid(x,y) Z = tc_bps(X,Y,0.20) levels = np.linspace(0.0, 60, 30) cmap=plt.get_cmap('Reds') cmap=plt.get_cmap('hot') cmap=plt.get_cmap('jet') plt.subplot(1,2,1); CS = plt.contour(X*100, Y, Z, levels, linewidths=3, cmap=cmap, alpha=0.55); plt.clabel(CS); plt.ylabel('Trading Time in Minutes'); plt.xlabel('Trade as % of ADV'); plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 20%'); plt.subplot(1,2,2); Z = tc_bps(X,Y,0.40) CS = plt.contour(X*100, Y, Z, levels, linewidths=3, cmap=cmap, alpha=0.55); plt.clabel(CS); plt.ylabel('Trading Time in Minutes'); plt.xlabel('Trade as % of ADV'); plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 40%'); # - # Alternatively, we might want to get some intuition as to if we wanted to limit our cost, how does the trading time vary versus % of ADV. # + jupyter={"outputs_hidden": false} x = np.linspace(0.001,0.03) # % ADV y = np.linspace(1,60*6.5) # time to trade X, Y = np.meshgrid(x, y) levels = np.linspace(0.0, 390, 20) cmap=plt.get_cmap('Reds') cmap=plt.get_cmap('hot') cmap=plt.get_cmap('jet') plt.subplot(1,2,1); Z = tc_bps(X,Y,0.20) plt.contourf(X*100, Z, Y, levels, cmap=cmap, alpha=0.55); plt.title(r'Trading Time in Minutes; $\sigma$ = 20%'); plt.xlabel('Trade as % of ADV'); plt.ylabel('tcost in Basis Points of Trade Value'); plt.ylim(5,20) plt.colorbar(); plt.subplot(1,2,2); Z = tc_bps(X,Y,0.40) plt.contourf(X*100, Z, Y, levels, cmap=cmap, alpha=0.55); plt.title(r'Trading Time in Minutes; $\sigma$ = 40%'); plt.xlabel('Trade as % of ADV'); plt.ylabel('tcost in Basis Points of Trade Value'); plt.ylim(5,20); plt.colorbar(); # - # ### The Breakdown: Permanent and Temporary # # For a typical stock, let's see how the tcost is broken down into permanent and temporary. # + jupyter={"outputs_hidden": false} minutes = 30 x = np.linspace(0.0001,0.03) f, (ax1, ax2) = plt.subplots(ncols=2, sharex=True, sharey=True) f.subplots_adjust(hspace=0.15) p = 0.5*perm_impact(x,0.20) t = tc_bps(x,minutes,0.20) ax1.fill_between(x*100, p, t, color='b', alpha=0.33); ax1.fill_between(x*100, 0, p, color='k', alpha=0.66); ax1.set_ylabel('tcost in bps') ax1.set_xlabel('Trade as % of ADV') ax1.set_title(r'tcost in bps of Trade Value; $\sigma$ = 20%; time = 15 minutes'); p = 0.5*perm_impact(x, 0.40) t = tc_bps(x,minutes, 0.40) ax2.fill_between(x*100, p, t, color='b', alpha=0.33); ax2.fill_between(x*100, 0, p, color='k', alpha=0.66); plt.xlabel('Trade as % of ADV') plt.title(r'tcost in bps of Trade Value; $\sigma$ = 40%; time = 15 minutes'); # - # ### Kissell et al Model (2004) # # This model assumes there is a theoretical instaenous impact cost $I^*$ incurred by the investor if all shares $Q$ were released to the market. 
# # $$ MI_{bp} = b_1 I^* POV^{a_4} + (1-b_1)I^*$$ # # # $$ I^* = a_1 (\frac{Q}{ADV})^{a_2} \sigma^{a_3}$$ # # $$POV = \frac{Q}{Q+V}$$ # # * $I^*$ is instanteous impact # * $POV$ is percentage of volume trading rate # * $V$ is the expected volume in the interval of trading # * $b_1$ is the temporary impact parameter # * $ADV$ is 30 day average daily volume # * $Q$ is order size # # # Parameter | Fitted Values # ------------- | ------------- # $b_1$ | 0.80 # $a_1$ | 750 # $a_2$ | 0.50 # $a_3$ | 0.75 # $a_4$ | 0.50 # + jupyter={"outputs_hidden": false} def kissell(adv, annual_vol, interval_vol, order_size): b1, a1, a2, a3, a4 = 0.9, 750., 0.2, 0.9, 0.5 i_star = a1 * ((order_size/adv)**a2) * annual_vol**a3 PoV = order_size/(order_size + adv) return b1 * i_star * PoV**a4 + (1 - b1) * i_star # + jupyter={"outputs_hidden": false} print(kissell(adv=5*10**6, annual_vol=0.2, interval_vol=adv * 0.06, order_size=0.01 * adv ), 'bps') # + jupyter={"outputs_hidden": false} x = np.linspace(0.0001,0.1) plt.plot(x,kissell(5*10**6,0.1, 2000*10**3, x*2000*10**3), label=r"$\sigma$ = 10%"); plt.plot(x,kissell(5*10**6,0.25, 2000*10**3, x*2000*10**3), label=r"$\sigma$ = 25%"); plt.ylabel('tcost in bps') plt.xlabel('Trade as % of ADV') plt.title(r'tcost in Basis Points of Trade Value; $\sigma$ = 25% and 40%; time = 15 minutes'); plt.legend(); # - # ## The J.P. Morgan Market Impact Model # # # $$MI(bps) = I \times \omega \times \frac{2 \times PoV}{1 + PoV} + (1-\omega) \times I + S_c$$ # Where # # $$I = \alpha \times PoV^\beta \times Volatility^\gamma$$ # # * $\omega$ is the fraction of temporary impact (liquidity cost) # * $\alpha$ is a scaling parameter # * $PoV$ is relative order size as fraction of average daily volume # * $S_c$ is the spread ( basis point difference between the bid and ask ) # # For US equities, the fitted parameters as of June 2016 are # # Parameter | Fitted Value # ------|----- # b ($\omega$) | 0.931 # a1 ($\alpha$)| 168.5 # a2 ($\beta$) | 0.1064 # a3 ($\gamma$) | 0.9233 # # + def jpm_mi(size_shrs, adv, day_frac=1.0, spd=5, spd_frac=0.5, ann_vol=0.25, omega=0.92, alpha=350, beta=0.370, gamma=1.05): PoV = (size_shrs/(adv*day_frac)) I = alpha*(PoV**beta)*(ann_vol**gamma) MI = I*omega*(2*PoV)/(1+PoV) + (1-omega)*I + spd*spd_frac return MI def jpm_mi_pct(pct_adv, **kwargs): return jpm_mi(pct_adv, 1.0, **kwargs) # - # Let's assume the following order: # * Buy 100,000 XYZ, trading at 10% of the volume # * XYZ's ADV = 1,000,000 shares # * XYZ Annualized Volatility = 25% # * XYZ's Average Spread = 5 bps # + jupyter={"outputs_hidden": false} spy_adv = 85603411.55 print(round(jpm_mi(size_shrs=10000, adv=1e6),2), 'bps') # 1% pct ADV order print(round(jpm_mi(size_shrs=0.05*spy_adv, adv=spy_adv, spd=5, day_frac=1.0),2), 'bps') # 5% pct ADV of SPY order # - # ## Zipline Volume Share Slippage # # The Zipline `VolumeShareSlippage` model ([API Reference](https://www.quantrocket.com/docs/api/#zipline.finance.slippage.VolumeShareSlippage)) expressed in the style of the equation below # # $$\text{tcost} = 0.1 \left| \frac{X}{VT} \right|^2 $$ # # where $X$ is the number of shares you would like to trade; $T$ is the time width of the bar in % of a day; $V$ is the ADV of the stock. 
def tc_Zipline_vss_bps(pct_adv, minutes=1.0, minutes_in_day=60*6.5): day_frac = minutes / minutes_in_day tc_pct = 0.1 * abs(pct_adv/day_frac)**2 return tc_pct*10000 # To reproduce the given examples, we trade over a bar # + jupyter={"outputs_hidden": false} print(tc_Zipline_vss_bps(pct_adv=0.1/390, minutes=1)) print(tc_Zipline_vss_bps(pct_adv=0.25/390, minutes=1)) # - # As this model is convex, it gives very high estimates for large trades. # + jupyter={"outputs_hidden": false} print(tc_Zipline_vss_bps(pct_adv=0.1, minutes=0.1*60*6.5)) print(tc_Zipline_vss_bps(pct_adv=0.1, minutes=0.2*60*6.5)) print(tc_Zipline_vss_bps(pct_adv=0.1, minutes=0.5*60*6.5)) # - # Though for small trades, the results are comparable. # + jupyter={"outputs_hidden": false} print(tc_bps(pct_adv=0.005, minutes=30, annual_vol_pct=0.2)) print(tc_Zipline_vss_bps(pct_adv=0.005, minutes=30)) # + jupyter={"outputs_hidden": false} x = np.linspace(0.0001, 0.01) plt.plot(x*100,tc_bps(x, 30, 0.20), label=r"Almgren $\sigma$ = 20%"); plt.plot(x*100,tc_bps(x, 30, 0.40), label=r"Almgren $\sigma$ = 40%"); plt.plot(x*100,tc_Zipline_vss_bps(x, minutes=30),label="Zipline VSS"); plt.plot(x*100,jpm_mi_pct(x, ann_vol=0.2), label=r"JPM MI1 $\sigma$ = 20%"); plt.plot(x*100,jpm_mi_pct(x, ann_vol=0.4), label=r"JPM MI1 $\sigma$ = 40%"); plt.plot(x*100,kissell(5*10**6,0.20, 2000*10**3, x*2000*10**3), label=r"Kissell $\sigma$ = 20%"); plt.plot(x*100,kissell(5*10**6,0.40, 2000*10**3, x*2000*10**3), label=r"Kissell $\sigma$ = 40%", color='black'); plt.ylabel('tcost in bps') plt.xlabel('Trade as % of ADV') plt.title('tcost in Basis Points of Trade Value; time = 30 minutes'); plt.legend(); # - # ## Conclusions # # ### The following order atttributes leads to higher market impact: # * Higher relative order size # * Trading illquid names # * Trading names with lower daily turnover (in terms of shares outstanding) # * Shorter trade duration # * Higher volatility names # * More urgency or higher POV # * Short term alpha # * Trading earlier in the day # * Trading names with wider spreads # * Trading lower ADV names or on days when market volume is down # # ## References: # # * Almgren, R., Thum, C., Hauptmann, E., & Li, H. (2005). Direct estimation of equity market impact. Risk, 18(7), 5862. # # * Bennett, C. and Gil, M.A. (2012, Februrary) Measuring Historic Volatility, Santander Equity Derivatives Europe Retreived from: (http://www.todaysgroep.nl/media/236846/measuring_historic_volatility.pdf) # # * Garman, M. B., & Klass, M. J. (1980). On the estimation of security price volatilities from historical data. Journal of business, 67-78. # # * Kissell, R., Glantz, M., & Malamut, R. (2004). A practical framework for estimating transaction costs and developing optimal trading strategies to achieve best execution. Finance Research Letters, 1(1), 35-46. # # * Zipline Slippage Model see: https://www.quantrocket.com/docs/api/#zipline.finance.slippage.VolumeShareSlippage # # --- # # **Next Lecture:** [Universe Selection](Lecture29-Universe-Selection.ipynb) # # [Back to Introduction](Introduction.ipynb) # --- # # *This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). 
Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
43.639115
1,167
228e3881b920dafb7400affd13565c2d9a68a827
py
python
Automate the Boring Stuff with Python Ch10.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
['MIT']
"# ---\n# jupyter:\n# jupytext:\n# text_representation:\n# extension: .py\n# forma(...TRUNCATED)
116.259259
1,248
8f14ac09a097bac974c2f4009fd66a622a9fd535
py
python
notebooks/drug-efficacy/main.ipynb
shrikant9793/notebooks
['Apache-2.0']
"# ---\n# jupyter:\n# jupytext:\n# text_representation:\n# extension: .py\n# forma(...TRUNCATED)
88.62037
2,600
57a64df41aaa9ca5408c90fee3ff804cdf7bef30
py
python
MNIST.ipynb
Fifth-marauder/Kaggle
['Apache-2.0']
"# ---\n# jupyter:\n# jupytext:\n# text_representation:\n# extension: .py\n# forma(...TRUNCATED)
62.957265
7,233

Dataset Card for "jupyter_python_max_line_length_1000"

More Information needed
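
A minimal way to load this dataset with the Hugging Face datasets library is sketched below. The repository id is a placeholder (the owning namespace is not shown on this page), the split name is assumed to be "train", and the "content" field name is inferred from the preview rows above.

from datasets import load_dataset

# Placeholder repo id: replace <namespace> with the dataset's actual owner on the Hub.
ds = load_dataset("<namespace>/jupyter_python_max_line_length_1000", split="train")

# Each row is assumed to carry one notebook, converted to jupytext-style Python, in its "content" field.
print(ds[0]["content"][:300])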
