WEBVTT Kind: captions; Language: en-US
NOTE
Created on 2024-02-07T20:57:30.0567947Z by ClassTranscribe
00:02:02.640 --> 00:02:03.590
Good morning, everybody.
00:02:07.770 --> 00:02:08.170
So.
00:02:09.360 --> 00:02:12.270
I lost my HDMI connector so the slides
00:02:12.270 --> 00:02:14.740
are a little stretched out but still
00:02:14.740 --> 00:02:15.230
visible.
00:02:15.860 --> 00:02:17.190
I guess that's what it does with VGA.
00:02:18.980 --> 00:02:19.570
All right.
00:02:19.570 --> 00:02:22.390
So last class we learned about
00:02:22.390 --> 00:02:24.700
Perceptrons and MLPs.
00:02:25.620 --> 00:02:28.410
So we talked about how Perceptrons are
00:02:28.410 --> 00:02:30.340
linear prediction models and really the
00:02:30.340 --> 00:02:32.070
only difference between a Perceptron
00:02:32.070 --> 00:02:32.760
and a
00:02:33.920 --> 00:02:36.330
logistic regressor is that often people
00:02:36.330 --> 00:02:38.290
will draw a Perceptron in terms of these
00:02:38.290 --> 00:02:40.210
inputs and weights and outputs.
00:02:40.210 --> 00:02:40.450
So.
00:02:41.140 --> 00:02:43.110
It's almost more of a frame of thought than
00:02:43.110 --> 00:02:44.060
a different algorithm.
00:02:45.880 --> 00:02:48.580
MLPs are nonlinear prediction
00:02:48.580 --> 00:02:51.510
models, so they're
00:02:51.510 --> 00:02:54.080
basically Perceptrons stacked on top of
00:02:54.080 --> 00:02:54.805
each other.
00:02:54.805 --> 00:02:57.240
So given some inputs, you predict some
00:02:57.240 --> 00:02:59.030
intermediate values in the inner
00:02:59.030 --> 00:02:59.460
layers.
00:03:00.160 --> 00:03:01.250
And then they go through some
00:03:01.250 --> 00:03:03.830
nonlinearity like a Sigmoid or ReLU.
00:03:04.470 --> 00:03:06.970
And then from those intermediate values
00:03:06.970 --> 00:03:08.940
you then predict the next layer of
00:03:08.940 --> 00:03:10.100
values or the Output.
00:03:11.890 --> 00:03:13.780
And MLPs, multilayer
00:03:13.780 --> 00:03:17.090
Perceptrons, can model more complicated
00:03:17.090 --> 00:03:18.995
functions, but they're harder to
00:03:18.995 --> 00:03:19.400
optimize.
00:03:19.400 --> 00:03:21.830
So while a Perceptron is convex, you
00:03:21.830 --> 00:03:24.180
can optimize it kind of perfectly to
00:03:24.180 --> 00:03:24.880
some precision.
00:03:25.860 --> 00:03:27.990
An MLP is very nonconvex.
00:03:27.990 --> 00:03:31.280
If you were to plot the
00:03:31.280 --> 00:03:34.400
loss versus the weights, it would be
00:03:34.400 --> 00:03:35.540
really bumpy.
00:03:35.540 --> 00:03:37.448
There's lots of different local minima
00:03:37.448 --> 00:03:41.360
within that loss surface,
00:03:41.360 --> 00:03:43.090
and that makes it harder to optimize.
00:03:45.090 --> 00:03:47.204
The way that you optimize it, the way
00:03:47.204 --> 00:03:48.640
that you optimize Perceptrons
00:03:48.640 --> 00:03:52.210
classically, as well as MLPs, is by
00:03:52.210 --> 00:03:54.310
stochastic gradient descent where you
00:03:54.310 --> 00:03:56.170
iterate over batches of data and you
00:03:56.170 --> 00:03:56.590
compute
00:03:57.370 --> 00:03:59.370
how you could change those weights in
00:03:59.370 --> 00:04:01.390
order to reduce the loss a little bit
00:04:01.390 --> 00:04:03.235
on that data and then take a step in
00:04:03.235 --> 00:04:03.850
that direction.
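
NOTE
A generic restatement of the stochastic gradient descent step just
described (the notation is mine, not necessarily the course's): for
each batch,
    w_new = w_old - eta * grad_w(batch_loss)
where eta is the learning rate (step size) and grad_w is the gradient
of the batch loss with respect to the weights.
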
00:04:07.300 --> 00:04:08.570
So there is another.
00:04:10.050 --> 00:04:10.970
Sorry, one sec.
00:04:10.970 --> 00:04:12.120
OK, I'll leave it.
00:04:12.830 --> 00:04:14.370
Yeah, it's a little hard to see, but
00:04:14.370 --> 00:04:16.930
anyway, so there's another application
00:04:16.930 --> 00:04:19.640
of MLPs I want to talk about, and this
00:04:19.640 --> 00:04:21.500
is actually one of the stretch goals
00:04:21.500 --> 00:04:23.720
in the homework, or part of
00:04:23.720 --> 00:04:25.349
this is a stretch goal in the homework.
00:04:26.330 --> 00:04:27.020
So.
00:04:28.410 --> 00:04:31.000
So the idea here is to use an MLP
00:04:31.770 --> 00:04:35.970
in order to encode data or images.
00:04:37.120 --> 00:04:38.690
So you just have.
00:04:38.690 --> 00:04:41.140
The concept is kind of simple.
00:04:41.140 --> 00:04:43.670
You have this network, it takes as
00:04:43.670 --> 00:04:44.690
input
00:04:45.700 --> 00:04:47.550
positional features, so this could be
00:04:47.550 --> 00:04:48.960
like a pixel position.
00:04:50.200 --> 00:04:53.110
And then you have some transform on it,
00:04:53.110 --> 00:04:54.400
which I'll talk about in a moment, but
00:04:54.400 --> 00:04:55.178
you could just have it.
00:04:55.178 --> 00:04:57.040
In the simplest case, the Input is just
00:04:57.040 --> 00:04:58.310
two pixel positions.
00:04:59.280 --> 00:05:01.760
And then the output is the color the
00:05:01.760 --> 00:05:04.370
red, green and blue value of the given
00:05:04.370 --> 00:05:04.850
pixel.
00:05:06.700 --> 00:05:11.380
And so this paper's experiments
00:05:11.380 --> 00:05:14.600
NeRF, which was sort of.
00:05:14.600 --> 00:05:16.170
There's another related paper, Fourier
00:05:16.170 --> 00:05:18.360
Features, which explains some aspect of
00:05:18.360 --> 00:05:18.490
it.
00:05:19.540 --> 00:05:21.350
They just have an L2 loss.
00:05:21.350 --> 00:05:23.020
So you have at the end of
00:05:23.020 --> 00:05:26.610
this some Sigmoid that maps values
00:05:26.610 --> 00:05:28.190
into zero to one, and then you have
00:05:28.190 --> 00:05:31.062
an L2 loss on what was the color that
00:05:31.062 --> 00:05:33.063
you predicted versus the true color of
00:05:33.063 --> 00:05:33.720
the pixel.
00:05:34.570 --> 00:05:36.870
And based on that you can like compress
00:05:36.870 --> 00:05:38.460
an image, you can encode an image in
00:05:38.460 --> 00:05:40.180
the network, which can make it like a
00:05:40.180 --> 00:05:41.620
very highly compressed form.
00:05:42.770 --> 00:05:45.140
You can also encode 3D shapes with
00:05:45.140 --> 00:05:47.360
similar things, where you map from XYZ
00:05:47.360 --> 00:05:49.440
to some kind of occupancy value whether
00:05:49.440 --> 00:05:52.070
a point in the scene is inside a
00:05:52.070 --> 00:05:52.830
surface or not.
00:05:53.820 --> 00:05:56.550
You can encode MRI images by mapping
00:05:56.550 --> 00:05:59.775
XYZ to density, and you can even create
00:05:59.775 --> 00:06:02.820
3D models by solving for
00:06:03.460 --> 00:06:06.750
The intensities of all the images given
00:06:06.750 --> 00:06:08.870
the positions and poses of the images.
00:06:09.830 --> 00:06:11.780
I think we're here first and then.
00:06:13.320 --> 00:06:13.730
Yeah.
00:06:21.890 --> 00:06:25.010
So L1 and L2 are distances.
00:06:25.010 --> 00:06:27.463
L1 is the sum of absolute differences
00:06:27.463 --> 00:06:29.760
of two vectors, so they're both like
00:06:29.760 --> 00:06:32.250
vector distances.
00:06:33.240 --> 00:06:35.600
L1 is the sum of absolute differences
00:06:35.600 --> 00:06:35.770
and
00:06:35.770 --> 00:06:38.620
L2 is the square root of the sum of
00:06:38.620 --> 00:06:39.600
squared differences.
00:06:40.640 --> 00:06:43.165
So, like, my L2 distance to
00:06:43.165 --> 00:06:45.060
that corner is if I just take a
00:06:45.060 --> 00:06:47.337
straight line to that corner and my L1
00:06:47.337 --> 00:06:49.334
distance is if I walk in this
00:06:49.334 --> 00:06:51.081
direction and then I walk in this
00:06:51.081 --> 00:06:52.890
direction and then I keep doing that.
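
NOTE
A quick numeric check of the two distances in NumPy (the example
vectors are arbitrary, chosen only to illustrate):
import numpy as np
a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
l1 = np.sum(np.abs(a - b))          # |3| + |4| = 7: walk along each axis
l2 = np.sqrt(np.sum((a - b) ** 2))  # sqrt(9 + 16) = 5: straight line
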
00:06:56.160 --> 00:06:56.420
Yep.
00:07:01.980 --> 00:07:03.210
Yeah, right.
00:07:03.210 --> 00:07:04.020
Exactly.
00:07:04.020 --> 00:07:06.030
So it's just taking XY coordinates and
00:07:06.030 --> 00:07:07.210
it's predicting the color.
00:07:07.210 --> 00:07:07.420
Yep.
00:07:08.870 --> 00:07:11.990
And so it's not like.
00:07:11.990 --> 00:07:14.120
So you might be thinking like why would
00:07:14.120 --> 00:07:14.750
you do this?
00:07:14.750 --> 00:07:16.235
Or like what's the point of doing that
00:07:16.235 --> 00:07:17.210
for an image?
00:07:17.210 --> 00:07:18.440
It could be for compression.
00:07:19.230 --> 00:07:20.930
But the really amazing thing, I mean
00:07:20.930 --> 00:07:23.550
this is the basic idea behind this
00:07:23.550 --> 00:07:24.620
technique called NeRF,
00:07:25.280 --> 00:07:27.950
which is an exploding topic in
00:07:27.950 --> 00:07:28.750
computer vision.
00:07:29.550 --> 00:07:32.210
And the surprising thing is that if you
00:07:32.210 --> 00:07:34.830
have a bunch of images, where you know the
00:07:34.830 --> 00:07:37.020
positions of those images in 3D space
00:07:37.020 --> 00:07:37.800
and where they're looking.
00:07:38.580 --> 00:07:42.190
And you simply solve to map from the
00:07:42.190 --> 00:07:45.333
pixel, or from the ray, like through a
00:07:45.333 --> 00:07:47.370
pixel of each image, or from a 3D point
00:07:47.370 --> 00:07:51.065
and direction into the color of the
00:07:51.065 --> 00:07:53.860
image that observes that point or that
00:07:53.860 --> 00:07:54.160
ray.
00:07:54.910 --> 00:07:59.130
You can solve like if you optimize that
00:07:59.130 --> 00:07:59.980
problem.
00:07:59.980 --> 00:08:02.700
then you solve for kind of like a colored
00:08:02.700 --> 00:08:06.300
3D scene that allows you to draw new
00:08:06.300 --> 00:08:08.660
pictures from arbitrary positions and
00:08:08.660 --> 00:08:09.830
they look photorealistic.
00:08:10.820 --> 00:08:13.020
So the network actually discovers the
00:08:13.020 --> 00:08:14.820
underlying geometry because it's the
00:08:14.820 --> 00:08:16.480
simplest explanation for the
00:08:16.480 --> 00:08:17.910
intensities that are observed in all
00:08:17.910 --> 00:08:18.480
these pictures.
00:08:22.340 --> 00:08:24.576
So the network is pretty simple, it's
00:08:24.576 --> 00:08:25.720
just a four-layer network.
00:08:25.720 --> 00:08:27.960
They use 6 layers for this NeRF
00:08:27.960 --> 00:08:29.868
problem, but for all the others it's
00:08:29.868 --> 00:08:31.159
just a four-layer network.
00:08:32.090 --> 00:08:35.100
They're linear layers followed by ReLU,
00:08:35.100 --> 00:08:36.560
except on the Output.
00:08:36.560 --> 00:08:39.940
For RGB for example, you have a Sigmoid
00:08:39.940 --> 00:08:41.450
so that you map it to a zero to 1
00:08:41.450 --> 00:08:41.850
value.
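
NOTE
A minimal PyTorch sketch of the image-fitting setup just described; the
layer width (256) and the stand-in data are my own placeholder choices:
import torch
import torch.nn as nn
# Input: (x, y) pixel position. Output: (r, g, b), squashed to [0, 1].
model = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),  # Sigmoid on the output only
)
loss_fn = nn.MSELoss()      # L2 loss on predicted vs. true colors
xy = torch.rand(1024, 2)    # stand-in pixel coordinates
rgb = torch.rand(1024, 3)   # stand-in true colors at those pixels
loss = loss_fn(model(xy), rgb)
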
00:08:43.820 --> 00:08:47.040
And one of the points of the paper is
00:08:47.040 --> 00:08:49.610
that if you try to encode the pixel
00:08:49.610 --> 00:08:52.510
positions directly, it kind of works,
00:08:52.510 --> 00:08:55.520
but you get these results shown above
00:08:55.520 --> 00:08:57.806
where it's like
00:08:57.806 --> 00:08:59.530
pretty blurry.
00:09:00.190 --> 00:09:02.180
And the reason for that is that the
00:09:02.180 --> 00:09:05.230
mapping from pixel position
00:09:05.940 --> 00:09:08.430
to color is very nonlinear.
00:09:09.420 --> 00:09:09.830
So.
00:09:10.610 --> 00:09:12.050
Essentially you can think of the
00:09:12.050 --> 00:09:15.550
networks, as I talked about, with
00:09:15.550 --> 00:09:18.120
kernel representations and the duality
00:09:18.120 --> 00:09:19.290
of linear models.
00:09:20.080 --> 00:09:21.900
You can think about linear models as
00:09:21.900 --> 00:09:24.770
effectively saying that the similarity
00:09:24.770 --> 00:09:26.510
of two points is based on their dot
00:09:26.510 --> 00:09:27.759
product, like the product of
00:09:27.760 --> 00:09:29.260
corresponding elements summed together.
00:09:30.110 --> 00:09:32.030
And if you take the dot product of two
00:09:32.030 --> 00:09:33.600
pixel positions, it doesn't reflect
00:09:33.600 --> 00:09:35.410
their similarity at all really.
00:09:35.410 --> 00:09:36.570
So like if you get
00:09:37.640 --> 00:09:40.361
two pixel positions that are high-valued
00:09:40.361 --> 00:09:42.240
but not next to each other.
00:09:42.240 --> 00:09:43.500
When you take the dot product, it's
00:09:43.500 --> 00:09:44.490
still a very high value.
00:09:47.010 --> 00:09:50.730
If you transform those features using
00:09:50.730 --> 00:09:53.840
sinusoidal encoding, so you just
00:09:53.840 --> 00:09:55.630
compute sines and cosines of the
00:09:55.630 --> 00:09:58.830
original positions, then it makes it so
00:09:58.830 --> 00:10:00.366
that if you take the dot product of
00:10:00.366 --> 00:10:01.340
those encoded
00:10:02.330 --> 00:10:03.610
positions,
00:10:03.610 --> 00:10:06.280
then positions that are very close
00:10:06.280 --> 00:10:07.870
together will have high similarity.
00:10:10.000 --> 00:10:11.830
So that's, in a nutshell,
00:10:11.900 --> 00:10:15.490
the idea. I mean there's like a
00:10:15.490 --> 00:10:15.990
whole
00:10:17.680 --> 00:10:20.410
theory and stuff behind it, but that's
00:10:20.410 --> 00:10:21.650
the basic idea, is that they have a
00:10:21.650 --> 00:10:23.760
simple transformation that makes this
00:10:23.760 --> 00:10:25.920
mapping more, that makes this
00:10:25.920 --> 00:10:28.170
similarity more linear, and that
00:10:28.170 --> 00:10:30.320
enables you to get high frequency
00:10:30.320 --> 00:10:31.850
images and stuff.
00:10:31.850 --> 00:10:33.660
You can encode high frequency images
00:10:33.660 --> 00:10:33.920
better.
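
NOTE
A rough sketch of that sinusoidal encoding; the frequency choice here
(powers of two times pi) is illustrative, while the papers use other
schemes such as random Fourier features:
import torch
def encode(xy: torch.Tensor, num_freqs: int = 8) -> torch.Tensor:
    # xy: (N, 2) coordinates, assumed scaled to [0, 1].
    freqs = (2.0 ** torch.arange(num_freqs)) * torch.pi
    angles = xy[..., None] * freqs  # (N, 2, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.reshape(xy.shape[0], -1)  # (N, 4 * num_freqs)
Dot products of these encodings are high for nearby positions and fall
off with distance, which is the "more linear" similarity described above.
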
00:10:37.910 --> 00:10:39.990
Right, so I want to spend a little time
00:10:39.990 --> 00:10:42.700
talking about homework two and.
00:10:43.580 --> 00:10:44.350
I'm also going.
00:10:44.350 --> 00:10:45.860
I can also take questions.
00:10:45.860 --> 00:10:49.180
This is due in about 12 days or so.
00:10:50.520 --> 00:10:51.270
11 days.
00:10:52.260 --> 00:10:52.890
Yeah, mine.
00:10:53.690 --> 00:10:56.280
I'm on VGA, unfortunately, so
00:10:57.460 --> 00:11:02.560
My Size of things is annoyingly small
00:11:02.560 --> 00:11:03.300
and stretched.
00:11:09.940 --> 00:11:13.630
It takes things down from like 4K to 480.
00:11:18.060 --> 00:11:18.420
All right.
00:11:20.890 --> 00:11:23.130
So for homework two, first an overview:
00:11:23.130 --> 00:11:23.940
there's three parts.
00:11:25.270 --> 00:11:26.430
Alright, I guess I won't overview.
00:11:26.430 --> 00:11:27.180
I'll go into each part.
00:11:27.850 --> 00:11:30.260
So the first part is and I'll take
00:11:30.260 --> 00:11:30.695
questions.
00:11:30.695 --> 00:11:32.520
I'll just describe it briefly and then
00:11:32.520 --> 00:11:34.000
see if anybody has any clarifying
00:11:34.000 --> 00:11:34.542
questions.
00:11:34.542 --> 00:11:38.160
The first part is to look at like bias
00:11:38.160 --> 00:11:41.130
variance and tree models.
00:11:42.470 --> 00:11:44.620
So we're doing the same temperature
00:11:44.620 --> 00:11:46.340
problem that we saw in homework one.
00:11:47.260 --> 00:11:48.990
Same exact features and labels.
00:11:49.920 --> 00:11:52.200
And we are going to look at three
00:11:52.200 --> 00:11:54.870
different kinds of models, regression
00:11:54.870 --> 00:11:55.410
trees.
00:11:56.850 --> 00:11:59.590
Random forests and boosted regression
00:11:59.590 --> 00:12:02.020
trees, and in particular we're using
00:12:02.020 --> 00:12:04.510
like this gradient boosting method, but
00:12:04.510 --> 00:12:06.400
the type of boosting is not really
00:12:06.400 --> 00:12:07.459
important and we're not going to
00:12:07.460 --> 00:12:08.500
implement it, we're just going to use
00:12:08.500 --> 00:12:08.910
the library.
00:12:09.670 --> 00:12:11.055
So what we're going to do is we're
00:12:11.055 --> 00:12:13.250
going to test what the training
00:12:13.250 --> 00:12:15.350
error and the validation error are
00:12:15.960 --> 00:12:18.000
for five different depths.
00:12:19.450 --> 00:12:22.170
And these five depths mean how deep
00:12:22.170 --> 00:12:22.910
we grow the tree.
00:12:24.220 --> 00:12:27.192
And then we're going to plot it and
00:12:27.192 --> 00:12:28.810
then answer some questions about it.
00:12:30.180 --> 00:12:32.410
So looking at this Starter code.
00:12:38.400 --> 00:12:39.980
So this is just loading the temperature
00:12:39.980 --> 00:12:40.260
data.
00:12:40.260 --> 00:12:42.523
It's the same as before plotting it,
00:12:42.523 --> 00:12:44.340
just to give a sense of what it means.
00:12:46.640 --> 00:12:47.470
And then I've got
00:12:48.320 --> 00:12:49.460
this error function.
00:12:51.440 --> 00:12:53.580
This function is included to plot the
00:12:53.580 --> 00:12:56.570
errors and it's just taking as input
00:12:56.570 --> 00:12:58.560
that depth array.
00:12:59.320 --> 00:13:02.280
And corresponding lists or arrays that
00:13:02.280 --> 00:13:05.670
store the Training error and validation
00:13:05.670 --> 00:13:08.240
error for each Model.
00:13:09.110 --> 00:13:12.756
Training error means the RMSE on
00:13:12.756 --> 00:13:14.982
the training set, and validation means
00:13:14.982 --> 00:13:17.360
the error
00:13:17.360 --> 00:13:18.819
on the validation set.
00:13:21.850 --> 00:13:22.420
These are.
00:13:22.420 --> 00:13:27.230
I provide the code to initialize a
00:13:27.230 --> 00:13:29.070
given model.
00:13:29.070 --> 00:13:31.950
So you can create this model, you
00:13:31.950 --> 00:13:33.700
can do Model dot fit with the training
00:13:33.700 --> 00:13:36.220
data and Model dot predict.
00:13:37.270 --> 00:13:40.730
And then you can like compute the RMSE,
00:13:40.730 --> 00:13:42.555
evaluate on the validation data and
00:13:42.555 --> 00:13:43.310
compute RMSE.
00:13:43.310 --> 00:13:44.960
So it's not meant to be
00:13:44.960 --> 00:13:46.430
like an
00:13:46.430 --> 00:13:48.135
algorithm coding problem, it's more of
00:13:48.135 --> 00:13:49.990
an evaluation and analysis problem.
00:13:52.180 --> 00:13:53.330
No, you don't need to code these
00:13:53.330 --> 00:13:53.725
functions.
00:13:53.725 --> 00:13:54.600
You just call this.
00:13:56.450 --> 00:13:58.200
So you would for example call the
00:13:58.200 --> 00:13:58.960
decision tree.
00:13:58.960 --> 00:14:00.990
You'd do a loop through the max depths.
00:14:01.920 --> 00:14:04.440
For each of these, you
00:14:04.440 --> 00:14:07.159
instantiate the model, fit, predict on
00:14:07.160 --> 00:14:08.510
train, predict on test.
00:14:09.570 --> 00:14:12.080
Compute the RMSE.
00:14:13.080 --> 00:14:15.250
If you want to use built-in scoring
00:14:15.250 --> 00:14:17.450
functions to compute RMSE, it's fine
00:14:17.450 --> 00:14:18.860
with me as long as it's accurate.
00:14:20.850 --> 00:14:22.910
And then you record them
00:14:22.910 --> 00:14:24.650
and then you plot it with this
00:14:24.650 --> 00:14:25.170
function.
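
NOTE
A minimal sketch of that loop with scikit-learn; the depth list and the
data variables (X_train, y_train, X_val, y_val) are assumed to come
from the provided starter code:
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
depths = [1, 2, 4, 8, 16]  # hypothetical max-depth settings
train_err, val_err = [], []
for d in depths:
    model = DecisionTreeRegressor(max_depth=d)
    model.fit(X_train, y_train)
    # RMSE = square root of the mean squared error on each split
    train_err.append(np.sqrt(mean_squared_error(y_train, model.predict(X_train))))
    val_err.append(np.sqrt(mean_squared_error(y_val, model.predict(X_val))))
# ...then hand depths, train_err, and val_err to the provided plot function.
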
00:14:28.350 --> 00:14:30.280
And.
00:14:30.710 --> 00:14:33.190
So let's look at the report template a
00:14:33.190 --> 00:14:33.610
little bit.
00:14:34.300 --> 00:14:36.830
Right, so just generating that plot is
00:14:36.830 --> 00:14:37.890
worth 10 points.
00:14:38.540 --> 00:14:42.580
And analyzing the result is worth 20
00:14:42.580 --> 00:14:43.070
points.
00:14:43.070 --> 00:14:44.900
So there's more points for answering
00:14:44.900 --> 00:14:45.780
questions about it, yeah.
00:15:01.480 --> 00:15:04.110
So in some cases it's pretty
00:15:04.110 --> 00:15:05.490
literally from the plot.
00:15:05.490 --> 00:15:06.980
For example, for regression trees,
00:15:06.980 --> 00:15:08.610
which tree depth achieves minimum
00:15:08.610 --> 00:15:09.730
validation error?
00:15:09.730 --> 00:15:11.100
That's something that you should be
00:15:11.100 --> 00:15:11.600
able to
00:15:12.400 --> 00:15:14.430
basically read directly from the plot.
00:15:14.430 --> 00:15:18.200
In other cases it requires some other
00:15:18.200 --> 00:15:20.170
knowledge and interpretation, so for
00:15:20.170 --> 00:15:20.820
example.
00:15:22.310 --> 00:15:24.955
Do these trees seem to perform better
00:15:24.955 --> 00:15:26.580
with smaller or larger trees?
00:15:26.580 --> 00:15:27.040
Why?
00:15:27.040 --> 00:15:28.474
So whether they perform better with
00:15:28.474 --> 00:15:29.960
smaller or larger trees is something
00:15:29.960 --> 00:15:31.760
you can observe directly from the plot,
00:15:31.760 --> 00:15:33.900
but the why is like applying your
00:15:33.900 --> 00:15:34.840
understanding of
00:15:35.880 --> 00:15:38.120
bias variance in the tree algorithm to
00:15:38.120 --> 00:15:40.555
be able to say why what you observe is
00:15:40.555 --> 00:15:40.940
the case.
00:15:43.360 --> 00:15:45.500
Likewise, which model is least prone to
00:15:45.500 --> 00:15:45.870
overfitting.
00:15:45.870 --> 00:15:48.170
You can observe that if you understand
00:15:48.170 --> 00:15:49.660
what overfitting means directly in the
00:15:49.660 --> 00:15:52.150
plot, but again the why requires
00:15:52.150 --> 00:15:52.990
some understanding.
00:15:53.750 --> 00:15:57.850
And which model has the lowest bias
00:15:57.850 --> 00:15:59.470
requires that you understand what bias
00:15:59.470 --> 00:16:01.230
means, but if you do, then you can read
00:16:01.230 --> 00:16:03.380
it directly from the plot as well.
00:16:05.360 --> 00:16:05.630
Yeah.
00:16:10.460 --> 00:16:10.790
OK.
00:16:10.790 --> 00:16:12.770
Any other questions about part one?
00:16:15.580 --> 00:16:18.060
OK, so Part 2
00:16:18.740 --> 00:16:22.110
is going back to MNIST again, and we
00:16:22.110 --> 00:16:23.740
will move beyond these data sets for
00:16:23.740 --> 00:16:24.040
homework
00:16:24.040 --> 00:16:25.490
three.
00:16:27.350 --> 00:16:30.230
But going back to MNIST, and now,
00:16:30.230 --> 00:16:30.820
like,
00:16:32.200 --> 00:16:34.570
applying MLPs to MNIST.
00:16:36.910 --> 00:16:39.470
So let's go to the Starter code again.
00:16:43.390 --> 00:16:45.160
Right, so this is the same code as
00:16:45.160 --> 00:16:47.210
before, just to load the MNIST data.
00:16:47.210 --> 00:16:48.800
We're not going to actually use like
00:16:48.800 --> 00:16:51.052
different sub-splits, we're just
00:16:51.052 --> 00:16:52.730
going to use the full training set.
00:16:53.430 --> 00:16:54.390
And validation set.
00:16:56.230 --> 00:16:58.690
There's some code here. OK, so let me
00:16:58.690 --> 00:17:01.090
first talk about what the problem is.
00:17:02.100 --> 00:17:03.770
So you're going to train a network.
00:17:03.770 --> 00:17:05.570
We give you a starting like learning
00:17:05.570 --> 00:17:08.290
rate and optimizer to use, and batch
00:17:08.290 --> 00:17:08.780
size.
00:17:09.520 --> 00:17:11.870
And you record the training and the
00:17:11.870 --> 00:17:14.690
validation loss after each epoch.
00:17:15.680 --> 00:17:16.890
That's the cycle through the training
00:17:16.890 --> 00:17:17.090
data.
00:17:17.800 --> 00:17:19.505
And then you compute the validation error of
00:17:19.505 --> 00:17:21.590
the final model, and then you report
00:17:21.590 --> 00:17:24.010
some of these errors and losses in the
00:17:24.010 --> 00:17:24.350
report.
00:17:25.030 --> 00:17:27.375
And then we say try some different
00:17:27.375 --> 00:17:28.600
learning rates.
00:17:28.600 --> 00:17:31.340
So vary that eta, the learning rate of
00:17:31.340 --> 00:17:32.410
your optimizer.
00:17:33.090 --> 00:17:35.750
And again, compare:
00:17:35.750 --> 00:17:38.050
create these plots of the training and
00:17:38.050 --> 00:17:39.630
validation loss and compare them for
00:17:39.630 --> 00:17:40.400
different learning rates.
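
NOTE
A bare-bones sketch of that training loop in PyTorch; the network
shape, optimizer, learning rate, epoch count, and the train_loader /
val_loader / evaluate names are placeholders, not the assignment's
actual values:
import torch
import torch.nn as nn
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # lr is the eta above
loss_fn = nn.CrossEntropyLoss()
train_losses, val_losses = [], []
for epoch in range(20):                  # one epoch = one pass over the data
    for xb, yb in train_loader:          # assumed torch DataLoader
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    train_losses.append(evaluate(model, train_loader))  # assumed helper
    val_losses.append(evaluate(model, val_loader))
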
00:17:41.640 --> 00:17:42.070
Question.
00:17:47.510 --> 00:17:50.610
It's in some ways it's an arbitrary
00:17:50.610 --> 00:17:52.520
choice, but PyTorch is a really
00:17:52.520 --> 00:17:54.310
popular package for Deep Learning.
00:17:54.310 --> 00:17:55.730
So like there are others but.
00:17:56.340 --> 00:17:59.133
Since we're using Python, I
00:17:59.133 --> 00:18:01.110
would use a Python package and it's
00:18:01.110 --> 00:18:01.515
just like.
00:18:01.515 --> 00:18:03.260
I would say that probably like the most
00:18:03.260 --> 00:18:04.710
popular framework right now.
00:18:08.830 --> 00:18:11.490
Yeah, TensorFlow is also another,
00:18:11.490 --> 00:18:12.830
would be another good candidate.
00:18:12.830 --> 00:18:15.120
Or Keras I guess, which is I think
00:18:15.120 --> 00:18:16.220
based on TensorFlow maybe.
00:18:17.350 --> 00:18:19.596
But yeah, we're using torch.
00:18:19.596 --> 00:18:20.920
Yeah, there's no like.
00:18:20.920 --> 00:18:22.670
I don't have anything against the other
00:18:22.670 --> 00:18:25.600
packages, but I think PyTorch
00:18:26.740 --> 00:18:29.760
probably still
00:18:29.760 --> 00:18:30.840
edges out TensorFlow
00:18:30.840 --> 00:18:32.460
right now as the most popular, I would
00:18:32.460 --> 00:18:32.590
say.
00:18:34.580 --> 00:18:35.170
00:18:37.650 --> 00:18:41.452
Then finally, you try to improve the network.
00:18:41.452 --> 00:18:42.840
You can adjust the learning rate and
00:18:42.840 --> 00:18:44.305
the hidden layer size and other things
00:18:44.305 --> 00:18:45.890
to try to improve the network and you
00:18:45.890 --> 00:18:48.460
should be able to get validation error
00:18:48.460 --> 00:18:49.292
less than 25.
00:18:49.292 --> 00:18:50.800
So this is basically.
00:18:50.800 --> 00:18:53.180
I just chose this because like in a few
00:18:53.180 --> 00:18:55.200
minutes or, I don't know, 15 minutes of
00:18:55.200 --> 00:18:55.522
experimentation.
00:18:55.522 --> 00:18:57.376
This is like roughly what I was able to
00:18:57.376 --> 00:18:57.509
get.
00:18:58.730 --> 00:18:59.280
00:19:00.200 --> 00:19:00.940
So.
00:19:01.790 --> 00:19:02.940
If we look at the.
00:19:06.020 --> 00:19:07.730
So then we have like.
00:19:07.730 --> 00:19:09.610
So basically the main part of the code
00:19:09.610 --> 00:19:11.070
that you need to write is in here.
00:19:11.070 --> 00:19:14.580
So where you have the training and it's
00:19:14.580 --> 00:19:16.999
pretty similar to the example that I
00:19:17.000 --> 00:19:18.220
gave in class.
00:19:18.220 --> 00:19:20.244
But the biggest difference is that in
00:19:20.244 --> 00:19:22.040
the example I did in class.
00:19:22.800 --> 00:19:25.380
It's a binary problem and so you
00:19:25.380 --> 00:19:27.254
represent you have only one output, and
00:19:27.254 --> 00:19:29.034
if that Output is negative then it
00:19:29.034 --> 00:19:30.259
indicates one class and if it's
00:19:30.260 --> 00:19:31.870
positive it indicates another class.
00:19:32.820 --> 00:19:33.380
If you have
00:19:34.120 --> 00:19:35.810
multiple classes,
00:19:36.170 --> 00:19:36.730
00:19:37.510 --> 00:19:38.920
that obviously doesn't work.
00:19:38.920 --> 00:19:40.825
You can't represent it with one Output.
00:19:40.825 --> 00:19:43.980
You instead need to Output one value
00:19:43.980 --> 00:19:45.200
for each of your classes.
00:19:45.200 --> 00:19:46.645
So if you
00:19:46.645 --> 00:19:48.009
have two classes, you can have one
00:19:48.009 --> 00:19:48.253
output.
00:19:48.253 --> 00:19:50.209
If you have three classes, you need 3
00:19:50.210 --> 00:19:50.540
outputs.
00:19:51.280 --> 00:19:54.060
You have one output for each class and
00:19:54.060 --> 00:19:57.020
that output,
00:19:57.020 --> 00:19:58.780
depending on how you set up your loss,
00:19:58.780 --> 00:20:02.450
it can either be a probability, so zero
00:20:02.450 --> 00:20:04.450
to one, or it can be a logit:
00:20:05.530 --> 00:20:08.690
negative infinity to infinity, the log
00:20:08.690 --> 00:20:09.430
class ratio.
00:20:13.080 --> 00:20:17.090
And then you need to like reformat
00:20:17.090 --> 00:20:20.043
instead of representing the label as
00:20:20.043 --> 00:20:22.069
like 0, 1, 2, ..., 9.
00:20:22.680 --> 00:20:24.390
You represent it with what's called a one-
00:20:24.390 --> 00:20:26.640
hot vector, and it's explained what that
00:20:26.640 --> 00:20:27.250
is in the Tips.
00:20:27.250 --> 00:20:30.370
But basically a 3 is represented as like
00:20:30.370 --> 00:20:33.479
you have a ten element vector and the
00:20:33.480 --> 00:20:35.830
third value of that vector is 1 and all
00:20:35.830 --> 00:20:37.480
the other values are zero.
00:20:37.480 --> 00:20:39.370
So it's like you just represent which
00:20:39.370 --> 00:20:42.210
of these ten labels is on for this
00:20:42.210 --> 00:20:42.760
example.
00:20:45.180 --> 00:20:46.070
Otherwise.
00:20:47.420 --> 00:20:49.010
That makes some small differences in
00:20:49.010 --> 00:20:52.170
how you compute loss just like code
00:20:52.170 --> 00:20:54.500
wise, but otherwise it's essentially
00:20:54.500 --> 00:20:54.890
the same.
00:20:55.860 --> 00:20:57.090
I also have.
00:21:00.540 --> 00:21:02.420
And one more.
00:21:02.420 --> 00:21:02.730
OK.
00:21:02.730 --> 00:21:04.640
So first let me go to the report for
00:21:04.640 --> 00:21:04.750
that.
00:21:05.500 --> 00:21:06.850
So you report your training and your
00:21:06.850 --> 00:21:09.600
validation loss and your curves, your
00:21:09.600 --> 00:21:09.930
plots.
00:21:11.230 --> 00:21:12.240
And your final losses?
00:21:13.520 --> 00:21:15.630
I mean, your final errors.
00:21:18.240 --> 00:21:18.920
00:21:21.010 --> 00:21:23.600
So what was I going to say?
00:21:23.600 --> 00:21:24.040
Yes.
00:21:24.040 --> 00:21:26.900
So the so the tips and tricks.
00:21:30.700 --> 00:21:33.600
Are focused on the Part 2 because I
00:21:33.600 --> 00:21:36.670
think part one is a little bit.
00:21:36.670 --> 00:21:38.850
There's not that much to it really code
00:21:38.850 --> 00:21:39.140
wise.
00:21:41.720 --> 00:21:44.300
So probably most of
00:21:44.300 --> 00:21:46.933
you are new to PyTorch or Deep Learning
00:21:46.933 --> 00:21:47.779
or MLPs.
00:21:49.400 --> 00:21:51.520
So I would recommend looking at this
00:21:51.520 --> 00:21:52.460
tutorial first.
00:21:53.130 --> 00:21:56.170
And it explains it like pretty clearly
00:21:56.170 --> 00:21:57.780
how to do things.
00:21:57.780 --> 00:22:00.060
Also, the code that I wrote
00:22:00.060 --> 00:22:03.470
before, a lot of it can
00:22:03.470 --> 00:22:04.390
be applied directly.
00:22:05.180 --> 00:22:05.560
And it's.
00:22:05.560 --> 00:22:09.250
Also the basic loop is down here so.
00:22:10.470 --> 00:22:13.805
You shouldn't like abstractly it's not.
00:22:13.805 --> 00:22:15.965
It's not necessarily that you can see
00:22:15.965 --> 00:22:18.490
the slides and understand MLPs and know
00:22:18.490 --> 00:22:19.690
exactly how you should code it.
00:22:19.690 --> 00:22:21.490
You will need to look at the
00:22:21.490 --> 00:22:23.830
tutorial or at this code
00:22:23.830 --> 00:22:24.210
structure.
00:22:26.280 --> 00:22:28.840
Because it's using libraries, like
00:22:28.840 --> 00:22:31.180
torch, which handles for us all the
00:22:31.180 --> 00:22:33.230
optimization: you just specify a
00:22:33.230 --> 00:22:35.829
loss, you specify your structure of the
00:22:35.830 --> 00:22:37.130
network and then it kind of does
00:22:37.130 --> 00:22:38.020
everything else for you.
00:22:40.840 --> 00:22:43.355
OK, so the Tips also say how you set up
00:22:43.355 --> 00:22:47.046
a data loader and the basic procedure,
00:22:47.046 --> 00:22:50.585
how you get the GPU to work on Colab and
00:22:50.585 --> 00:22:53.988
how you can compute the softmax which
00:22:53.988 --> 00:22:55.970
is the probability of a particular
00:22:55.970 --> 00:22:56.300
label.
00:22:56.300 --> 00:22:58.940
So this is like the probability of this
00:22:58.940 --> 00:23:00.540
ground truth label y_i.
00:23:01.730 --> 00:23:04.190
Given the data, if this is stored as
00:23:04.190 --> 00:23:05.260
like a zero to 9 value.
00:23:10.130 --> 00:23:12.900
Alright, any questions about Part 2?
00:23:12.980 --> 00:23:13.150
Yes.
00:23:21.340 --> 00:23:25.230
So if you have multiple classes, that's
00:23:25.230 --> 00:23:25.870
not what I want to do.
00:23:26.770 --> 00:23:29.135
If you have multiple classes, then you
00:23:29.135 --> 00:23:29.472
have.
00:23:29.472 --> 00:23:31.594
Then at the Output layer you have
00:23:31.594 --> 00:23:33.640
multiple nodes, and each of those nodes
00:23:33.640 --> 00:23:35.010
are connected to the previous layer
00:23:35.010 --> 00:23:36.080
with their own set of weights.
00:23:37.600 --> 00:23:39.560
And so they use like the same
00:23:39.560 --> 00:23:40.476
intermediate features.
00:23:40.476 --> 00:23:42.800
They use the same representations that
00:23:42.800 --> 00:23:45.360
are in the hidden layers or in the
00:23:45.360 --> 00:23:46.950
inner layers of the network.
00:23:46.950 --> 00:23:48.950
But they each have their own predictor
00:23:48.950 --> 00:23:51.300
at the end, and so
00:23:51.300 --> 00:23:53.270
instead of producing a
00:23:53.270 --> 00:23:55.210
single value, it produces an array of
00:23:55.210 --> 00:23:55.700
values.
00:23:56.460 --> 00:23:59.200
And that array will typically represent
00:23:59.200 --> 00:24:00.690
like the probability of each class.
00:24:04.970 --> 00:24:05.160
Yeah.
00:24:10.980 --> 00:24:13.660
So, to get the
00:24:13.820 --> 00:24:15.180
loss for the validation set,
00:24:15.180 --> 00:24:17.070
you evaluate the validation examples,
00:24:17.070 --> 00:24:20.310
so call the model on X_val.
00:24:21.210 --> 00:24:23.827
And then you compute the negative log
00:24:23.827 --> 00:24:26.252
probability of the true Label given the
00:24:26.252 --> 00:24:28.660
data, which will be based on
00:24:28.660 --> 00:24:30.450
the outputs of your network.
00:24:30.450 --> 00:24:31.985
So the network will give you the
00:24:31.985 --> 00:24:33.130
probability of each class.
00:24:33.830 --> 00:24:35.930
And then you sum the negative log
00:24:35.930 --> 00:24:37.110
probability of the true class.
00:24:47.700 --> 00:24:50.440
For each example for each class, yeah.
00:24:53.590 --> 00:24:57.780
So Part 3 is.
00:24:58.970 --> 00:25:01.350
More a data exploration problem in a
00:25:01.350 --> 00:25:01.540
way.
00:25:02.310 --> 00:25:06.190
So there's this data set, the Palmer
00:25:06.190 --> 00:25:08.120
Archipelago Penguin data set.
00:25:08.750 --> 00:25:10.650
Where they recorded various
00:25:10.650 --> 00:25:13.270
measurements of Penguins and you're
00:25:13.270 --> 00:25:14.740
trying to predict the species of the
00:25:14.740 --> 00:25:15.150
Penguin.
00:25:16.360 --> 00:25:18.140
And the original data had
00:25:18.140 --> 00:25:20.270
some NaNs and stuff.
00:25:20.270 --> 00:25:21.110
So we.
00:25:21.910 --> 00:25:22.860
We like kind of.
00:25:22.860 --> 00:25:23.850
I cleaned it up a bit.
00:25:24.460 --> 00:25:25.690
We cleaned it up a bit.
00:25:27.870 --> 00:25:31.300
And then in some of the Starter code we
00:25:31.300 --> 00:25:34.600
turned some of the strings into one-hot
00:25:34.600 --> 00:25:37.470
vectors because Sklearn doesn't deal
00:25:37.470 --> 00:25:38.120
with the strings.
00:25:40.450 --> 00:25:43.680
So the first part is to like look at
00:25:43.680 --> 00:25:44.560
some of the.
00:25:45.730 --> 00:25:47.600
To just do scatter plots of some
00:25:47.600 --> 00:25:48.230
of the features.
00:25:50.150 --> 00:25:52.060
And then in the report.
00:25:53.820 --> 00:25:54.950
You just.
00:25:56.400 --> 00:25:58.050
You just share the scatter
00:25:58.050 --> 00:26:00.662
plots and you say if you had to choose
00:26:00.662 --> 00:26:02.410
two features like what 2 features would
00:26:02.410 --> 00:26:03.800
you choose based on looking at some of
00:26:03.800 --> 00:26:04.420
the scatterplots?
00:26:05.390 --> 00:26:06.890
It's not like there's
00:26:06.890 --> 00:26:08.800
necessarily a single right answer.
00:26:08.800 --> 00:26:09.542
If you
00:26:09.542 --> 00:26:11.490
try
00:26:11.490 --> 00:26:13.404
out some different combinations and
00:26:13.404 --> 00:26:14.760
your answer makes sense given what you
00:26:14.760 --> 00:26:15.510
tried, that's fine.
00:26:15.510 --> 00:26:16.990
It's not that you have to find the
00:26:16.990 --> 00:26:19.080
very best answer by trying all pairs or
00:26:19.080 --> 00:26:19.600
anything like that.
00:26:20.980 --> 00:26:23.840
So it's more of an exercise than a
00:26:23.840 --> 00:26:25.310
right or wrong kind of thing.
00:26:26.090 --> 00:26:26.610
00:26:27.280 --> 00:26:29.460
And in this Starter code the.
00:26:30.240 --> 00:26:30.820
00:26:31.830 --> 00:26:34.460
We provide an example so you can just
00:26:34.460 --> 00:26:37.130
run this scatterplot code with
00:26:37.130 --> 00:26:39.330
different combinations of features.
00:26:43.910 --> 00:26:45.530
Alright and then.
00:26:48.400 --> 00:26:50.830
The second part is to use a decision
00:26:50.830 --> 00:26:51.140
tree.
00:26:51.140 --> 00:26:53.910
If you train a decision tree and
00:26:53.910 --> 00:26:57.480
visualize it on the Features, then
00:26:57.480 --> 00:27:00.230
you'll be able to see a tree structure
00:27:00.230 --> 00:27:00.410
that.
00:27:01.260 --> 00:27:02.970
That kind of shows you like.
00:27:02.970 --> 00:27:04.580
You can think of that tree in terms of
00:27:04.580 --> 00:27:05.280
different rules.
00:27:05.280 --> 00:27:07.530
If you follow the branches down, each
00:27:07.530 --> 00:27:09.885
like path through the tree is a set of
00:27:09.885 --> 00:27:10.140
rules.
00:27:10.900 --> 00:27:12.860
And there are different Rule
00:27:12.860 --> 00:27:15.230
combinations that can almost perfectly
00:27:15.230 --> 00:27:17.373
distinguish Gentoos from all the other
00:27:17.373 --> 00:27:18.940
species, from the other two species.
00:27:20.180 --> 00:27:22.830
So just train the tree and visualize
00:27:22.830 --> 00:27:23.920
and as a stretch goal.
00:27:23.920 --> 00:27:25.560
You can find a different rule, for
00:27:25.560 --> 00:27:27.180
example by eliminating some feature
00:27:27.180 --> 00:27:29.460
that was used in the first rule or by
00:27:29.460 --> 00:27:32.003
using a different criterion for the
00:27:32.003 --> 00:27:32.870
tree Learning.
00:27:35.620 --> 00:27:37.210
Then you include the rule in your
00:27:37.210 --> 00:27:37.610
report.
00:27:37.610 --> 00:27:38.780
So it should be something like:
00:27:38.780 --> 00:27:40.910
If A is greater than five and B is less
00:27:40.910 --> 00:27:42.300
than two, then it's a Gentoo,
00:27:42.300 --> 00:27:43.380
otherwise it's not.
00:27:48.400 --> 00:27:50.370
And then finally design an MLP model to
00:27:50.370 --> 00:27:51.560
maximize your accuracy.
00:27:52.190 --> 00:27:54.000
This is not actually.
00:27:55.080 --> 00:27:56.750
Again, you don't have to program it,
00:27:56.750 --> 00:27:57.390
you just.
00:27:57.390 --> 00:27:59.340
This is actually kind of like.
00:28:01.580 --> 00:28:03.150
Almost like, ridiculously easy.
00:28:03.830 --> 00:28:06.020
You can just call your different.
00:28:06.020 --> 00:28:08.560
We've learned a bunch of models, for
00:28:08.560 --> 00:28:10.840
example these models up here.
00:28:11.500 --> 00:28:13.600
You can try these different models that
00:28:13.600 --> 00:28:15.820
we used in this experiment, as well as
00:28:15.820 --> 00:28:17.600
any other models that you think might
00:28:17.600 --> 00:28:20.840
be applicable except for.
00:28:20.840 --> 00:28:21.730
Just make sure you're using
00:28:21.730 --> 00:28:23.126
Classification models and not
00:28:23.126 --> 00:28:23.759
regression models.
00:28:23.760 --> 00:28:25.820
But you can try logistic regression or
00:28:25.820 --> 00:28:27.480
random forests or trees.
00:28:28.550 --> 00:28:31.130
And when you instantiate the Model,
00:28:31.130 --> 00:28:32.069
just define the model.
00:28:32.070 --> 00:28:34.180
Here for example, logistic_model equals
00:28:34.180 --> 00:28:37.820
LogisticRegression(), with empty parens.
00:28:38.910 --> 00:28:40.365
And then if you put the Model in here
00:28:40.365 --> 00:28:42.700
and your data, this will do the cross
00:28:42.700 --> 00:28:44.190
validation for you and compute the
00:28:44.190 --> 00:28:44.660
score.
00:28:44.660 --> 00:28:46.255
So really, just try different models
00:28:46.255 --> 00:28:49.540
and see what works well and I found
00:28:49.540 --> 00:28:52.830
pretty quickly a model that was 99.5%
00:28:52.830 --> 00:28:53.230
accurate.
00:28:53.900 --> 00:28:54.120
So.
00:28:55.410 --> 00:28:56.690
So again, it's just like a little bit
00:28:56.690 --> 00:28:58.600
of a simple model testing
00:28:58.870 --> 00:28:59.050
experiment.
00:29:00.310 --> 00:29:00.710
OK.
00:29:01.560 --> 00:29:04.135
So that's the main part of homework 2.
00:29:04.135 --> 00:29:06.710
The stretch goals are to further improve
00:29:06.710 --> 00:29:09.190
MNIST by improving the design of your
00:29:09.190 --> 00:29:09.630
network.
00:29:11.320 --> 00:29:13.310
Find a second rule, which I mentioned, and
00:29:13.310 --> 00:29:15.000
the positional encoding.
00:29:15.000 --> 00:29:18.660
So this is the like Multi layer network
00:29:18.660 --> 00:29:20.390
for predicting color given position.
00:29:22.460 --> 00:29:24.560
And it should be possible to get the
00:29:24.560 --> 00:29:26.450
full points, using the positional
00:29:26.450 --> 00:29:26.860
encoding.
00:29:26.860 --> 00:29:28.210
You should be able to generate like a
00:29:28.210 --> 00:29:30.440
fairly natural looking image. It should
00:29:30.440 --> 00:29:30.770
look good.
00:29:30.770 --> 00:29:32.250
It might not be quite as sharp as the
00:29:32.250 --> 00:29:33.540
original, but it should be pretty good.
00:29:37.670 --> 00:29:39.180
OK, one more question.
00:29:50.410 --> 00:29:54.500
Yeah, Naive Bayes and KNN are two
00:29:54.500 --> 00:29:56.040
examples of Classification algorithms.
00:29:56.720 --> 00:30:00.500
And Naive Bayes is not usually the best
00:30:00.500 --> 00:30:00.990
so.
00:30:02.960 --> 00:30:04.230
Not the first thing I would try.
00:30:06.260 --> 00:30:09.630
So random forests, decision trees, SVMs,
00:30:09.630 --> 00:30:11.090
Naive Bayes, logistic regression.
00:30:11.860 --> 00:30:12.870
All of those can apply.
00:30:15.190 --> 00:30:18.430
So that was a little bit, it took some
00:30:18.430 --> 00:30:19.580
time, but that's OK.
00:30:21.230 --> 00:30:22.740
That was one of the things that a lot
00:30:22.740 --> 00:30:24.550
of students wanted was, or at least I
00:30:24.550 --> 00:30:26.720
think that they said they wanted, is
00:30:26.720 --> 00:30:29.090
like to talk like a little bit more in
00:30:29.090 --> 00:30:30.360
depth about the homework and try to
00:30:30.360 --> 00:30:32.420
explain like what we're trying to ask
00:30:32.420 --> 00:30:32.710
for.
00:30:32.710 --> 00:30:34.330
So hopefully that does help a little
00:30:34.330 --> 00:30:34.480
bit.
00:30:36.390 --> 00:30:38.485
Alright, so now we can move on to Deep
00:30:38.485 --> 00:30:40.020
Learning, which is a pretty exciting
00:30:40.020 --> 00:30:41.180
topic.
00:30:41.180 --> 00:30:42.545
I'm sure everyone's heard of Deep
00:30:42.545 --> 00:30:42.810
Learning.
00:30:43.950 --> 00:30:45.470
So I'm going to tell the story of how
00:30:45.470 --> 00:30:47.580
Deep Learning became so important, and
00:30:47.580 --> 00:30:48.650
then I'm going to talk about the
00:30:48.650 --> 00:30:49.440
Optimizers.
00:30:49.440 --> 00:30:51.460
So going beyond the Vanilla SGD.
00:30:52.130 --> 00:30:55.940
And get into Residual Networks, which
00:30:55.940 --> 00:30:59.210
is one of the mainstays and.
00:31:00.160 --> 00:31:01.730
I'm kind of like conscious that I'm a
00:31:01.730 --> 00:31:03.190
computer vision researcher, so I was
00:31:03.190 --> 00:31:03.940
like, am I?
00:31:05.520 --> 00:31:07.543
For Deep Learning, do I just focus on
00:31:07.543 --> 00:31:09.280
like I don't want to just focus on the
00:31:09.280 --> 00:31:11.639
vision Networks if there were like
00:31:11.640 --> 00:31:12.935
other things that were important for
00:31:12.935 --> 00:31:14.020
the development of Deep Learning?
00:31:14.640 --> 00:31:16.090
But when I looked into it, I realized
00:31:16.090 --> 00:31:17.930
that vision was like the breakthrough
00:31:17.930 --> 00:31:18.560
in Deep Learning.
00:31:18.560 --> 00:31:21.496
So the first big algorithms for Deep
00:31:21.496 --> 00:31:24.060
Learning were like as you'll see, based
00:31:24.060 --> 00:31:26.149
on ImageNet and image-based
00:31:26.150 --> 00:31:26.880
classifiers.
00:31:27.990 --> 00:31:29.160
And then its huge
00:31:29.160 --> 00:31:32.870
impact on NLP came a little bit later,
00:31:32.870 --> 00:31:35.203
but mainly Deep Learning makes its
00:31:35.203 --> 00:31:37.200
impact on structured data, where you
00:31:37.200 --> 00:31:39.660
have things like images and text, where
00:31:39.660 --> 00:31:41.880
relationships between the different
00:31:41.880 --> 00:31:43.720
elements that are fed into the network
00:31:43.720 --> 00:31:46.005
need to be Learned, where you're trying
00:31:46.005 --> 00:31:47.540
to learn patterns of these elements.
00:31:51.310 --> 00:31:53.050
Alright, so Deep Learning starts with
00:31:53.050 --> 00:31:55.260
the Perceptron, which we already talked
00:31:55.260 --> 00:31:55.480
about.
00:31:55.480 --> 00:31:58.470
This was proposed by Rosenblatt in 1958.
00:31:59.850 --> 00:32:03.480
Let me read some of
00:32:03.480 --> 00:32:04.030
this out loud.
00:32:04.030 --> 00:32:06.150
So here's a 1958 New York Times
00:32:06.150 --> 00:32:07.580
article about the Perceptron.
00:32:08.310 --> 00:32:11.210
Called New Navy device learns by doing.
00:32:12.000 --> 00:32:14.720
Psychologist shows Embryo of computer
00:32:14.720 --> 00:32:16.670
designed to read and grow Wiser.
00:32:18.050 --> 00:32:20.510
The Navy revealed the
00:32:20.510 --> 00:32:22.350
embryo of an electronic computer today
00:32:22.350 --> 00:32:23.950
that it expects will be able to walk,
00:32:23.950 --> 00:32:25.810
talk, see, write and reproduce itself
00:32:25.810 --> 00:32:28.220
and be conscious of its existence.
00:32:28.980 --> 00:32:30.750
The embryo, the Weather Bureau's
00:32:30.750 --> 00:32:33.630
$2,000,000 '704' computer, learned to
00:32:33.630 --> 00:32:35.530
differentiate between right and left
00:32:35.530 --> 00:32:37.419
after 50 attempts in the Navy's
00:32:37.420 --> 00:32:38.770
demonstration for newsmen.
00:32:39.730 --> 00:32:40.270
This.
00:32:41.040 --> 00:32:43.830
I don't know why it took 50 attempts.
00:32:43.830 --> 00:32:45.520
There's only two answers.
00:32:46.240 --> 00:32:48.970
But the service said it would use this
00:32:48.970 --> 00:32:51.630
principle to build the first of its
00:32:51.630 --> 00:32:53.535
Perceptron thinking machines that will
00:32:53.535 --> 00:32:54.670
be able to read and write.
00:32:54.670 --> 00:32:56.570
It is expected to be finished in about
00:32:56.570 --> 00:32:58.920
a year at a cost of $100,000.
00:33:01.970 --> 00:33:02.605
So going on.
00:33:02.605 --> 00:33:04.860
So they pretty underestimated
00:33:04.860 --> 00:33:06.880
the complexity of artificial
00:33:06.880 --> 00:33:09.133
intelligence obviously is like we have
00:33:09.133 --> 00:33:10.670
the, we have the Perceptron, we'll be
00:33:10.670 --> 00:33:11.800
done next year with the.
00:33:12.700 --> 00:33:13.240
And.
00:33:15.620 --> 00:33:17.460
They did, though, get some of the
00:33:17.460 --> 00:33:18.023
impact right.
00:33:18.023 --> 00:33:20.155
So they said the brain is designed to
00:33:20.155 --> 00:33:21.940
remember images and information it has
00:33:21.940 --> 00:33:22.930
perceived itself.
00:33:22.930 --> 00:33:24.540
Ordinary computers remember only what
00:33:24.540 --> 00:33:26.220
has fed into them on punch cards or
00:33:26.220 --> 00:33:28.220
magnetic tape, so the information is
00:33:28.220 --> 00:33:29.210
stored in the weights of the network.
00:33:30.090 --> 00:33:31.650
Later Perceptrons will be able to
00:33:31.650 --> 00:33:33.300
recognize people and call out their
00:33:33.300 --> 00:33:35.589
names and instantly translate speech in
00:33:35.590 --> 00:33:37.860
one language to speech or writing in
00:33:37.860 --> 00:33:39.580
another language, it was predicted.
00:33:40.180 --> 00:33:44.110
So it took 70 years, but it happened.
00:33:46.150 --> 00:33:50.130
So it at least shows some insight
00:33:50.130 --> 00:33:51.780
into like what this what this
00:33:51.780 --> 00:33:53.900
technology could become.
00:33:54.880 --> 00:33:56.430
So it's a pretty interesting article.
00:33:58.120 --> 00:34:01.000
So from the Perceptron we eventually
00:34:01.000 --> 00:34:03.120
went to a two layer, two layer neural
00:34:03.120 --> 00:34:03.550
network.
00:34:03.550 --> 00:34:05.170
I think that didn't happen until the
00:34:05.170 --> 00:34:06.220
early 80s.
00:34:06.700 --> 00:34:07.260
00:34:08.120 --> 00:34:09.440
And these are more difficult to
00:34:09.440 --> 00:34:11.910
optimize. The big thing is, I mean,
00:34:11.910 --> 00:34:14.147
if you think about it before the 80s
00:34:14.147 --> 00:34:16.420
you couldn't even like store digital
00:34:16.420 --> 00:34:17.720
data in any quantities.
00:34:17.720 --> 00:34:19.320
So it's really hard to do things like.
00:34:20.350 --> 00:34:22.515
Multi layer Networks or machine
00:34:22.515 --> 00:34:23.410
learning.
00:34:23.410 --> 00:34:25.162
So that's kind of why like the machine
00:34:25.162 --> 00:34:27.520
learning in 1958 was a huge deal, even
00:34:27.520 --> 00:34:28.830
if it's in a very limited form.
00:34:31.000 --> 00:34:33.023
And then with these nonlinearities you
00:34:33.023 --> 00:34:34.550
can then learn nonlinear functions,
00:34:34.550 --> 00:34:36.220
while Perceptrons are limited to linear
00:34:36.220 --> 00:34:36.740
functions.
00:34:36.740 --> 00:34:38.520
And then you can have Multi layer
00:34:38.520 --> 00:34:40.390
neural networks where you just have
00:34:40.390 --> 00:34:41.130
more layers.
00:34:42.480 --> 00:34:43.780
And we talked about how you can
00:34:43.780 --> 00:34:46.550
optimize these Networks using a form of
00:34:46.550 --> 00:34:47.520
Gradient Descent.
00:34:48.760 --> 00:34:50.280
And in particular you do back
00:34:50.280 --> 00:34:52.270
propagation, where the
00:34:52.270 --> 00:34:54.434
Gradients are based on
00:34:54.434 --> 00:34:57.710
how the weights affect the error,
00:34:57.710 --> 00:34:59.642
and that information
00:34:59.642 --> 00:35:01.570
can be propagated back through
00:35:01.570 --> 00:35:02.130
the network.
00:35:02.970 --> 00:35:03.520
00:35:04.430 --> 00:35:06.920
And then you can optimize using
00:35:06.920 --> 00:35:08.890
stochastic gradient descent, where you
00:35:08.890 --> 00:35:10.640
find the best Update based on a small
00:35:10.640 --> 00:35:11.790
amount of data at a time.
00:35:14.670 --> 00:35:18.240
So now to get to the next Phase I need
00:35:18.240 --> 00:35:21.085
to get into MLP's applied to images.
00:35:21.085 --> 00:35:23.180
So I want to just very briefly tell you
00:35:23.180 --> 00:35:24.300
a little bit about images.
00:35:25.480 --> 00:35:27.730
So images, if you have an intensity
00:35:27.730 --> 00:35:29.842
image like what we saw for MNIST, then
00:35:29.842 --> 00:35:32.140
the image is a matrix.
00:35:32.860 --> 00:35:35.550
So the rows will be the Y position,
00:35:35.550 --> 00:35:36.942
they will be the rows of the image,
00:35:36.942 --> 00:35:38.417
the columns are the columns of the image,
00:35:38.417 --> 00:35:40.440
and the values range from zero to 1,
00:35:40.440 --> 00:35:43.235
where usually one is bright and
00:35:43.235 --> 00:35:44.140
zero is dark.
00:35:47.410 --> 00:35:49.100
If you have a color image, then you
00:35:49.100 --> 00:35:50.769
have three of these matrices, one for
00:35:50.770 --> 00:35:54.080
each color channel, and the standard
00:35:54.080 --> 00:35:55.760
way it's stored is in RGB.
00:35:55.760 --> 00:35:57.100
So you have one for the red, one for
00:35:57.100 --> 00:35:58.490
the green, one for the blue.
00:36:01.760 --> 00:36:05.200
And so in Python, an RGB image
00:36:05.200 --> 00:36:07.310
is stored as a 3 dimensional matrix.
00:36:08.560 --> 00:36:11.440
Where for example, the upper left
00:36:11.440 --> 00:36:14.983
corner of it, im[0, 0, 0], is the red value of
00:36:14.983 --> 00:36:16.360
the top left pixel.
00:36:17.590 --> 00:36:21.010
And im[y, x, c] in general is
00:36:21.430 --> 00:36:23.920
the c-th color, so c can be zero, one
00:36:23.920 --> 00:36:25.290
or two for red, green or blue,
00:36:26.320 --> 00:36:29.390
at the y-th row and the x-th column, so
00:36:29.390 --> 00:36:31.780
it's the color of a particular pixel.
00:36:32.670 --> 00:36:34.990
So that's how images are stored.
00:36:35.800 --> 00:36:38.680
in computers; if you read one in, it will be a
00:36:38.680 --> 00:36:40.490
3D matrix if it's a color image.
00:36:44.730 --> 00:36:47.780
So the wait.
00:36:47.780 --> 00:36:48.890
Did I miss something?
00:36:48.890 --> 00:36:51.705
Yes, I meant to talk about this first.
00:36:51.705 --> 00:36:53.592
So when you're analyzing images.
00:36:53.592 --> 00:36:56.450
So in the MNIST problem, we just like
00:36:56.450 --> 00:36:58.265
turn the image into a column vector so
00:36:58.265 --> 00:36:59.995
that we can apply a linear classifier
00:36:59.995 --> 00:37:00.660
to it.
00:37:00.660 --> 00:37:02.900
In that case, like there's no longer
00:37:02.900 --> 00:37:05.823
any positional structure stored in the
00:37:05.823 --> 00:37:09.920
vector, and the logistic regressor or KNN
00:37:09.920 --> 00:37:11.620
doesn't really care whether pixels were
00:37:11.620 --> 00:37:12.710
next to each other or not.
00:37:12.710 --> 00:37:14.280
It's just like treating them as like
00:37:14.280 --> 00:37:15.040
separate.
00:37:15.420 --> 00:37:17.630
Individual Input values that it's going
00:37:17.630 --> 00:37:19.520
to use to determine similarity or make
00:37:19.520 --> 00:37:20.350
some Prediction.
00:37:21.300 --> 00:37:24.121
But we can do much better analysis of
00:37:24.121 --> 00:37:26.255
images if we take into account that
00:37:26.255 --> 00:37:28.130
like local patterns in the images are
00:37:28.130 --> 00:37:28.760
important.
00:37:28.760 --> 00:37:31.260
So by like trying to find edges or
00:37:31.260 --> 00:37:33.040
finding patterns like things that look
00:37:33.040 --> 00:37:36.043
like eyes or faces, we can do much
00:37:36.043 --> 00:37:38.060
better analysis than if we just like
00:37:38.060 --> 00:37:39.680
treat it as a big long vector of
00:37:39.680 --> 00:37:40.140
values.
00:37:42.690 --> 00:37:44.139
So if you.
00:37:45.030 --> 00:37:46.770
One of the common ways of processing
00:37:46.770 --> 00:37:50.480
images is that you apply some.
00:37:50.610 --> 00:37:51.170
00:37:52.010 --> 00:37:54.800
You apply some weights.
00:37:55.470 --> 00:37:57.930
To like different little patches in the
00:37:57.930 --> 00:37:59.775
image and you take a dot product of
00:37:59.775 --> 00:38:00.780
the weights with the patch.
00:38:01.440 --> 00:38:03.130
So a simple example is that you could
00:38:03.130 --> 00:38:06.150
take the value of a pixel in the center
00:38:06.150 --> 00:38:08.510
minus the value of the pixel to the
00:38:08.510 --> 00:38:10.439
left minus the value of the pixel to
00:38:10.440 --> 00:38:11.760
its right, and that would tell you if
00:38:11.760 --> 00:38:13.700
there's an edge at that position.
00:38:16.620 --> 00:38:17.070
Right.
00:38:19.290 --> 00:38:19.760
So.
00:38:20.730 --> 00:38:22.766
When we represented again when we
00:38:22.766 --> 00:38:25.590
represented these Networks as MLPs, I
00:38:25.590 --> 00:38:28.401
mean when we represented these Networks
00:38:28.401 --> 00:38:31.870
in homework one and homework two in
00:38:31.870 --> 00:38:32.230
fact.
00:38:33.100 --> 00:38:36.050
We just represent the digits as like a
00:38:36.050 --> 00:38:38.520
long vector of values as I said, and in
00:38:38.520 --> 00:38:40.090
that case we would have like these
00:38:40.090 --> 00:38:41.340
Fully connected layers.
00:38:41.990 --> 00:38:44.060
Where we have a set of weights for each
00:38:44.060 --> 00:38:45.100
intermediate Output.
00:38:45.100 --> 00:38:46.552
That's just like a linear prediction
00:38:46.552 --> 00:38:48.640
from the from all of the inputs.
00:38:48.640 --> 00:38:50.520
So this is not yet taking into account
00:38:50.520 --> 00:38:51.660
the structure of the image.
00:38:53.730 --> 00:38:56.970
How could we take that into account, to do
00:38:56.970 --> 00:38:58.500
something more like filtering where we
00:38:58.500 --> 00:39:00.733
want to try to take advantage of the fact that
00:39:00.733 --> 00:39:02.460
the image is composed of different
00:39:02.460 --> 00:39:04.260
patches that are kind of like locally
00:39:04.260 --> 00:39:06.530
meaningful or the relative values of
00:39:06.530 --> 00:39:07.870
nearby pixels are important?
00:39:08.680 --> 00:39:11.060
We can do what's called a Convolutional
00:39:11.060 --> 00:39:11.560
network.
00:39:12.860 --> 00:39:14.460
Then, in a Convolutional network:
00:39:15.510 --> 00:39:18.060
Your weights are just analyzing a local
00:39:18.060 --> 00:39:19.585
neighborhood of the image, and by
00:39:19.585 --> 00:39:21.000
analyzing I just mean a dot product.
00:39:21.000 --> 00:39:23.489
So it's just a linear product, a linear
00:39:23.490 --> 00:39:25.986
combination of the pixel values in a
00:39:25.986 --> 00:39:28.521
local portion of the image, like a 7 by
00:39:28.521 --> 00:39:31.400
7, seven pixel by 7 pixel image patch.
00:39:33.170 --> 00:39:37.200
And if you scan like if you scan that
00:39:37.200 --> 00:39:39.290
patch or scan the weights across the
00:39:39.290 --> 00:39:42.630
image, you can then extract features or
00:39:42.630 --> 00:39:44.975
a feature for every position in the
00:39:44.975 --> 00:39:45.310
Image.
00:39:48.700 --> 00:39:50.200
And these weights can be learned if
00:39:50.200 --> 00:39:51.670
you're using a network.
00:39:52.780 --> 00:39:54.880
And so for a given set of weights, you
00:39:54.880 --> 00:39:56.725
get what's called a feature map.
00:39:56.725 --> 00:39:58.075
So this could be representing whether
00:39:58.075 --> 00:39:59.948
there's a vertical edge at each
00:39:59.948 --> 00:40:02.050
position, or horizontal edge at each
00:40:02.050 --> 00:40:03.490
position, or whether there's like a
00:40:03.490 --> 00:40:04.930
dark patch in the middle of a bright
00:40:04.930 --> 00:40:06.200
area, something like that.
00:40:08.690 --> 00:40:10.380
And if you have a bunch of these sets
00:40:10.380 --> 00:40:11.940
of Learned weights, then you can
00:40:11.940 --> 00:40:14.180
generate a bunch of feature maps, so
00:40:14.180 --> 00:40:15.490
they're just representing different
00:40:15.490 --> 00:40:16.940
things about the edges or local
00:40:16.940 --> 00:40:18.110
patterns in the Image.
00:40:21.010 --> 00:40:22.025
Here's an example.
00:40:22.025 --> 00:40:24.960
So let's say we have this edge filter
00:40:24.960 --> 00:40:25.205
here.
00:40:25.205 --> 00:40:26.820
So it's just
00:40:26.820 --> 00:40:28.520
looking for diagonal edges.
00:40:28.520 --> 00:40:30.625
Essentially whether the sum of
00:40:30.625 --> 00:40:32.200
values in the upper right is greater
00:40:32.200 --> 00:40:33.460
than the sum of values in the lower
00:40:33.460 --> 00:40:33.710
left.
00:40:34.820 --> 00:40:36.390
Kind of like scan that across the
00:40:36.390 --> 00:40:36.640
image.
00:40:36.640 --> 00:40:38.370
So for each Image position you take the
00:40:38.370 --> 00:40:39.850
dot product of these weights with the
00:40:39.850 --> 00:40:40.520
image pixels.
00:40:41.720 --> 00:40:43.220
And then that gives you some feature
00:40:43.220 --> 00:40:43.890
map.
00:40:43.890 --> 00:40:46.160
So here like dark and bright values
00:40:46.160 --> 00:40:47.950
mean that there is like a strong edge
00:40:47.950 --> 00:40:48.970
in that direction.
00:40:51.200 --> 00:40:53.220
And then you can do that with other
00:40:53.220 --> 00:40:55.140
filters to look for other kinds of
00:40:55.140 --> 00:40:57.010
edges or patterns, and you get a bunch
00:40:57.010 --> 00:40:58.960
of these feature maps and then they get
00:40:58.960 --> 00:41:00.190
stacked together as your next
00:41:00.190 --> 00:41:01.020
representation.
00:41:02.580 --> 00:41:03.605
So then we get like.
00:41:03.605 --> 00:41:05.220
The Width here is like the number of
00:41:05.220 --> 00:41:05.960
feature maps.
00:41:06.770 --> 00:41:08.350
Sometimes people call them channels.
00:41:08.350 --> 00:41:10.317
So you start with an RGB 3 channel
00:41:10.317 --> 00:41:11.803
image and then you have like a feature
00:41:11.803 --> 00:41:12.489
channel Image.
00:41:15.010 --> 00:41:16.680
And next you can do the same thing.
00:41:16.680 --> 00:41:17.615
Now your weights.
00:41:17.615 --> 00:41:19.580
Now, instead of operating on RGB
00:41:19.580 --> 00:41:21.417
values, you operate on the feature
00:41:21.417 --> 00:41:23.160
values, but you still analyze local
00:41:23.160 --> 00:41:24.860
patches of these feature maps.
00:41:25.720 --> 00:41:27.180
And produce new feature maps.
00:41:29.350 --> 00:41:31.030
And that's the basic idea of a
00:41:31.030 --> 00:41:32.480
Convolutional network.
00:41:32.480 --> 00:41:34.670
So you start with the input image.
00:41:35.630 --> 00:41:38.550
You do some Convolution using Learned
00:41:38.550 --> 00:41:39.150
weights.
00:41:39.150 --> 00:41:41.600
You apply some nonlinearity like a
00:41:41.600 --> 00:41:42.030
ReLU.
00:41:43.050 --> 00:41:45.110
And then you often do like some kind of
00:41:45.110 --> 00:41:46.280
spatial pooling.
00:41:47.300 --> 00:41:50.480
Which is basically if you take like 2
00:41:50.480 --> 00:41:52.390
by two groups of pixels in the image
00:41:52.390 --> 00:41:54.070
and you represent the average or the Max
00:41:54.070 --> 00:41:54.920
of those pixels.
00:41:55.690 --> 00:41:57.371
Then you can like reduce the size of
00:41:57.371 --> 00:41:59.009
the image or reduce the size of the
00:41:59.010 --> 00:42:01.060
feature map and still like retain a lot
00:42:01.060 --> 00:42:02.760
of the original information.
00:42:03.400 --> 00:42:05.530
And so this is like the general
00:42:05.530 --> 00:42:07.900
structure of convolutional neural
00:42:07.900 --> 00:42:10.750
networks or CNNs: you apply a
00:42:10.750 --> 00:42:13.320
filter, you apply nonlinearity, and
00:42:13.320 --> 00:42:15.360
then you like downsample the image,
00:42:15.360 --> 00:42:17.830
meaning you reduce its size by taking
00:42:17.830 --> 00:42:20.456
averages of small blocks or maxes of
00:42:20.456 --> 00:42:20.989
small blocks.
00:42:23.360 --> 00:42:25.630
And you just keep repeating that until
00:42:25.630 --> 00:42:28.090
you finally at the end have some linear
00:42:28.090 --> 00:42:29.100
layers for Prediction.
00:42:31.020 --> 00:42:33.110
So this is just again showing the basic
00:42:33.110 --> 00:42:34.980
structure you do Convolution pool, so
00:42:34.980 --> 00:42:37.320
it's basically convolved, downsample,
00:42:37.320 --> 00:42:39.590
convolve down sample et cetera and then
00:42:39.590 --> 00:42:41.710
linear layers for your final MLP
00:42:41.710 --> 00:42:42.230
Prediction.
00:42:48.040 --> 00:42:48.810
So.
00:42:49.580 --> 00:42:53.300
So the CNN was first invented
00:42:53.300 --> 00:42:54.430
by Yann LeCun.
00:42:55.220 --> 00:42:58.230
For character and digit recognition in the
00:42:58.230 --> 00:42:58.930
late 90s.
00:43:00.360 --> 00:43:01.249
I'm pretty sure.
00:43:01.249 --> 00:43:03.780
I'm pretty sure this is the first
00:43:03.780 --> 00:43:04.780
published CNN.
00:43:05.950 --> 00:43:07.830
So here it's a little misleading.
00:43:07.830 --> 00:43:09.450
It's showing a letter and then 10
00:43:09.450 --> 00:43:12.040
outputs, but it was applied to both
00:43:12.040 --> 00:43:14.370
characters and digits, so.
00:43:15.270 --> 00:43:17.500
The Input would be some like.
00:43:17.500 --> 00:43:18.950
This was also applied to MNIST.
00:43:20.030 --> 00:43:21.840
But the Input would be some digit or
00:43:21.840 --> 00:43:22.360
character.
00:43:23.390 --> 00:43:25.765
You have like 6 feature maps that were
00:43:25.765 --> 00:43:28.248
like really big filters, 28 by 28 or
00:43:28.248 --> 00:43:28.589
not.
00:43:28.590 --> 00:43:29.980
They're not necessarily big filters,
00:43:29.980 --> 00:43:30.280
sorry.
00:43:30.280 --> 00:43:32.730
They produce a 28 by 28 size image after
00:43:32.730 --> 00:43:34.420
like filtering the image or applying
00:43:34.420 --> 00:43:36.700
these filters to the image, so a value
00:43:36.700 --> 00:43:37.820
at each position.
00:43:38.690 --> 00:43:40.520
That's like inside of this patch.
00:43:41.710 --> 00:43:43.110
They have six feature maps.
00:43:43.110 --> 00:43:45.410
Then you do an average pooling, which
00:43:45.410 --> 00:43:47.220
means that you average two by two
00:43:47.220 --> 00:43:47.690
blocks.
00:43:48.720 --> 00:43:51.320
And then you get more feature maps by
00:43:51.320 --> 00:43:53.900
applying like filters to these guys,
00:43:53.900 --> 00:43:56.170
so a weighted combination of feature
00:43:56.170 --> 00:43:58.420
values at each position in local
00:43:58.420 --> 00:43:59.010
neighborhoods.
00:44:00.070 --> 00:44:01.910
So now we have 16 feature maps that are
00:44:01.910 --> 00:44:05.120
size 10 by 10 and then we again do some
00:44:05.120 --> 00:44:07.520
average pooling and then we have our
00:44:07.520 --> 00:44:09.370
linear layers of the MLP.
00:44:10.300 --> 00:44:12.470
And there were sigmoids in between
00:44:12.470 --> 00:44:12.720
them.
00:44:13.670 --> 00:44:16.245
And so that's the basic idea.
00:44:16.245 --> 00:44:17.990
So this was actually like a kind of
00:44:17.990 --> 00:44:20.070
like a big deal, but it never got
00:44:20.070 --> 00:44:22.406
pushed any further for a long time.
00:44:22.406 --> 00:44:23.019
So for.
00:44:23.850 --> 00:44:25.100
Between 1998.
00:44:25.770 --> 00:44:28.790
and 2012, there were really no more
00:44:28.790 --> 00:44:30.710
breakthroughs involving convolutional
00:44:30.710 --> 00:44:32.270
neural networks or any form of Deep
00:44:32.270 --> 00:44:32.650
Learning.
00:44:33.600 --> 00:44:37.090
Yann LeCun and
00:44:37.160 --> 00:44:41.280
Bottou and Yoshua Bengio and Andrew
00:44:41.280 --> 00:44:42.860
Ng and others were like pushing on
00:44:42.860 --> 00:44:43.410
Deep Networks.
00:44:43.410 --> 00:44:45.270
They're writing papers like why this
00:44:45.270 --> 00:44:47.870
makes sense, why it's like the right
00:44:47.870 --> 00:44:48.410
thing to do.
00:44:49.250 --> 00:44:50.700
And they're trying to get them to work,
00:44:50.700 --> 00:44:52.560
but they just kind of couldn't.
00:44:52.560 --> 00:44:55.310
Like they were hard to train and just
00:44:55.310 --> 00:44:56.950
not getting results that were better
00:44:56.950 --> 00:44:58.509
than other approaches that were better
00:44:58.510 --> 00:44:58.990
understood.
00:44:59.750 --> 00:45:02.070
So people gave up on Deep Networks, on
00:45:02.070 --> 00:45:04.370
MLPs and Convolutional Nets.
00:45:05.090 --> 00:45:06.648
And they were just doing like SVMs and
00:45:06.648 --> 00:45:08.536
random forests and
00:45:08.536 --> 00:45:09.760
things that had better theoretical
00:45:09.760 --> 00:45:10.560
justification.
00:45:11.600 --> 00:45:12.850
And some of the researchers
00:45:12.850 --> 00:45:14.590
got really frustrated, like Yann
00:45:14.590 --> 00:45:16.106
LeCun, and wrote a letter that said he
00:45:16.106 --> 00:45:17.950
was like not going to CVPR anymore
00:45:17.950 --> 00:45:20.000
because he's because they're rejecting
00:45:20.000 --> 00:45:22.270
his papers and he was quitting.
00:45:22.270 --> 00:45:24.086
I mean, he didn't quit, but he quit
00:45:24.086 --> 00:45:24.349
CVPR.
00:45:25.510 --> 00:45:27.270
I can kind of like poke at him a bit
00:45:27.270 --> 00:45:28.570
because now he's made millions of
00:45:28.570 --> 00:45:30.567
dollars and won the Turing award, so he
00:45:30.567 --> 00:45:32.240
got, he got his rewards.
00:45:35.350 --> 00:45:39.130
So all this changed in 2012.
00:45:39.780 --> 00:45:41.385
And one of the things that happened is
00:45:41.385 --> 00:45:43.633
that this big data set was created by
00:45:43.633 --> 00:45:45.166
Fei-Fei Li and her students.
00:45:45.166 --> 00:45:48.278
She was actually at UIUC and then she
00:45:48.278 --> 00:45:49.590
went to Princeton and then she went to
00:45:49.590 --> 00:45:49.890
Stanford.
00:45:52.110 --> 00:45:56.140
There were fourteen million, so they
00:45:56.140 --> 00:45:58.140
got a ton of images, a ton of different
00:45:58.140 --> 00:45:58.790
classes.
00:45:59.530 --> 00:46:00.980
And they labeled them.
00:46:00.980 --> 00:46:02.990
So it was this enormous at the end,
00:46:02.990 --> 00:46:06.250
this enormous data set that had 1.2
00:46:06.250 --> 00:46:09.330
million Training images in 1000
00:46:09.330 --> 00:46:10.180
different classes.
00:46:10.180 --> 00:46:12.090
So a lot of data to learn from.
00:46:13.430 --> 00:46:15.440
A lot of researchers weren't like all
00:46:15.440 --> 00:46:16.830
that interested in this because
00:46:16.830 --> 00:46:18.810
Classification is a relatively simple
00:46:18.810 --> 00:46:21.140
problem compared to object detection or
00:46:21.140 --> 00:46:22.980
segmentation or other kinds of vision
00:46:22.980 --> 00:46:23.420
problems.
00:46:25.180 --> 00:46:26.660
But there were challenges that were
00:46:26.660 --> 00:46:28.160
held year to year.
00:46:29.950 --> 00:46:33.720
And so in one of these challenges, the
00:46:33.720 --> 00:46:35.740
2012 ImageNet Challenge.
00:46:36.720 --> 00:46:38.090
There are a lot of methods that were
00:46:38.090 --> 00:46:39.710
proposed and they all got pretty
00:46:39.710 --> 00:46:41.090
similar results.
00:46:41.090 --> 00:46:44.347
So you can see one of the methods got
00:46:44.347 --> 00:46:46.890
35% error, one got 30% error, these
00:46:46.890 --> 00:46:49.280
others got like maybe 27% error.
00:46:50.440 --> 00:46:54.520
And then there is one more that got 15%
00:46:54.520 --> 00:46:54.930
error.
00:46:55.860 --> 00:46:59.210
And it's like if you see for a couple
00:46:59.210 --> 00:47:01.630
years, everybody's getting like 25 to
00:47:01.630 --> 00:47:03.640
30% error and then all of a sudden
00:47:03.640 --> 00:47:05.580
somebody gets 15% error.
00:47:05.580 --> 00:47:07.160
That's like a big difference.
00:47:07.160 --> 00:47:08.717
It's like, what the heck happened?
00:47:08.717 --> 00:47:09.458
How is that?
00:47:09.458 --> 00:47:10.760
How is that possible?
00:47:11.630 --> 00:47:11.930
So.
00:47:13.740 --> 00:47:17.180
And I was actually at this workshop at
00:47:17.180 --> 00:47:21.740
ECCV in France, in Marseille, I think.
00:47:22.450 --> 00:47:25.260
And I remember it like people were
00:47:25.260 --> 00:47:25.510
pretty.
00:47:25.510 --> 00:47:27.113
Everyone was talking about it and was
00:47:27.113 --> 00:47:28.090
like, what does this mean?
00:47:28.090 --> 00:47:29.480
Did Deep Learning finally work?
00:47:29.480 --> 00:47:31.910
And, like, now we have to start paying
00:47:31.910 --> 00:47:33.990
attention to these people?
00:47:33.990 --> 00:47:35.543
So they're really astonished.
00:47:35.543 --> 00:47:37.750
I mean, everyone was really astonished.
00:47:37.750 --> 00:47:40.280
And this was what was behind us, this
00:47:40.280 --> 00:47:40.960
AlexNet.
00:47:41.890 --> 00:47:42.830
So AlexNet.
00:47:43.540 --> 00:47:46.010
It was the same kind of network as
00:47:46.010 --> 00:47:48.950
LeCun's network with just some changes.
00:47:48.950 --> 00:47:52.373
So same kind of Convolution and pool.
00:47:52.373 --> 00:47:54.610
Convolution and pool, followed by linear
00:47:54.610 --> 00:47:55.080
layers.
00:47:56.080 --> 00:47:58.673
But one difference is that so there's
00:47:58.673 --> 00:48:00.650
important differences and non-important
00:48:00.650 --> 00:48:02.220
differences and at the time people
00:48:02.220 --> 00:48:03.456
didn't really know what was important
00:48:03.456 --> 00:48:04.270
and what wasn't.
00:48:04.270 --> 00:48:07.306
But a non important difference was Max
00:48:07.306 --> 00:48:08.740
pooling versus average pooling.
00:48:08.740 --> 00:48:10.950
Taking the Max over a little window, little
00:48:10.950 --> 00:48:12.470
groups of pixels instead of the average
00:48:12.470 --> 00:48:13.350
when you downsample.
00:48:14.440 --> 00:48:16.040
An important difference was ReLU
00:48:16.040 --> 00:48:18.140
nonlinearity instead of Sigmoid.
00:48:18.140 --> 00:48:19.820
That made it much more optimizable.
00:48:21.010 --> 00:48:22.550
An important difference was that there
00:48:22.550 --> 00:48:24.340
was a lot more data to learn from.
00:48:24.340 --> 00:48:27.010
You had these thousand classes and 1.2
00:48:27.010 --> 00:48:28.680
million images where previously
00:48:28.680 --> 00:48:30.360
datasets were created that were just
00:48:30.360 --> 00:48:31.950
big enough for the current algorithms.
00:48:32.560 --> 00:48:35.170
So actually like people found that you
00:48:35.170 --> 00:48:38.000
might have like 10,000
00:48:38.000 --> 00:48:39.436
images in your data set and people
00:48:39.436 --> 00:48:40.660
found well if you make it bigger,
00:48:40.660 --> 00:48:42.300
things don't really get better anyway.
00:48:42.300 --> 00:48:44.370
So no point wasting all that time
00:48:44.370 --> 00:48:45.390
making a bigger dataset.
00:48:46.820 --> 00:48:48.690
But you needed that data for these
00:48:48.690 --> 00:48:49.220
Networks.
00:48:50.640 --> 00:48:54.800
They made a bigger model than Yann
00:48:54.800 --> 00:48:55.560
LeCun's Model.
00:48:56.270 --> 00:48:57.770
60 million parameters.
00:48:57.770 --> 00:49:00.260
It's actually a really big Model, even
00:49:00.260 --> 00:49:01.440
by today's standards.
00:49:01.440 --> 00:49:02.990
You often use smaller models than this.
00:49:04.590 --> 00:49:06.910
I mean, it's not really big, but it's
00:49:06.910 --> 00:49:09.190
pretty big.
00:49:09.190 --> 00:49:10.940
And then they had a GPU implementation
00:49:10.940 --> 00:49:13.120
which gave a 50x speedup over the CPU.
00:49:13.120 --> 00:49:14.280
So that meant that you could do the
00:49:14.280 --> 00:49:16.720
optimization where before they Trained
00:49:16.720 --> 00:49:18.020
on 2 GPUs for a week.
00:49:18.020 --> 00:49:20.300
But if you imagine a 50x speedup, it
00:49:20.300 --> 00:49:23.680
would have taken a year on CPUs.
00:49:24.300 --> 00:49:26.290
So obviously, like if you're a network,
00:49:26.290 --> 00:49:28.450
if your Model takes a year to train,
00:49:28.450 --> 00:49:30.220
that's kind of like a little too long.
00:49:32.230 --> 00:49:33.640
And then they did this Dropout
00:49:33.640 --> 00:49:35.150
regularization, which I won't talk
00:49:35.150 --> 00:49:36.740
about because it's actually turned out
00:49:36.740 --> 00:49:37.650
not to be all that important.
00:49:38.370 --> 00:49:40.330
But it is something worth knowing if
00:49:40.330 --> 00:49:41.920
you want to be a Deep Learning expert.
00:49:44.530 --> 00:49:47.340
What enabled the breakthrough is this
00:49:47.340 --> 00:49:50.660
ReLU Activation enabled large models to
00:49:50.660 --> 00:49:52.420
be optimized because the Gradients more
00:49:52.420 --> 00:49:53.900
easily flow through the network, where
00:49:53.900 --> 00:49:55.620
the Sigmoid like squeezes off the
00:49:55.620 --> 00:49:56.460
Gradients at both ends.
00:49:58.080 --> 00:50:00.300
Then the ImageNet data set provided
00:50:00.300 --> 00:50:02.861
diverse and massive annotation to take
00:50:02.861 --> 00:50:05.068
advantage of, so that
00:50:05.068 --> 00:50:08.170
the models could take advantage of
00:50:08.170 --> 00:50:09.530
this
00:50:09.530 --> 00:50:11.310
large data; they need each other.
00:50:12.350 --> 00:50:14.640
And then there's GPU processing that
00:50:14.640 --> 00:50:16.510
made the optimization
00:50:16.510 --> 00:50:17.080
practicable.
00:50:17.080 --> 00:50:19.450
So you needed like basically all three
00:50:19.450 --> 00:50:21.110
of these ingredients at once in order
00:50:21.110 --> 00:50:21.980
to make the breakthrough.
00:50:21.980 --> 00:50:23.210
So that's why, even though there were
00:50:21.980 --> 00:50:23.210
people pushing on it, it didn't happen.
00:50:26.150 --> 00:50:26.990
It took a while.
00:50:29.280 --> 00:50:31.020
So it wasn't just ImageNet and
00:50:31.020 --> 00:50:31.930
Classification.
00:50:32.840 --> 00:50:34.120
It turned out all kinds of other
00:50:34.120 --> 00:50:36.280
problems also benefited tremendously
00:50:36.280 --> 00:50:38.550
from Deep Learning, and in pretty
00:50:38.550 --> 00:50:39.250
simple ways.
00:50:39.250 --> 00:50:42.210
So, like, two years later,
00:50:42.210 --> 00:50:43.990
Girshick et al.
00:50:44.140 --> 00:50:44.690
00:50:45.670 --> 00:50:48.380
Found that if you take a network that
00:50:48.380 --> 00:50:50.400
has been trained on ImageNet and you
00:50:50.400 --> 00:50:52.260
use it for object detection.
00:50:52.260 --> 00:50:54.590
So you basically just use it
00:50:54.590 --> 00:50:56.550
to analyze like each patch of the image
00:50:56.550 --> 00:50:58.720
and make predictions off of those
00:50:58.720 --> 00:51:01.225
features that are generated from the
00:51:01.225 --> 00:51:01.500
ImageNet.
00:51:02.250 --> 00:51:04.520
Network for each patch.
00:51:04.520 --> 00:51:06.945
Then they were able to get a big boost
00:51:06.945 --> 00:51:08.040
in Detection.
00:51:08.040 --> 00:51:10.170
So again, if you think about it, this
00:51:10.170 --> 00:51:12.620
is the Dalal Triggs detector that I
00:51:12.620 --> 00:51:14.710
talked about in the context of SVM.
00:51:16.230 --> 00:51:17.690
And then there's like these Deformable
00:51:17.690 --> 00:51:19.440
parts models which are like more
00:51:19.440 --> 00:51:21.700
complex models modeling the parts of
00:51:21.700 --> 00:51:22.260
the objects.
00:51:23.080 --> 00:51:25.570
You get some improvement over a 6 year
00:51:25.570 --> 00:51:28.920
period, from 0.2 to 0.4.
00:51:28.920 --> 00:51:29.940
Higher is better here.
00:51:30.720 --> 00:51:32.770
And then in one year it goes from 0.4 to
00:51:32.770 --> 00:51:36.170
0.6, so again a huge jump, and then this
00:51:36.170 --> 00:51:39.960
rapidly shot up even higher in
00:51:39.960 --> 00:51:40.610
following years.
00:51:42.160 --> 00:51:43.430
And then there are papers like this
00:51:43.430 --> 00:51:45.240
that showed, hey, if you just take the
00:51:45.240 --> 00:51:47.890
features from this network that's
00:51:47.890 --> 00:51:50.400
trained on Imagenet and you apply it to
00:51:50.400 --> 00:51:52.350
a whole range of Classification tasks.
00:51:53.010 --> 00:51:55.810
It outperforms the classifiers that
00:51:55.810 --> 00:51:58.250
had handcrafted features for
00:51:58.250 --> 00:51:59.300
each of these data sets.
00:52:00.280 --> 00:52:02.790
So basically just like everything was
00:52:02.790 --> 00:52:04.970
being reset like expectations and what
00:52:04.970 --> 00:52:08.360
kind of performance is achievable and
00:52:08.360 --> 00:52:09.925
Deep Networks were outperforming
00:52:09.925 --> 00:52:10.580
everything.
00:52:13.370 --> 00:52:13.780
So.
00:52:14.650 --> 00:52:17.350
I'm not going to take the full break,
00:52:17.350 --> 00:52:19.390
sorry, but I will show you this video.
00:52:20.860 --> 00:52:22.610
So it was kind of, it was a pretty
00:52:22.610 --> 00:52:23.640
interesting time.
00:52:23.640 --> 00:52:26.595
It's really a Deep, it's truly like a
00:52:26.595 --> 00:52:28.230
Deep Learning revolution for machine
00:52:28.230 --> 00:52:29.180
learning.
00:52:29.180 --> 00:52:30.980
All the other methods and concepts are
00:52:30.980 --> 00:52:34.150
still applicable, but a lot of the high
00:52:34.150 --> 00:52:36.180
performance is coming out of the use of
00:52:36.180 --> 00:52:37.620
big data and Deep Learning.
00:52:37.620 --> 00:52:37.950
Question.
00:52:45.560 --> 00:52:46.510
How do you annotate them?
00:52:48.240 --> 00:52:50.040
So I think they use what's called
00:52:50.040 --> 00:52:51.410
Amazon Mechanical Turk.
00:52:51.410 --> 00:52:53.990
So that's like a crowdsourcing platform
00:52:53.990 --> 00:52:55.050
where you can put up.
00:52:56.050 --> 00:52:58.110
Somebody like tabs through images and
00:52:58.110 --> 00:53:00.730
you pay them to.
00:53:00.840 --> 00:53:01.430
Label them.
00:53:02.220 --> 00:53:04.065
But they first, So what they did is
00:53:04.065 --> 00:53:04.570
they actually.
00:53:04.570 --> 00:53:05.910
It's not a stupid question by the way.
00:53:05.910 --> 00:53:07.560
It's like how you annotate, how do you
00:53:07.560 --> 00:53:07.980
get data.
00:53:07.980 --> 00:53:09.710
Annotation is like the key problem in
00:53:09.710 --> 00:53:10.380
applications.
00:53:11.680 --> 00:53:12.310
But.
00:53:14.080 --> 00:53:16.000
What they did is they first they use
00:53:16.000 --> 00:53:18.870
WordNet to get a set of like different
00:53:18.870 --> 00:53:21.680
nouns and then they use image search to
00:53:21.680 --> 00:53:23.280
download images that correspond to
00:53:23.280 --> 00:53:24.320
those nouns.
00:53:24.320 --> 00:53:25.829
So then they needed people to like
00:53:25.830 --> 00:53:27.565
curate the data to say whether or not
00:53:27.565 --> 00:53:29.250
like if they searched for.
00:53:30.300 --> 00:53:32.640
For golden retriever for example, like
00:53:32.640 --> 00:53:34.183
make sure that it's actually a golden
00:53:34.183 --> 00:53:36.200
retriever, so kind of clean the labels
00:53:36.200 --> 00:53:38.580
rather than assign it to one out of
00:53:38.580 --> 00:53:39.200
1000 labels.
00:53:40.280 --> 00:53:41.870
But it was a pretty massive project.
00:53:42.710 --> 00:53:42.930
Yeah.
00:53:45.130 --> 00:53:49.140
So at the time, it felt like computer
00:53:49.140 --> 00:53:50.409
vision researchers were like the
00:53:50.410 --> 00:53:52.921
samurai, like you like Learned all
00:53:52.921 --> 00:53:54.940
these, made friends with the pixels you
00:53:54.940 --> 00:53:56.930
had, learned all these feature
00:53:56.930 --> 00:53:57.450
representations.
00:53:57.450 --> 00:53:59.430
You Applied your expertise to solve the
00:53:59.430 --> 00:53:59.880
problems.
00:54:00.940 --> 00:54:02.530
And then big data came along.
00:54:03.640 --> 00:54:05.510
And Deep Learning.
00:54:06.360 --> 00:54:07.920
And it's not that inappropriate.
00:54:07.920 --> 00:54:08.550
Don't worry.
00:54:11.290 --> 00:54:12.280
And.
00:54:13.140 --> 00:54:15.040
It was like this scene in the Last
00:54:15.040 --> 00:54:15.780
samurai.
00:54:16.720 --> 00:54:18.610
Where there's these like.
00:54:19.270 --> 00:54:21.680
Craftsmen of war and of combat.
00:54:21.680 --> 00:54:24.097
And then the other side buys these
00:54:24.097 --> 00:54:27.060
Gatling guns and just pours bullets
00:54:27.060 --> 00:54:28.400
into the Gatling guns.
00:54:29.720 --> 00:54:32.120
And just mows down the samurai.
00:54:37.180 --> 00:54:39.150
So that was basically Deep Learning.
00:54:39.150 --> 00:54:40.420
It's like you no longer like
00:54:40.420 --> 00:54:42.090
handcrafting these features and
00:54:42.090 --> 00:54:43.840
applying all of this art and knowledge.
00:54:43.840 --> 00:54:45.516
You just have this big network and you
00:54:45.516 --> 00:54:47.865
just like pour in data and it totally
00:54:47.865 --> 00:54:49.360
like massacres all the other
00:54:49.360 --> 00:54:50.220
algorithms.
00:54:58.600 --> 00:54:59.210
Yeah.
00:55:10.130 --> 00:55:12.380
What is the next thing?
00:55:17.790 --> 00:55:20.040
So all right, so in my personal
00:55:20.040 --> 00:55:23.350
opinion, so to me the limitation
00:55:23.350 --> 00:55:25.340
there's two major limitations of Deep
00:55:25.340 --> 00:55:25.690
Learning.
00:55:26.470 --> 00:55:28.060
One is that the Networks.
00:55:28.060 --> 00:55:30.535
There's only there's one kind of
00:55:30.535 --> 00:55:31.460
network structure.
00:55:31.460 --> 00:55:33.450
All the information is encoded within
00:55:33.450 --> 00:55:34.440
the weights of the network.
00:55:35.330 --> 00:55:38.270
For humans, for example, we actually
00:55:38.270 --> 00:55:39.340
have different kinds of memory
00:55:39.340 --> 00:55:40.070
structures.
00:55:40.070 --> 00:55:42.440
We have like the ability to remember
00:55:42.440 --> 00:55:43.245
independent facts.
00:55:43.245 --> 00:55:45.300
We also have our implicit memory, which
00:55:45.300 --> 00:55:46.659
guides our action and like is
00:55:46.660 --> 00:55:49.400
immediately like kind of like
00:55:49.400 --> 00:55:51.260
accumulates a lot of information.
00:55:51.260 --> 00:55:53.550
We have muscle memory, which is based
00:55:53.550 --> 00:55:55.180
on repetition, like reinforcement
00:55:55.180 --> 00:55:55.730
learning.
00:55:55.730 --> 00:55:57.930
And that muscle memory, like never goes
00:55:57.930 --> 00:55:58.470
away.
00:55:58.470 --> 00:56:00.110
It's retained for like 20 years.
00:56:00.110 --> 00:56:01.760
So we have many different memory
00:56:01.760 --> 00:56:04.350
systems in our bodies and brains.
00:56:05.070 --> 00:56:07.530
But the memory systems used by Deep
00:56:07.530 --> 00:56:09.170
Learning are homogeneous.
00:56:09.170 --> 00:56:10.720
So I think like figuring out how do we
00:56:10.720 --> 00:56:12.713
create more heterogeneous memory
00:56:12.713 --> 00:56:14.950
systems that can have different
00:56:14.950 --> 00:56:16.970
advantages, but work together to solve
00:56:16.970 --> 00:56:18.740
tasks is one thing.
00:56:19.620 --> 00:56:22.360
Another is that the systems are still
00:56:22.360 --> 00:56:23.830
essentially pattern recognition.
00:56:23.830 --> 00:56:25.310
So you have what are called sequence-
00:56:25.310 --> 00:56:27.380
to-sequence Networks, for example, where
00:56:27.380 --> 00:56:29.411
text comes in and text goes out, or an
00:56:29.411 --> 00:56:31.469
image comes in and text goes out, or
00:56:31.470 --> 00:56:33.059
text comes in and an image comes out.
00:56:33.970 --> 00:56:35.330
But they're like one shot.
00:56:36.020 --> 00:56:37.543
Whereas for a lot of things that we do,
00:56:37.543 --> 00:56:39.625
if you're writing, or if you're going to,
00:56:39.625 --> 00:56:40.750
I don't know, order a plane
00:56:40.750 --> 00:56:42.017
ticket or something, there's a bunch of
00:56:42.017 --> 00:56:43.425
steps that you go through.
00:56:43.425 --> 00:56:46.410
And so you make a plan, you execute
00:56:46.410 --> 00:56:48.100
that plan, and each of those steps
00:56:48.100 --> 00:56:49.550
involves some pattern recognition and
00:56:49.550 --> 00:56:50.140
various things.
00:56:50.740 --> 00:56:52.720
So there's a lot of compositionality to
00:56:52.720 --> 00:56:54.770
the kinds of problems that we solve
00:56:54.770 --> 00:56:55.310
day-to-day.
00:56:55.930 --> 00:56:58.635
And that compositionality is
00:56:58.635 --> 00:57:00.590
only handled to a very limited
00:57:00.590 --> 00:57:03.060
extent by these Networks by
00:57:03.060 --> 00:57:03.600
themselves.
00:57:03.600 --> 00:57:05.980
So I think better ways to form
00:57:05.980 --> 00:57:07.570
plans, to execute them
00:57:08.430 --> 00:57:11.420
in terms of different steps, and to make
00:57:11.420 --> 00:57:14.420
large problems more modular are also
00:57:14.420 --> 00:57:14.760
important.
00:57:20.090 --> 00:57:20.420
OK.
00:57:21.760 --> 00:57:22.782
So, all right.
00:57:22.782 --> 00:57:23.392
So I'm going to.
00:57:23.392 --> 00:57:24.920
I'm going to keep going because I want
00:57:24.920 --> 00:57:25.950
to.
00:57:26.400 --> 00:57:27.260
Because I want to.
00:57:29.290 --> 00:57:32.500
So the next part is optimization, so.
00:57:33.470 --> 00:57:34.720
The.
00:57:36.100 --> 00:57:39.124
So we talked previously about SGD and
00:57:39.124 --> 00:57:40.910
the optimization approaches are just
00:57:40.910 --> 00:57:42.767
like extensions of SGD.
00:57:42.767 --> 00:57:45.610
And these really cool illustrations, or
00:57:45.610 --> 00:57:47.370
I think they're cool, helpful
00:57:47.370 --> 00:57:49.630
illustrations, are from a data
00:57:49.630 --> 00:57:51.880
science site where somebody really
00:57:51.880 --> 00:57:53.620
nicely explains the different
00:57:53.620 --> 00:57:55.340
optimization methods
00:57:56.180 --> 00:57:57.760
and provides these illustrations.
00:57:59.690 --> 00:58:00.440
So.
00:58:02.060 --> 00:58:05.090
So, these are the different variants.
00:58:05.090 --> 00:58:07.710
All of these are like stochastic
00:58:07.710 --> 00:58:09.660
gradient descent, so I don't need to
00:58:09.660 --> 00:58:10.650
talk about the algorithm.
00:58:10.650 --> 00:58:12.607
They're all based on computing some
00:58:12.607 --> 00:58:14.900
Gradient of the loss with respect to
00:58:14.900 --> 00:58:15.500
your weights.
00:58:16.180 --> 00:58:18.170
And then they just differ in how you
00:58:18.170 --> 00:58:19.380
update the weights given that
00:58:19.380 --> 00:58:20.030
information.
00:58:21.070 --> 00:58:23.250
So this is basic SGD, which we talked
00:58:23.250 --> 00:58:25.660
about: you take the Gradient
00:58:25.660 --> 00:58:27.020
of your loss with respect to the
00:58:27.020 --> 00:58:27.540
weights.
00:58:27.540 --> 00:58:29.346
You multiply it by negative eta,
00:58:29.346 --> 00:58:31.170
the learning rate, and
00:58:31.170 --> 00:58:32.450
then you add that to your previous weight
00:58:32.450 --> 00:58:32.750
values.
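For concreteness, here is a minimal sketch of that update rule in PyTorch-style Python; the tensor size, the learning rate of 0.01, and the random stand-in gradient are illustrative assumptions, not values from the lecture:

```python
import torch

# Vanilla SGD update: w <- w - eta * g, where g = dL/dw.
eta = 0.01                 # learning rate (illustrative choice)
w = torch.randn(10)        # current weights
grad = torch.randn(10)     # stand-in for the gradient of the loss

w = w - eta * grad         # step in the direction of steepest descent
```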
00:58:34.010 --> 00:58:35.460
And this is a nice illustration of
00:58:35.460 --> 00:58:35.610
like.
00:58:36.400 --> 00:58:37.970
Compute the gradient with respect to
00:58:37.970 --> 00:58:39.660
each weight, and then you step in both
00:58:39.660 --> 00:58:40.880
those directions, right?
00:58:43.110 --> 00:58:43.300
Right.
00:58:43.300 --> 00:58:45.850
The next step is Momentum.
00:58:45.850 --> 00:58:47.914
So Momentum is what's letting this ball
00:58:47.914 --> 00:58:49.010
roll up the hill.
00:58:49.010 --> 00:58:51.667
If you just have SGD, then you can roll
00:58:51.667 --> 00:58:53.120
down the hill, but you'll never like
00:58:53.120 --> 00:58:54.494
really roll up it again because you
00:58:54.494 --> 00:58:56.229
don't have any Momentum, because the
00:58:56.230 --> 00:58:56.981
Gradient is up.
00:58:56.981 --> 00:58:58.660
You don't, you don't go up, you only go
00:58:58.660 --> 00:58:58.820
down.
00:59:00.710 --> 00:59:05.360
Momentum is important because in these
00:59:05.360 --> 00:59:08.010
Multi layer Networks you don't just
00:59:08.010 --> 00:59:11.000
have like one good low solution, a big
00:59:11.000 --> 00:59:12.823
bowl, you have like lots of pockets in
00:59:12.823 --> 00:59:14.780
the bowl so that the solution space
00:59:14.780 --> 00:59:16.483
looks more like an egg carton than a
00:59:16.483 --> 00:59:16.669
bowl.
00:59:16.670 --> 00:59:18.230
There's like lots of little pits.
00:59:19.120 --> 00:59:20.375
So you want to be able to roll through
00:59:20.375 --> 00:59:21.750
the little pits and get into the big
00:59:21.750 --> 00:59:21.990
pits?
00:59:23.390 --> 00:59:24.766
I guess it's shown here.
00:59:24.766 --> 00:59:28.355
So here the purple ball has Momentum
00:59:28.355 --> 00:59:30.300
and the blue ball does not
00:59:30.300 --> 00:59:30.711
have Momentum.
00:59:30.711 --> 00:59:32.600
So the blue ball as soon as it rolls
00:59:32.600 --> 00:59:34.070
into like a little dip, it gets stuck
00:59:34.070 --> 00:59:34.250
there.
00:59:35.810 --> 00:59:38.010
Momentum is pretty simple to calculate:
00:59:38.010 --> 00:59:40.163
one way to calculate it is
00:59:40.163 --> 00:59:43.360
just your Gradient plus something like
00:59:43.360 --> 00:59:45.510
0.9 times the last Gradient.
00:59:45.510 --> 00:59:46.800
So that way, like the previous
00:59:46.800 --> 00:59:47.990
Gradient, you keep moving in that
00:59:47.990 --> 00:59:48.750
direction a little bit.
00:59:49.560 --> 00:59:51.310
This is another way to represent it,
00:59:51.310 --> 00:59:52.690
where we write a Momentum
00:59:52.690 --> 00:59:55.750
variable m(w, t), which is beta times
00:59:55.750 --> 00:59:57.590
the last value m(w, t-1), with beta for
00:59:57.590 --> 00:59:59.940
example 0.9, plus the current Gradient.
01:00:01.120 --> 01:00:02.603
So you just keep moving.
01:00:02.603 --> 01:00:04.060
You prefer to keep moving in the
01:00:04.060 --> 01:00:04.760
current direction.
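As a rough sketch of that momentum update (variable names and the synthetic gradients are illustrative; beta = 0.9 as mentioned):

```python
import torch

# Momentum update: m_t = beta * m_{t-1} + g_t, then w <- w - eta * m_t.
beta, eta = 0.9, 0.01
w = torch.randn(10)
m = torch.zeros_like(w)     # momentum buffer, starts at rest

for step in range(100):
    grad = torch.randn(10)  # stand-in for dL/dw at this step
    m = beta * m + grad     # prefer to keep moving in the recent direction
    w = w - eta * m
```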
01:00:05.560 --> 01:00:08.150
Even when you call SGD, you'll
01:00:08.150 --> 01:00:10.240
usually want to pass Momentum to PyTorch,
01:00:10.240 --> 01:00:12.050
because it's pretty important
01:00:12.050 --> 01:00:12.800
(torch.optim.SGD actually defaults momentum to 0).
01:00:13.440 --> 01:00:15.340
A typical value is 0.9
01:00:15.340 --> 01:00:15.690
for beta.
01:00:18.890 --> 01:00:19.280
Question.
01:00:25.810 --> 01:00:27.880
It cannot go up.
01:00:27.880 --> 01:00:30.520
So with vanilla SGD, you're always
01:00:30.520 --> 01:00:31.330
trying to go down.
01:00:32.040 --> 01:00:33.890
So you get into a little hole, you go
01:00:33.890 --> 01:00:35.020
down into the little hole, and you
01:00:35.020 --> 01:00:35.930
can't get back out of it.
01:00:36.610 --> 01:00:38.330
But Momentum, if it's a little hole and
01:00:38.330 --> 01:00:39.970
you've been rolling fast, you roll up
01:00:39.970 --> 01:00:41.630
out of it and you can get into other
01:00:41.630 --> 01:00:42.040
ones.
01:00:42.040 --> 01:00:42.840
Question.
01:00:56.070 --> 01:00:57.210
That's a good question.
01:00:57.210 --> 01:00:58.820
So I think the question is like, could
01:00:58.820 --> 01:01:00.640
you end up getting into a better
01:01:00.640 --> 01:01:02.560
solution and rolling out of it and then
01:01:02.560 --> 01:01:03.780
ending up in a worse one?
01:01:05.100 --> 01:01:05.990
That can happen.
01:01:06.860 --> 01:01:07.950
It's.
01:01:07.950 --> 01:01:09.360
I guess it's less likely though,
01:01:09.360 --> 01:01:10.940
because the larger holes usually have
01:01:10.940 --> 01:01:13.180
like bigger basins too, but.
01:01:13.300 --> 01:01:17.920
One thing people do, partially for
01:01:17.920 --> 01:01:20.230
that but more for overfitting,
01:01:20.230 --> 01:01:21.950
is checkpoints.
01:01:21.950 --> 01:01:23.490
So you might save your Model at various
01:01:23.490 --> 01:01:25.662
points and at the end choose the model
01:01:25.662 --> 01:01:28.440
that had the lowest validation loss
01:01:28.440 --> 01:01:30.160
or the lowest validation error.
01:01:31.320 --> 01:01:33.230
So that even if you were to further
01:01:33.230 --> 01:01:34.930
optimize into a bad solution, you can
01:01:34.930 --> 01:01:35.640
go back.
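A hypothetical sketch of that checkpointing pattern; `train_one_epoch` and `validate` are placeholder stubs standing in for your own training and evaluation loops:

```python
import copy
import torch
import torch.nn as nn

def train_one_epoch(model):          # placeholder for your training pass
    pass

def validate(model):                 # placeholder: return validation loss
    return torch.rand(1).item()

model = nn.Linear(10, 2)             # placeholder model
best_loss = float("inf")
best_state = None

for epoch in range(20):
    train_one_epoch(model)
    val_loss = validate(model)
    if val_loss < best_loss:         # keep the lowest-validation-loss weights
        best_loss = val_loss
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)    # roll back to the best checkpoint
```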
01:01:35.640 --> 01:01:37.250
There's also more complex
01:01:37.250 --> 01:01:39.940
algorithms, I forget what it's
01:01:39.940 --> 01:01:41.300
called now, where you go back and
01:01:41.300 --> 01:01:43.770
forth: you take really
01:01:43.770 --> 01:01:45.270
aggressive steps and then you
01:01:45.270 --> 01:01:47.436
backtrack if you need to, and then take
01:01:47.436 --> 01:01:48.909
more aggressive steps and
01:01:48.909 --> 01:01:50.610
backtrack; it's "lookahead" or something
01:01:50.610 --> 01:01:50.750
like that.
01:01:53.020 --> 01:01:54.730
So there's like more complex algorithms
01:01:54.730 --> 01:01:55.680
that try to deal with that.
01:01:58.700 --> 01:02:01.270
So the other thing by the way that
01:02:01.270 --> 01:02:03.705
helps with this is the Stochastic part
01:02:03.705 --> 01:02:04.550
of SGD.
01:02:04.550 --> 01:02:07.000
Different little samples of data will
01:02:07.000 --> 01:02:08.300
actually have different Gradients.
01:02:08.300 --> 01:02:10.370
So what might be a pit for one data
01:02:10.370 --> 01:02:12.160
sample is not a pit for another data
01:02:12.160 --> 01:02:12.460
sample.
01:02:13.120 --> 01:02:15.620
And so that can help you get out of
01:02:15.620 --> 01:02:19.390
little pits and help with the optimization
01:02:19.390 --> 01:02:19.910
that way too.
01:02:22.830 --> 01:02:24.050
Alright, so there's another thing.
01:02:24.050 --> 01:02:25.865
Now we're not doing Momentum anymore.
01:02:25.865 --> 01:02:29.060
We're just trying to regularize our
01:02:29.060 --> 01:02:30.060
Descent.
01:02:31.330 --> 01:02:34.863
So the intuition behind this is that
01:02:34.863 --> 01:02:37.609
in some cases, some
01:02:37.610 --> 01:02:39.230
weights might not be initialized very
01:02:39.230 --> 01:02:39.520
well.
01:02:40.240 --> 01:02:42.027
And so they're not really like
01:02:42.027 --> 01:02:44.343
contributing to the Output very much.
01:02:44.343 --> 01:02:46.039
And as a result they don't get
01:02:46.040 --> 01:02:47.882
optimized much because they're not
01:02:47.882 --> 01:02:48.168
contributing.
01:02:48.168 --> 01:02:50.145
So they don't get, they basically don't
01:02:50.145 --> 01:02:51.360
get touched, they get left alone.
01:02:52.350 --> 01:02:54.840
The idea of AdaGrad is that you want
01:02:54.840 --> 01:02:57.410
to allow each of the
01:02:57.410 --> 01:02:59.960
weights to be optimized, and so:
01:03:00.590 --> 01:03:02.920
You keep track of the total path length
01:03:02.920 --> 01:03:03.649
of those weights.
01:03:03.650 --> 01:03:05.399
So how have the weights changed
01:03:05.399 --> 01:03:05.694
over time?
01:03:05.694 --> 01:03:08.117
And if the weights have changed a lot
01:03:08.117 --> 01:03:10.830
over time, then you reduce how much
01:03:10.830 --> 01:03:12.120
you're going to move those particular
01:03:12.120 --> 01:03:14.080
weights, and if they haven't changed
01:03:14.080 --> 01:03:16.500
very much over time, then you allow
01:03:16.500 --> 01:03:17.730
those weights to move more.
01:03:18.750 --> 01:03:20.310
So in terms of the math:
01:03:21.220 --> 01:03:23.230
you keep track of this magnitude, which
01:03:23.230 --> 01:03:25.627
is the path length, so it's just like
01:03:25.627 --> 01:03:26.910
the length of these curves
01:03:27.820 --> 01:03:29.190
during the optimization.
01:03:29.870 --> 01:03:31.470
And that's just the sum of squared
01:03:31.470 --> 01:03:34.050
values of the Gradients, square rooted.
01:03:34.050 --> 01:03:36.316
So it's the Euclidean norm of
01:03:36.316 --> 01:03:39.249
your accumulated
01:03:39.250 --> 01:03:39.960
weight Gradients.
01:03:41.520 --> 01:03:44.600
And then you normalize by that when
01:03:44.600 --> 01:03:45.700
you're computing your Update.
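Sketching that in code (the synthetic gradients, eta, and eps are illustrative assumptions):

```python
import torch

# AdaGrad: accumulate squared gradients per weight and shrink the step
# for weights that have already moved a lot.
eta, eps = 0.01, 1e-8
w = torch.randn(10)
g_sq_sum = torch.zeros_like(w)  # running sum of squared gradients ("path length")

for step in range(100):
    grad = torch.randn(10)      # stand-in for dL/dw
    g_sq_sum += grad ** 2
    w = w - eta * grad / (g_sq_sum.sqrt() + eps)
```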
01:03:46.390 --> 01:03:48.500
And so in this case, for example, if
01:03:48.500 --> 01:03:50.390
you don't do this, you get the cyan ball
01:03:50.390 --> 01:03:51.960
that rolls down in one direction,
01:03:51.960 --> 01:03:53.666
the fastest direction, and then rolls
01:03:53.666 --> 01:03:54.610
in the other direction.
01:03:55.420 --> 01:03:57.580
And if you do it, you get a more direct
01:03:57.580 --> 01:03:59.900
path to the final solution with the
01:03:59.900 --> 01:04:00.390
white ball.
01:04:04.430 --> 01:04:05.430
And then one.
01:04:06.210 --> 01:04:08.436
The problem with that approach is that
01:04:08.436 --> 01:04:10.210
your path lengths keep getting longer
01:04:10.210 --> 01:04:12.331
and so your steps keep getting smaller
01:04:12.331 --> 01:04:14.040
and smaller, and so it can take a
01:04:14.040 --> 01:04:15.600
really long time to converge.
01:04:15.600 --> 01:04:18.300
So RMSProp, root
01:04:18.300 --> 01:04:19.370
mean square propagation, tries to deal with that.
01:04:19.990 --> 01:04:21.450
Instead of doing it based on the
01:04:21.450 --> 01:04:23.376
total path length, it's based on a
01:04:23.376 --> 01:04:25.020
moving average of the path length, and
01:04:25.020 --> 01:04:26.879
here's one way to do a moving average.
01:04:27.570 --> 01:04:29.390
You take the last value and
01:04:29.390 --> 01:04:31.340
multiply it by epsilon, and then you add
01:04:31.340 --> 01:04:33.370
1 minus epsilon times the new value.
01:04:33.370 --> 01:04:36.273
So if epsilon is
01:04:36.273 --> 01:04:38.970
0.999, then it will mostly reflect
01:04:38.970 --> 01:04:41.040
the recent observations of the squared
01:04:41.040 --> 01:04:41.410
value.
01:04:42.590 --> 01:04:43.750
A moving average.
01:04:44.360 --> 01:04:45.980
And then otherwise the normalization
01:04:45.980 --> 01:04:46.500
is the same.
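A minimal sketch, with the decay written as `alpha` (it plays the role the lecture calls epsilon) and illustrative constants:

```python
import torch

# RMSProp: normalize by a moving average of squared gradients
# instead of the full accumulated sum, so old history decays away.
eta, alpha, eps = 0.01, 0.999, 1e-8
w = torch.randn(10)
avg_sq = torch.zeros_like(w)    # moving average of squared gradients

for step in range(100):
    grad = torch.randn(10)      # stand-in for dL/dw
    avg_sq = alpha * avg_sq + (1 - alpha) * grad ** 2
    w = w - eta * grad / (avg_sq.sqrt() + eps)
```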
01:04:47.670 --> 01:04:49.620
There, the green ball, which is
01:04:49.620 --> 01:04:51.520
RMSProp, moves faster than the white ball.
01:04:52.870 --> 01:04:55.170
And finally, we get to Adam, which is
01:04:55.170 --> 01:04:57.610
the most commonly used. Just Vanilla
01:04:57.610 --> 01:04:58.110
SGD
01:04:58.110 --> 01:05:00.049
plus Momentum is commonly used too,
01:05:00.050 --> 01:05:01.430
especially by people that have really
01:05:01.430 --> 01:05:01.990
big computers.
01:05:02.790 --> 01:05:05.590
But Adam is most commonly used if
01:05:05.590 --> 01:05:07.200
you don't want to have to mess too
01:05:07.200 --> 01:05:09.394
much with your learning rate
01:05:09.394 --> 01:05:10.740
and other parameters.
01:05:10.740 --> 01:05:11.730
It's pretty robust.
01:05:12.500 --> 01:05:16.860
So Adam is combining Momentum, so it's
01:05:16.860 --> 01:05:18.260
got this Momentum term.
01:05:19.120 --> 01:05:22.570
And also this RMSProp normalization
01:05:22.570 --> 01:05:22.930
term.
01:05:23.880 --> 01:05:26.590
And so it's kind of like regularizing
01:05:26.590 --> 01:05:28.320
the directions that you move to try to
01:05:28.320 --> 01:05:29.510
make sure that you're like paying
01:05:29.510 --> 01:05:30.510
attention to all the weights.
01:05:31.190 --> 01:05:33.312
And it also incorporates some
01:05:33.312 --> 01:05:33.664
momentum.
01:05:33.664 --> 01:05:35.600
So the Momentum, not only does it get
01:05:35.600 --> 01:05:37.140
you out of local minima, but it can
01:05:37.140 --> 01:05:38.040
accelerate you.
01:05:38.040 --> 01:05:39.970
So if you keep moving in the same
01:05:39.970 --> 01:05:41.338
direction, you'll start moving faster
01:05:41.338 --> 01:05:42.389
and faster and faster.
01:05:43.330 --> 01:05:45.870
So these two things in combination are
01:05:45.870 --> 01:05:48.770
helpful because the Momentum helps you
01:05:48.770 --> 01:05:50.680
accelerate when you should be moving
01:05:50.680 --> 01:05:51.340
faster.
01:05:52.110 --> 01:05:55.750
And the regularization of this RMSProp
01:05:55.750 --> 01:05:57.180
helps make sure that things don't get
01:05:57.180 --> 01:05:58.100
too out of control.
01:05:58.100 --> 01:05:58.760
So if it's
01:05:59.470 --> 01:06:00.785
really accelerating,
01:06:00.785 --> 01:06:03.480
you don't fly off into NaN land;
01:06:03.480 --> 01:06:06.720
you get normalized by your Gradient magnitude
01:06:06.720 --> 01:06:07.430
before.
01:06:07.600 --> 01:06:07.770
OK.
01:06:08.390 --> 01:06:10.320
Before it gets too crazy.
01:06:11.520 --> 01:06:13.300
Otherwise you can imagine, with the
01:06:13.300 --> 01:06:14.610
bowl, you can be like...
01:06:15.700 --> 01:06:17.820
and you fly off into
01:06:17.820 --> 01:06:18.490
Infinity.
01:06:21.650 --> 01:06:23.430
And if you ever start seeing NaNs in
01:06:23.430 --> 01:06:24.680
your losses, that's probably what
01:06:24.680 --> 01:06:24.960
happened.
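Putting the two pieces together, a sketch of the Adam update; the bias-correction lines are part of the standard algorithm rather than something covered above, and the constants are the usual published defaults:

```python
import torch

eta, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-8
w = torch.randn(10)
m = torch.zeros_like(w)         # momentum term (first moment)
v = torch.zeros_like(w)         # RMSProp-style term (second moment)

for t in range(1, 101):
    grad = torch.randn(10)      # stand-in for dL/dw
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - eta * m_hat / (v_hat.sqrt() + eps)  # momentum, kept in check by v
```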
01:06:27.690 --> 01:06:29.430
So there's some cool videos here.
01:06:31.850 --> 01:06:34.910
So just showing like some races of
01:06:34.910 --> 01:06:37.470
these different approaches and.
01:06:40.290 --> 01:06:41.900
So I think let's see.
01:06:44.810 --> 01:06:46.160
So they were on YouTube, so.
01:06:47.090 --> 01:06:48.350
More of a pain to grab them.
01:06:48.350 --> 01:06:49.900
The other ones are GIFs, which is
01:06:49.900 --> 01:06:50.210
nice.
01:06:50.820 --> 01:06:53.430
That's just showing this is blue is.
01:06:54.130 --> 01:06:55.770
Blue is.
01:06:56.990 --> 01:06:57.680
Adam, yes.
01:06:57.680 --> 01:06:58.030
Thank you.
01:06:58.930 --> 01:07:00.750
So you can see that the blue is
01:07:00.750 --> 01:07:02.020
actually able to find a better
01:07:02.020 --> 01:07:04.060
solution, a lower point.
01:07:04.060 --> 01:07:06.430
These are like loss manifolds, so if
01:07:06.430 --> 01:07:08.445
you have like 2 weights, this is like
01:07:08.445 --> 01:07:09.670
the loss as a function of those
01:07:09.670 --> 01:07:09.930
weights.
01:07:14.350 --> 01:07:15.850
So the optimization is trying to find
01:07:15.850 --> 01:07:17.450
the weights that give you
01:07:17.450 --> 01:07:18.160
the lowest loss.
01:07:19.320 --> 01:07:20.200
Here's another example.
01:07:20.200 --> 01:07:21.870
They all start at the same point so
01:07:21.870 --> 01:07:23.090
that you can only see one ball, but
01:07:23.090 --> 01:07:23.660
they're all there.
01:07:31.150 --> 01:07:33.400
The Momentum got there first, but both
01:07:33.400 --> 01:07:35.600
Momentum and Adam got there at the end.
01:07:35.600 --> 01:07:36.840
The other ones would have gotten there
01:07:36.840 --> 01:07:38.260
too because that was an easy case, but
01:07:38.260 --> 01:07:39.110
they just take longer.
01:07:40.840 --> 01:07:41.910
Yeah, so anyway.
01:07:44.100 --> 01:07:46.170
Any questions about
01:07:47.160 --> 01:07:48.530
SGD, Momentum, Adam?
01:07:50.550 --> 01:07:53.043
So I would say typically I see people
01:07:53.043 --> 01:07:54.990
use SGD or Adam.
01:07:54.990 --> 01:07:58.323
And so in your homework we first say
01:07:58.323 --> 01:07:59.009
use SGD.
01:08:00.270 --> 01:08:01.570
Because it's the main one we taught.
01:08:01.570 --> 01:08:03.090
But then when you try to like make it
01:08:03.090 --> 01:08:04.920
better, I would probably switch to Adam
01:08:04.920 --> 01:08:07.290
because it's
01:08:07.290 --> 01:08:09.080
less sensitive to Learning rates and
01:08:09.080 --> 01:08:11.910
it makes optimization a bit easier
01:08:11.910 --> 01:08:13.190
for the Model designer.
01:08:14.750 --> 01:08:16.360
All of that's handled for you.
01:08:16.360 --> 01:08:18.150
All you have to do is change SGD to
01:08:18.150 --> 01:08:18.560
Adam.
01:08:18.560 --> 01:08:20.350
There's not a lot that you have to do
01:08:20.350 --> 01:08:22.050
in terms of typing.
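Concretely, the swap is one line; the placeholder model and the learning rates here are illustrative choices:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # placeholder model

# Starting point: SGD (pass momentum explicitly; torch.optim.SGD defaults it to 0).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# The one-line change: Adam, whose defaults are usually a reasonable start.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```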
01:08:24.510 --> 01:08:25.430
All right, so.
01:08:26.460 --> 01:08:27.250
Even with.
01:08:28.820 --> 01:08:30.840
Even with ReLU and Adam optimization,
01:08:30.840 --> 01:08:32.830
though, it was hard to get very Deep
01:08:32.830 --> 01:08:34.840
Networks to work very well.
01:08:35.840 --> 01:08:37.720
So there were Networks, like this one,
01:08:37.720 --> 01:08:39.690
"Going Deeper with Convolutions,"
01:08:39.690 --> 01:08:40.450
where they
01:08:40.600 --> 01:08:42.130
would
01:08:42.390 --> 01:08:44.860
have losses at various
01:08:44.860 --> 01:08:45.086
stages.
01:08:45.086 --> 01:08:47.193
So you'd basically build
01:08:47.193 --> 01:08:48.820
classifiers off of branches of the
01:08:48.820 --> 01:08:49.215
network.
01:08:49.215 --> 01:08:51.815
At layer five and seven and nine, you'd
01:08:51.815 --> 01:08:53.609
have a whole bunch of classifiers so
01:08:53.610 --> 01:08:55.100
that each of these can feed
01:08:55.960 --> 01:08:58.389
Gradients into the earlier parts of the
01:08:58.390 --> 01:09:00.465
network, because if you didn't do this
01:09:00.465 --> 01:09:02.150
and you just had the Classification
01:09:02.150 --> 01:09:04.620
at the end, you'd have this
01:09:04.620 --> 01:09:06.676
vanishing gradient problem, where
01:09:06.676 --> 01:09:10.410
the intermediate values chop off or kill some
01:09:10.410 --> 01:09:12.470
of your Gradients and no Gradients are
01:09:12.470 --> 01:09:13.630
getting back to the beginning, so
01:09:13.630 --> 01:09:14.690
you're not able to optimize.
01:09:15.760 --> 01:09:18.350
They do these really heavy solutions
01:09:18.350 --> 01:09:19.440
where you train a whole bunch of
01:09:19.440 --> 01:09:21.410
classifiers and each one is helping to
01:09:21.410 --> 01:09:22.960
inform the previous layers.
01:09:25.620 --> 01:09:27.710
Even with that, people are finding that
01:09:27.710 --> 01:09:29.390
they were running out of steam, like
01:09:29.390 --> 01:09:31.660
you couldn't build deeper, a lot bigger
01:09:31.660 --> 01:09:31.930
Networks.
01:09:31.930 --> 01:09:33.190
There were, there were still
01:09:33.190 --> 01:09:36.800
Improvements, VGG and GoogLeNet,
01:09:36.800 --> 01:09:39.040
but they weren't able to get like
01:09:39.040 --> 01:09:40.060
really Deep Networks.
01:09:40.860 --> 01:09:43.014
And so it wasn't clear like, was the
01:09:43.014 --> 01:09:44.660
problem that the Deep Networks were
01:09:44.660 --> 01:09:46.020
overfitting the training data, they
01:09:46.020 --> 01:09:47.676
were just too powerful or was the
01:09:47.676 --> 01:09:49.716
problem that we couldn't just that we
01:09:49.716 --> 01:09:51.850
just couldn't optimize them or some
01:09:51.850 --> 01:09:52.470
combination?
01:09:53.900 --> 01:09:56.910
So my question to you is, what is a way
01:09:56.910 --> 01:09:58.630
that we could answer this question if
01:09:58.630 --> 01:10:00.080
we don't know whether the Networks are
01:10:00.080 --> 01:10:01.430
overfitting the training data?
01:10:02.120 --> 01:10:04.130
Or whether we're just having problems
01:10:04.130 --> 01:10:05.130
optimizing them.
01:10:05.130 --> 01:10:06.040
In other words, they're like
01:10:06.040 --> 01:10:07.380
essentially underfitting the training
01:10:07.380 --> 01:10:07.570
data.
01:10:08.360 --> 01:10:11.090
What would we do to diagnose that?
01:10:26.640 --> 01:10:28.400
So we want to.
01:10:28.400 --> 01:10:30.460
So the answer was: compare the Training
01:10:30.460 --> 01:10:31.680
error and the test error.
01:10:31.680 --> 01:10:32.000
Yes.
01:10:32.000 --> 01:10:33.930
So we just we basically want to look at
01:10:33.930 --> 01:10:34.105
the.
01:10:34.105 --> 01:10:35.480
We need to look at the training error
01:10:35.480 --> 01:10:35.960
as well.
01:10:36.880 --> 01:10:39.550
And so that's what He et al. did.
01:10:40.170 --> 01:10:42.660
This is the ResNet paper, which has
01:10:42.660 --> 01:10:44.980
been cited 150,000 times.
01:10:46.020 --> 01:10:46.590
So.
01:10:47.320 --> 01:10:49.668
They plot the Training error and they
01:10:49.668 --> 01:10:52.090
plot the test error and they say, look,
01:10:52.090 --> 01:10:53.910
you have a model that got bigger from
01:10:53.910 --> 01:10:56.420
20 to 56 layers and the Training error went up
01:10:56.420 --> 01:10:56.930
by a lot.
01:10:57.890 --> 01:10:59.210
So that's pretty weird.
01:10:59.210 --> 01:11:01.335
Like you have a bigger model, it has to
01:11:01.335 --> 01:11:03.410
have less bias in like traditional
01:11:03.410 --> 01:11:03.840
terms.
01:11:04.460 --> 01:11:06.776
But we're getting higher error in
01:11:06.776 --> 01:11:08.469
training, not just in test.
01:11:08.470 --> 01:11:09.742
And if you have higher error in
01:11:09.742 --> 01:11:11.300
Training, that also will mean that you
01:11:11.300 --> 01:11:12.680
probably have higher error in test,
01:11:12.680 --> 01:11:14.142
because the test error is the Training
01:11:14.142 --> 01:11:16.060
error plus a generalization error.
01:11:16.060 --> 01:11:17.192
So this is the test.
01:11:17.192 --> 01:11:18.050
This is the train.
01:11:19.610 --> 01:11:20.760
So they have like a couple
01:11:20.760 --> 01:11:21.580
explanations.
01:11:22.570 --> 01:11:24.670
One is the Vanishing Gradients problem.
01:11:24.670 --> 01:11:27.440
So here is for example a VGG-19
01:11:28.190 --> 01:11:28.870
network.
01:11:28.870 --> 01:11:32.616
Here's a 34-layer network that is
01:11:32.616 --> 01:11:34.980
full of convolutions
01:11:34.980 --> 01:11:36.070
and downsampling, et cetera.
01:11:37.180 --> 01:11:38.610
The one problem is what's called
01:11:38.610 --> 01:11:40.510
Vanishing Gradients, that the early
01:11:40.510 --> 01:11:42.493
weights have a long path to reach the
01:11:42.493 --> 01:11:42.766
output.
01:11:42.766 --> 01:11:45.350
So when we talked about back
01:11:45.350 --> 01:11:47.242
propagation, remember that the early
01:11:47.242 --> 01:11:49.480
weights have this product of weight
01:11:49.480 --> 01:11:51.393
terms in them.
01:11:51.393 --> 01:11:56.170
So if the
01:11:56.170 --> 01:11:59.390
outputs of the later nodes are zero,
01:11:59.390 --> 01:12:02.160
then the earlier Gradients get cut off.
01:12:04.390 --> 01:12:06.200
So it's hard to optimize the early
01:12:06.200 --> 01:12:08.120
layers and you can do the multiple
01:12:08.120 --> 01:12:09.820
stages of supervision like GoogLeNet,
01:12:09.820 --> 01:12:13.720
but it's complicated and time
01:12:13.720 --> 01:12:14.794
consuming to do.
01:12:14.794 --> 01:12:16.650
So it's very heavy Training.
01:12:17.440 --> 01:12:19.480
The other problem is information
01:12:19.480 --> 01:12:20.150
propagation.
01:12:20.840 --> 01:12:22.350
So you can think of a Multi layer
01:12:22.350 --> 01:12:24.280
network as at each stage of the network
01:12:24.280 --> 01:12:26.005
you're propagating the information from
01:12:26.005 --> 01:12:28.290
the previous layer and then doing some
01:12:28.290 --> 01:12:30.180
additional analysis on top of it to
01:12:30.180 --> 01:12:33.050
hopefully add some more useful features
01:12:33.050 --> 01:12:34.620
for the final Prediction.
01:12:35.210 --> 01:12:37.370
So you start with the Input, which is a
01:12:37.370 --> 01:12:39.440
complete representation of the data,
01:12:39.440 --> 01:12:40.910
all the information's there.
01:12:40.910 --> 01:12:42.895
And then you transform it with the next
01:12:42.895 --> 01:12:44.651
layer and transform it with the next
01:12:44.651 --> 01:12:46.408
layer and transform it with the next
01:12:46.408 --> 01:12:46.659
layer.
01:12:46.659 --> 01:12:48.330
And each time you have to try to
01:12:48.330 --> 01:12:50.250
maintain the information that's in the
01:12:50.250 --> 01:12:53.150
previous layer, but also put it into a
01:12:53.150 --> 01:12:55.290
form that's more useful for Prediction.
01:12:56.540 --> 01:12:57.070
And.
01:12:57.750 --> 01:12:59.620
The and so.
01:13:00.350 --> 01:13:02.860
If you initialize the weights to 0, for
01:13:02.860 --> 01:13:04.516
example, then it's not retaining the
01:13:04.516 --> 01:13:05.900
information in the previous layer, so
01:13:05.900 --> 01:13:07.555
it has to actually learn something just
01:13:07.555 --> 01:13:09.630
to reproduce that original information.
01:13:11.540 --> 01:13:13.850
So their solution to this and I'll stop
01:13:13.850 --> 01:13:16.260
with this slide and I'll continue with
01:13:16.260 --> 01:13:17.660
this in the vision portion since I'm
01:13:17.660 --> 01:13:18.740
kind of like getting into vision
01:13:18.740 --> 01:13:21.060
anyway, but let me tell you about this
01:13:21.060 --> 01:13:21.730
module.
01:13:22.390 --> 01:13:23.920
The.
01:13:24.090 --> 01:13:26.500
Their solution to this is the ResNet
01:13:26.500 --> 01:13:27.110
module.
01:13:28.430 --> 01:13:31.580
So they use what's called a skip or
01:13:31.580 --> 01:13:34.990
shortcut connection around a two- to three-
01:13:34.990 --> 01:13:35.950
layer MLP.
01:13:35.950 --> 01:13:36.650
So you.
01:13:37.530 --> 01:13:39.935
Your Input goes into a weight layer: a
01:13:39.935 --> 01:13:42.830
linear layer, a ReLU, another linear
01:13:42.830 --> 01:13:45.370
layer, and then you add back the input
01:13:45.370 --> 01:13:46.200
at the end.
01:13:46.880 --> 01:13:49.020
And this allows the Gradients to flow
01:13:49.020 --> 01:13:50.580
back through this because this is just
01:13:50.580 --> 01:13:51.810
f(x) = x.
01:13:51.810 --> 01:13:54.295
So Gradients can flow straight around
01:13:54.295 --> 01:13:55.660
this network if they need to.
01:13:56.320 --> 01:13:58.680
As well as flowing through this way and
01:13:58.680 --> 01:14:01.390
also this guy, even if these weights
01:14:01.390 --> 01:14:03.360
are zero, that information is still
01:14:03.360 --> 01:14:06.120
preserved because you add X to the
01:14:06.120 --> 01:14:08.760
output of these layers and so each
01:14:08.760 --> 01:14:10.890
module only needs to like add
01:14:10.890 --> 01:14:12.070
information, doesn't need to worry
01:14:12.070 --> 01:14:13.670
about reproducing the previous
01:14:13.670 --> 01:14:14.350
information.
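As a sketch of that module with linear layers (the real ResNet blocks use convolutions; the width of 64 is an illustrative choice):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two weight layers with a ReLU, plus a shortcut that adds the input back."""
    def __init__(self, dim=64):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.fc2(self.relu(self.fc1(x)))
        # Even if fc1/fc2 contribute nothing, x passes through unchanged,
        # and gradients can flow straight back through the addition.
        return self.relu(out + x)

y = ResidualBlock()(torch.randn(8, 64))   # example usage
```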
01:14:15.370 --> 01:14:17.280
And I'm just going to show you one
01:14:17.280 --> 01:14:19.550
thing so that so that caused this
01:14:19.550 --> 01:14:20.690
revolution of Depth.
01:14:21.440 --> 01:14:24.390
Where in 2012 the winner of ImageNet
01:14:24.390 --> 01:14:27.817
was 8 layers, in 2014 it was 19 layers.
01:14:27.817 --> 01:14:31.570
In 2015 it was ResNet with 152 layers.
01:14:32.410 --> 01:14:34.530
So this allowed you to basically train
01:14:34.530 --> 01:14:38.870
networks of any depth, and you could
01:14:38.870 --> 01:14:40.470
even have 1000 layer network if you
01:14:40.470 --> 01:14:42.270
wanted and you'd be able to train it.
01:14:43.020 --> 01:14:44.540
And the reason is because the data can
01:14:44.540 --> 01:14:46.410
just flow straight through these skip
01:14:46.410 --> 01:14:47.630
connections all the way to the
01:14:47.630 --> 01:14:48.170
beginning.
01:14:48.170 --> 01:14:49.930
So it's actually like you can optimize
01:14:49.930 --> 01:14:51.990
all these blocks like separately from
01:14:51.990 --> 01:14:52.450
each other.
01:14:53.060 --> 01:14:54.395
And it causes.
01:14:54.395 --> 01:14:56.540
It also causes an interesting behavior
01:14:56.540 --> 01:14:58.430
where they kind of act as ensembles
01:14:58.430 --> 01:15:00.670
because the information can like skip
01:15:00.670 --> 01:15:01.710
sections of the network.
01:15:01.710 --> 01:15:03.230
So you can basically have like separate
01:15:03.230 --> 01:15:04.400
predictors that are learned and
01:15:04.400 --> 01:15:05.060
recombined.
01:15:05.840 --> 01:15:07.570
And so with larger models, you actually
01:15:07.570 --> 01:15:10.680
get a property of reducing the variance
01:15:10.680 --> 01:15:12.490
instead of increasing the variance,
01:15:12.490 --> 01:15:13.840
even though you have more parameters in
01:15:13.840 --> 01:15:14.780
your model.
01:15:14.780 --> 01:15:17.280
That's a little bit of a speculation,
01:15:17.280 --> 01:15:18.660
but that seems to be the behavior.
01:15:19.820 --> 01:15:23.556
All right, so Tuesday I'm going to do
01:15:23.556 --> 01:15:25.935
another consolidation review.
01:15:25.935 --> 01:15:26.580
So:
01:15:26.580 --> 01:15:28.590
If you have anything specific you want
01:15:28.590 --> 01:15:30.620
me to cover about the questions or
01:15:30.620 --> 01:15:33.210
concepts, post it on Campuswire.
01:15:33.210 --> 01:15:34.620
You can find the posts there.
01:15:34.620 --> 01:15:35.260
Reply to it.
01:15:36.030 --> 01:15:39.120
And then I'm going to continue talking
01:15:39.120 --> 01:15:40.560
about Deep Networks with computer
01:15:40.560 --> 01:15:43.160
vision examples on Thursday.
01:15:43.160 --> 01:15:44.050
So thank you.
01:15:44.050 --> 01:15:44.820
Have a good weekend.