\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage{iccv}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tikz}
\usetikzlibrary{fit, shapes, arrows, positioning, calc}
\usepackage{forest}
\usepackage{color}
\newcommand{\rmn}[1]{{\textcolor{blue}{\bf [{\sc rmn:} #1]}}}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\iccvfinalcopy
\def\iccvPaperID{****} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
\begin{document}
\title{DeepArchitect: Automatically Designing and Training Deep Architectures}
\author{Renato Negrinho\\
Carnegie Mellon University\\
{\tt\small negrinho@cs.cmu.edu}
\and
Geoff Gordon\\
Carnegie Mellon University\\
{\tt\small ggordon@cs.cmu.edu}
}
\maketitle
\begin{abstract}
In deep learning, performance is strongly affected by the choice of
architecture and hyperparameters.
While there has been extensive work on
automatic hyperparameter optimization
for simple spaces,
complex spaces such as the space of
deep architectures remain largely unexplored.
As a result, the choice of architecture is made manually by the human expert
through a slow trial-and-error process guided mainly by intuition.
In this paper we describe
a framework for automatically
designing and training deep models.
We propose
an extensible and modular language
that allows the human expert
to compactly represent complex search spaces
over architectures and their hyperparameters.
The resulting search spaces are
tree-structured and therefore easy to traverse.
Models can be automatically compiled to computational graphs
once values for all hyperparameters have been chosen.
We can leverage the structure of the search space
to introduce different model search algorithms,
such as
random search, Monte Carlo tree search (MCTS), and
sequential model-based optimization (SMBO).
We present experiments comparing the different
algorithms on CIFAR-10 and
show that MCTS and SMBO outperform
random search.
In addition, these experiments
show that our framework
can be used effectively
for model discovery,
as
it is possible
to describe expressive search spaces
and discover competitive models
without much effort from the human expert.
Code for our framework and experiments
has been made publicly available.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Deep learning has seen a surge in popularity due to
breakthroughs in applications such as
computer vision,
natural language processing, and
reinforcement learning~\cite{he2016deep,karpathy2015deep, silver2016mastering,sutskever2014sequence}.
An important observation in much of the recent work is that complex
architectures are important for achieving high performance~\cite{he2016deep, mnih2013playing}.
Larger datasets and more powerful computing infrastructures are likely to
increase our ability to effectively train larger, deeper, and more complex architectures.
However, improving the performance of a neural network is not as simple as
adding more layers or parameters---it
often requires clever ideas such as creating more
branches~\cite{szegedy2015going} or adding skip connections~\cite{he2016deep}.
Even popular techniques such as dropout~\cite{srivastava2014dropout} and
batch normalization~\cite{ioffe2015batch}
do not always lead to better performance,
and need to be judiciously applied to be helpful.
Currently, choosing appropriate values for these architectural hyperparameters
requires close supervision by a human expert, in a manual trial-and-error search
process largely guided by intuition.
The expert is burdened by having to make the
large number of choices
involved in the specification of a deep model.
Choices interact in non-obvious ways and
strongly impact performance.
The typical workflow has the expert specify a single model,
train it, and
compute a validation score.
Based on
the validation score,
previous experience, and
information gathered during training,
the expert decides if the trained model
is satisfactory or not.
If the model is considered unsatisfactory,
the expert has to think about model variations
that
may lead to better performance.
From the perspective of the expert, it would be convenient to
search over architectures automatically, just as
we search over
simple scalar hyperparameters,
such as the learning rate and
the regularization coefficient.
Ideally, the expert would have control
in setting up the search space to incorporate
inductive biases about the task being solved and
constraints about computational resources.
Prior to this work, achieving this goal
was hard because
expressing model search spaces
using
general hyperparameter optimization tools
requires the human expert to manually distill a
set of relevant scalar architectural hyperparameters.
The main contributions of our work are
\begin{enumerate}
\item a modular, compositional, and extensible language
for compactly
representing expressive search spaces over models that
\begin{enumerate}
\item gives control to the human expert over
what model variations to consider;
\item makes it easy to automatically search for
performant
models in the search space;
\item allows models to be directly compiled to
computational graphs
without the human expert
having to write additional code.
\end{enumerate}
\item model search algorithms that rely on
the tree-structured search spaces induced by our language
to systematically and efficiently
search for performant models; namely, we
\begin{enumerate}
\item show that by using the constructs
in our language, even
random search can be effective;
\item compare different
model search algorithms experimentally,
and show that random search is outperformed
by
algorithms that leverage the
structure of the search space to
generalize more effectively across different models.
\end{enumerate}
\end{enumerate}
The main differences between our work and previous work are that we
develop a modular, composable and extensible language,
focusing on the problem of searching over deep architectures.
This focus allows the expert to compactly set up a search space,
search over it,
and automatically compile models
to their corresponding computational graphs.
Our language can be seen as an effort to combine the functionalities of
a deep model specification language
(e.g., Tensorflow~\cite{abadi2016tensorflow}) and
a structured hyperparameter search language
(e.g., Hyperopt~\cite{bergstra13modelsearch}).
\section{Related Work}
\label{sec:related_work}
Model search has a long and rich history in machine learning and statistics.
There has been a wide variety of theoretical and empirical research
in this area~\cite{agarwal2011oracle,bergstra11hyperparameter,bergstra12randomSearch,sabharwal2015selecting},
including Bayesian optimization methods~\cite{hutter2011sequential,kandasamy15addBO,snoek12practicalBO}.
However, conventional methods are primarily designed for searching over
hyperparameters living in Euclidean space.
Such methods are ill-suited in today's context,
where
the discrete architectural choices are
just as important
as the numerical values of the hyperparameters.
Searching over architectures using current hyperparameter optimization
algorithms requires
the expert to distill structural choices into
scalar hyperparameters.
As a result, typically only a few simple global
structural hyperparameters are considered,
e.g., the depth of the network or whether to use dropout or not.
This constrains the richness of the search space,
preventing the expert from finding
unexpected model variations
leading to better performance; e.g.,
perhaps dropout is useful only after certain types of layers, or
batch normalization only helps in the first half of the network.
Architecture search
has also been considered under
the topic of neuroevolution~\cite{stanley2002evolving},
which uses evolutionary (i.e., genetic) strategies
to define and search a space of models.
In classical approaches, neuroevolution attempts to
jointly choose
the topology and the parameters of the architecture
using genetic algorithms.
Architecture search has received renewed interest recently.
Wierstra \etal
\cite{wierstra2005modeling},
Floreano \etal \cite{floreano2008neuroevolution},
and Real \etal \cite{large2017real}
use evolutionary
algorithms which start from an initial model and
evolve it based on its validation performance.
Zoph and Le \cite{zoph2016neural}
propose a reinforcement learning procedure
based on policy gradient
for searching for
convolutional and
LSTM architectures.
Baker \etal \cite{baker2016designing}
propose a reinforcement learning procedure
based on Q-learning
for searching for
convolutional architectures.
Unfortunately all these approaches consider
fixed hard-coded model search spaces
that do not easily allow the
human expert to incorporate inductive biases
about the task being solved, making them unsuitable
as general tools for architecture search.
For example, evolutionary approaches require
an encoding for the models in the search space
and genetic operators (e.g., mutation and crossover)
which generate encodings for new models out of
encodings of old ones.
These aspects are handcrafted and hard-coded
so it is hard for the human expert to
change the search space in flexible ways.
Perhaps
different model encodings
or genetic operators can be considered,
but these knobs give somewhat loose and indirect control over the
model search space.
The reinforcement learning approaches considered
suffer from similar issues---the
search spaces are hard-coded and not easily modifiable.
None of these approaches have
the compositionality, modularity, and extensibility properties
of our language.
Bergstra \etal \cite{bergstra11hyperparameter} propose
Tree of Parzen Estimators (TPE), which
can be used to search over structured hyperparameter spaces,
and use it to tune the hyperparameters of a Deep Boltzmann Machine.
Bergstra \etal \cite{bergstra13modelsearch} use
TPE to search for values of the hyperparameters
of a computer vision system,
and show that it can find better values
than the best ones previously known.
TPE
is a general
hyperparameter search algorithm,
and therefore requires considerable effort to use---for
any fixed model search space, using TPE requires
the human expert to distill the hyperparameters
of the search space,
express the search space in Hyperopt~\cite{bergstra13modelsearch}
(an implementation of TPE), and write the code describing
how values of the hyperparameters in the search space
compile to a computational graph.
In contrast, our language is modular and composable in
the sense that:
\begin{enumerate}
\item search spaces (defined through modules)
are constructed compositionally out of
simpler search spaces (i.e., simpler modules);
\item hyperparameters for
composite modules are derived automatically
from the hyperparameters of simpler modules;
\item once values for all hyperparameters of a module have been chosen,
the resulting model can be automatically mapped
to a computational
graph without the human expert having to
write additional code.
\end{enumerate}
\section{Roadmap to the DeepArchitect Framework}
\label{sec:roadmap}
Our framework
reduces the problem of searching over models into
three modular components:
the \emph{model search space specification language},
the \emph{model search algorithm},
and the \emph{model evaluation algorithm}.
\paragraph{Model Search Specification Language:}
The model search space specification language is built around the concept of a
\emph{computational module}.
This is akin to the concept of a module~\cite{bottou1991framework}
used in deep learning frameworks such as Torch~\cite{torch}:
by implementing the module interface,
the internal implementation becomes irrelevant.
These modules make it easy to express
complex design choices such as
whether to include a module or not,
choose between modules of different types,
or choose how many times to repeat a module structure.
The main insight is that
complex modules can be created \emph{compositionally} out of simpler ones.
The behavior of complex modules
is generated automatically out of the behavior of
simpler modules.
Furthermore, our language is extensible,
allowing the implementation of new types of modules
by implementing a high-level interface
local to the module.
\paragraph{Model Search Algorithm:}
The way the model search space is explored is determined by
the \emph{model search algorithm}.
This part of the framework decides
how much effort to allocate to each part
of the search space based on
the performance observed for previous models.
The model search algorithm typically requires a
\emph{model evaluation algorithm} that
computes the performance of a fully specified model.
The search algorithm will then use this information to determine
which models to try next.
The search algorithm interacts with the search space only through a
minimal interface that allows it to traverse the space of models
and evaluate models discovered this way.
This interface is the same irrespective of the specific
search space under consideration.
We experiment with
different search algorithms,
such as
Monte Carlo tree search~\cite{browne2012survey} and
Sequential Model Based Optimization~\cite{hutter2011sequential}.
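For concreteness, the simplest such search algorithm (random search) can be sketched against a minimal traversal interface. The helper names below (\texttt{is\_leaf}, \texttt{children}, \texttt{evaluate}) are our own stand-ins for whatever interface the search space exposes, not the framework's actual API:

```python
import random

def random_search(root, is_leaf, children, evaluate, num_samples,
                  rng=random):
    """Repeatedly descend from the root by following uniformly random
    branches until a leaf (a fully specified model) is reached, score it
    with the model evaluation algorithm, and keep the best model seen."""
    best_model, best_score = None, float('-inf')
    for _ in range(num_samples):
        node = root
        while not is_leaf(node):
            node = rng.choice(children(node))
        score = evaluate(node)
        if score > best_score:
            best_model, best_score = node, score
    return best_model, best_score
```

Note that the algorithm touches the search space only through the traversal and evaluation callbacks, which is exactly the decoupling the text describes: smarter algorithms such as MCTS or SMBO can be swapped in behind the same interface.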
\paragraph{Model Evaluation Algorithm:}
Having fully specified a model, i.e., having reached a leaf in the
tree defined by our model search space, we can evaluate how good this model is
according to some criterion defined by the expert.
This typically involves training the model on a training set and
evaluating it on a validation set.
The training procedure often has multiple hyperparameters that can
be tuned
(e.g., the choice of the
optimization algorithm and
its hyperparameters,
and the learning rate schedule).
If the expert does not know how to
write down a reasonable training procedure
for every model in the search space,
the expert can introduce hyperparameters for the evaluation algorithm
and search over them
using our specification language.
\paragraph{}
Any of the above components
can be changed, improved, or extended,
while keeping the others fixed.
The fact that different components interact only through
well-defined interfaces makes it possible to
extend and reuse this framework.
We believe that DeepArchitect will be an interesting platform
for future research in deep learning and
hyperparameter tuning
for architecture search.
\section{Model Search Space Specification Language}
\label{sec:model_space_spec_lang}
\subsection{Search Space Definition}
\label{ssec:space_spec}
The \emph{computational module} is the fundamental unit
of our model search space specification language.
We define a computational module as a function
\begin{equation}
f : n \to
\left( \mathcal H \to ( \mathbb R ^p \to (\mathbb R ^{n} \to \mathbb R ^{m} ) )
\right),
\label{eq:module_def}
\end{equation}
where
$n$ is the dimensionality of the \emph{input},
$\mathcal H$ is the set of valid values for the \emph{hyperparameters},
$p$ is the number of \emph{parameters}, and
$m$ is the dimensionality of the \emph{output}.
The set
$\mathcal H$ can be structured
or simply the cross product of scalar hyperparameter sets,
i.e., $\mathcal H = \mathcal H _1 \times \ldots \times \mathcal H _H$,
where $H$ is the number of scalar hyperparameters.
The set $\mathcal H$ is assumed to be discrete in both cases.
Definition~\eqref{eq:module_def} merits some discussion.
For conciseness
we have not explicitly represented it, but
\emph{the number of parameters $p$ and
the output dimensionality $m$
can both be
functions of
the input dimensionality $n$
and the chosen hyperparameter values $h \in \mathcal H$}.
For example, an affine module
with
$h$ dense hidden units
has output dimensionality $m = h$
and number of parameters
$p = (n + 1) h$:
a
weight matrix $W \in \mathbb R ^{h \times n}$
and a bias vector $b \in \mathbb R ^h$.
A similar reasoning can be carried out
for a convolutional module:
the number of parameters
$p$ depends on
the input dimensionality,
the number of filters,
and the size of the filters;
the dimensionality of the output $m$ depends on
the input dimensionality,
the number of filters,
the size of the filters,
the stride,
and the padding scheme.
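The shape computations for the affine and convolutional examples above can be sketched as plain functions (a toy illustration; the function names and the assumption of valid padding in the convolutional case are ours, not the framework's):

```python
def affine_shape(n, h):
    """Affine module with h hidden units: output dimensionality m = h and
    parameter count p = (n + 1) * h, i.e. a weight matrix W in R^{h x n}
    plus a bias vector b in R^h, both functions of the input dim n."""
    m = h
    p = (n + 1) * h
    return m, p

def conv2d_shape(height, width, channels, num_filters, filter_size, stride):
    """Convolutional module: both the output dimensionality and the
    parameter count depend on the input dims and the hyperparameter
    values.  Assumes 'valid' padding; one bias per filter."""
    out_height = (height - filter_size) // stride + 1
    out_width = (width - filter_size) // stride + 1
    m = (out_height, out_width, num_filters)
    p = (filter_size * filter_size * channels + 1) * num_filters
    return m, p
```

Once the input dimensionality and the hyperparameter values are known, both quantities are fully determined, which is the propagation property the text relies on.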
The fact that $p$ and $m$ are functions of
the input dimensionality
and the chosen hyperparameter values
is one of the main observations
that allows us to do architecture search---\emph{once we
know the input dimensionality and
have fixed values for the hyperparameters,
the structure of the computation
performed by the module is determined},
and this information can be propagated to other modules.
We say that a module is \emph{fully specified}
when values for all hyperparameters of the module
have been chosen and the input dimensionality is known.
We focus
on search spaces for architectures that have
a \emph{single input terminal} and
a \emph{single output terminal}.
By this, we only mean that the input and output of the
module have to be a \emph{single} tensor of arbitrary order and dimensionality.
For example, convolutional modules take as input
an order three tensor and
return as output
an order three tensor,
therefore they are
single-input single-output modules
under our definition.
We also assume that the output of a module
is used as input to at most a single module, i.e.,
we \emph{assume no output sharing}.
These restrictions were introduced to
simplify exposition.
The single-input single-output case
with no sharing is simpler to develop
and exemplifies the main ideas that
allow us to develop a framework for automatic architecture search.
The ideas developed in this work extend naturally
to the multiple-input multiple-output case with sharing.
Additionally, often
we can represent modules that
are not single-input single-output
by defining
new modules that encapsulate many signal paths
from input to output.
For example, a residual module~\cite{he2016deep}
can be treated in our framework
by noting that it is single-input before
the skip connection split
and single-output after
the skip connection merge.
Many top performing architectures,
such as
AlexNet~\cite{krizhevsky2012imagenet},
VGG~\cite{simonyan2014very}, and
ResNet~\cite{he2016deep},
are captured in our language.
We distinguish between
\emph{basic computational modules} and
\emph{composite computational modules}.
Basic modules
perform some well-defined transformation.
Affine, batch normalization, and dropout are examples of basic modules.
Composite modules are defined in terms of other (composite or basic) modules,
i.e., the instantiation of a composite module
takes other modules as arguments.
Composite modules may introduce
hyperparameters of their own
and inherit hyperparameters of
the modules taken as arguments.
For example, an \texttt{Or} module
takes a list of modules and
chooses one of the modules
to use.
It
introduces a discrete hyperparameter
for
which module to use,
and chooses values for the
hyperparameters of the chosen module;
the hyperparameters available are conditional on
the choice of the module to use.
Most of the representational power of our language
arises from the compositionality
of composite and basic modules.
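A minimal sketch of how an \texttt{Or} composite could expose hyperparameters conditionally on the choice of submodule (the class and method names below are hypothetical, and the actual implementation differs):

```python
class Basic:
    """Basic module: a name plus a domain of values per hyperparameter."""
    def __init__(self, name, **domains):
        self.name = name
        self.domains = domains          # e.g. {'filters': [32, 64]}
        self.values = {}

    def unassigned(self):
        return [h for h in self.domains if h not in self.values]

    def assign(self, hyperp, value):
        assert value in self.domains[hyperp]
        self.values[hyperp] = value

class Or:
    """Composite module: introduces one extra hyperparameter ('which')
    selecting a submodule; the chosen submodule's hyperparameters become
    available only after that choice, so assignments are conditional."""
    def __init__(self, *submodules):
        self.submodules = submodules
        self.chosen = None

    def unassigned(self):
        if self.chosen is None:
            return ['which']
        return self.chosen.unassigned()

    def assign(self, hyperp, value):
        if self.chosen is None:
            assert hyperp == 'which'
            self.chosen = self.submodules[value]
        else:
            self.chosen.assign(hyperp, value)
```

The composite derives its hyperparameters automatically from those of its submodules, which is the compositionality property emphasized above.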
The ideas developed in this section are perhaps
best illustrated with an example.
See Figure~\ref{fig:ss_program_ex} for
the definition of an example search space
in LISP-like pseudocode that closely parallels our implementation.
The search space,
which results from the composition of several modules,
and therefore is also a module itself,
encodes $24$ different models, corresponding
to the $24$ possible paths
from the root to the leaves of the tree.
The space is defined using three composite modules
(\texttt{Concat}, \texttt{MaybeSwap}, and \texttt{Optional})
and five basic modules
(\texttt{Conv2D}, \texttt{BatchNormalization}, \texttt{ReLU}, \texttt{Dropout},
and \texttt{Affine}).
\texttt{Concat} introduces no additional hyperparameters,
but the hyperparameters of all the modules delegated to it must still be specified;
\texttt{MaybeSwap} introduces a binary hyperparameter that encodes
whether to swap the order of the pair of modules or not;
\texttt{Optional} introduces a binary hyperparameter that encodes
whether to include the module or not.
The behavior of the basic modules in Figure~\ref{fig:ss_program_ex}
is relatively straightforward:
\texttt{Conv2D} takes lists of possible values for
the number of filters,
the size of the filters, and
the stride;
\texttt{BatchNormalization} and \texttt{ReLU}
have no hyperparameters;
\texttt{Dropout} takes a list for the possible values for
the dropout probability;
\texttt{Affine} takes a list for the possible values of
the number of hidden units.
\begin{figure}[t]
{\raggedleft
\begin{verbatim}
(Concat
(Conv2D [32, 64] [3, 5] [1])
(MaybeSwap BatchNormalization ReLU)
(Optional (Dropout [0.5, 0.9]))
(Affine [10]))
\end{verbatim}
}
\caption{A simple search space with $24$ different models.
See Figure~\ref{fig:ss_tree_ex} for
a path through the search space.}
\label{fig:ss_program_ex}
\end{figure}
Choosing different values for the hyperparameters
of the composite modules may
affect the structure of the resulting architecture,
while choosing different values for the hyperparameters
of the basic modules
only affects the structure of the corresponding local transformations.
The search space of Figure~\ref{fig:ss_program_ex} results
from the composition of basic and composite modules;
therefore it is a module itself and
can be characterized by its
input,
output,
parameters,
and hyperparameters.
Our set of composite modules is not minimal:
e.g.,
given an
\texttt{Empty} basic module,
which
has no hyperparameters or parameters and
simply does the identity transformation,
and a
\texttt{Or} composite module,
which introduces an extra hyperparameter
encoding the choice of a specific module in its list,
the composite modules
\texttt{Optional} and \texttt{MaybeSwap} can be defined
as
\texttt{(Optional B) = (Or Empty B)}
and
\texttt{(MaybeSwap B1 B2) = (Or (Concat B1 B2) (Concat B2 B1))}.
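These identities can be checked with a toy rendering in which each module is modeled by the set of fully specified layer sequences it can produce, so composition becomes set algebra (a sketch for illustration only; the framework does not enumerate models this way):

```python
def Empty():
    return [[]]                           # identity: one model, no layers

def Concat(*modules):
    """Cross product: every way of picking one model per submodule."""
    models = [[]]
    for module in modules:
        models = [a + b for a in models for b in module]
    return models

def Or(*modules):
    """Union: an implicit extra choice of which submodule to use."""
    return [model for module in modules for model in module]

def Optional(B):                          # (Optional B) = (Or Empty B)
    return Or(Empty(), B)

def MaybeSwap(B1, B2):                    # (Or (Concat B1 B2) (Concat B2 B1))
    return Or(Concat(B1, B2), Concat(B2, B1))
```

Under this rendering, composing a \texttt{Conv2D} with $4$ configurations, \texttt{MaybeSwap} of two fixed modules, an \texttt{Optional} dropout with $2$ probabilities, and a fixed \texttt{Affine} yields exactly the $4 \times 2 \times 3 \times 1 = 24$ models of the search space in Figure~\ref{fig:ss_program_ex}.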
\subsection{Search Space Traversal}
\label{ssec:traversal}
Given a search space defined by a module,
there is an underlying tree over fully specified models:
we build this tree by
sequentially
assigning values to each of the
hyperparameters of the module.
Each internal node in the tree corresponds to some partial
assignment to the hyperparameters of the module,
and each terminal node (i.e., each leaf)
corresponds to a fully specified model.
We can also think about an internal node as corresponding to
the state of a module
before assigning a value to the next unassigned hyperparameter.
The branching factor of a node corresponds to the number
of possible values for the hyperparameter under consideration
at that node,
and traversing a specific edge from that node to a child corresponds
to assigning the value encoded by that edge to the
hyperparameter under consideration.
As a tree has a single path between the root and any leaf,
the paths from root to leaves are in one-to-one correspondence
with fully specified models.
A leaf is reached when there are no hyperparameters left to specify.
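The traversal just described can be sketched with a toy tree encoding of our own (nested dicts; not the framework's representation): each internal node exposes one pending hyperparameter, each edge assigns it a value, and a leaf (\texttt{None}) is a fully specified model.

```python
import random

def random_leaf(tree, rng=random):
    """Descend from the root to a leaf by assigning a random value to each
    pending hyperparameter in turn; returns the list of assignments, i.e.
    the root-to-leaf path identifying one fully specified model."""
    path, node = [], tree
    while node is not None:
        (hyperp, children), = node.items()    # one pending hyperparameter
        value = rng.choice(sorted(children))
        path.append((hyperp, value))
        node = children[value]
    return path

def count_models(tree):
    """Number of leaves = number of fully specified models, reflecting the
    one-to-one correspondence between root-to-leaf paths and models."""
    if tree is None:
        return 1
    (_, children), = tree.items()
    return sum(count_models(child) for child in children.values())
```

Note that the path length can differ between leaves, since assigning one hyperparameter may expose further conditional hyperparameters deeper in the tree.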
\begin{figure}[t]
\centering
\begin{forest}
for tree={l sep=15pt, s sep=40pt, thick}
[0,circle,draw, minimum size = 8mm,
[ $\vdots$ ]
[1,circle, draw, minimum size = 8mm,
edge label={ node[midway,above right] {64} }
[2, circle, draw, minimum size = 8mm,
edge label={ node[midway,above left] {3} }
[3,circle, draw, minimum size = 8mm,
edge label={ node[midway,above left] {BR} }
[4,circle, draw, minimum size = 8mm,
edge label={ node[midway,above left] {NoD} } ]
[ $\vdots$ ]
]
[ $\vdots$ ]
]
[ $\vdots$ ]
]
]
\end{forest}
\caption{A path through the
search space
of Figure~\ref{fig:ss_program_ex}.
The model encoded by this path has a
convolutional module with
$64$ filters of
size $3\times 3$, with stride $1$,
followed by batch normalization, ReLU and affine modules.
The model does not use dropout.
Branches encoding hyperparameters with a single choice were omitted.
}
\label{fig:ss_tree_ex}
\end{figure}
In Figure~\ref{fig:ss_tree_ex} we have drawn a path through the
search space of
Figure~\ref{fig:ss_program_ex}
from the
root (labeled node $0$),
where all hyperparameters are unassigned,
to a terminal node (labeled node $4$),
where all hyperparameters have been assigned values.
Each branch in the tree corresponds to the assignment of some
value to some hyperparameter.
At node $0$,
we are choosing between
$32$ or $64$ filters;
at node $1$,
we are choosing between
filters of size $3$ or $5$;
at node $2$, we are choosing between
applying batch normalization
before or after ReLU;
at node $3$, we are choosing
whether to do dropout or not.
Node $4$ is terminal and corresponds to
a fully specified model.
Decisions at each node are conditional on
decisions previously made.
Internal nodes with a single child
(i.e., branches for hyperparameters with a single possible value)
have been collapsed and omitted from Figure~\ref{fig:ss_tree_ex}.
Other paths may have different lengths, e.g.,
picking a path through the right child of node $3$
corresponds to adding a \texttt{Dropout} module,
which requires an additional hyperparameter choice for
the dropout probability
when compared to the path from the root to node $4$.
Search spaces arising from module composition
have their traversal functionality
automatically derived from the traversal
functionality of their component modules:
a basic module knows how to
sequentially assign values to its hyperparameters,
and a composite module knows how to
sequentially assign values to its hyperparameters and
call the sequential assignment functionality for its component modules.
This behavior is akin to recursive expression evaluation in programming languages.
To traverse the search space,
i.e., to assign values to all hyperparameters
of the module defining the
search space,
all that is needed is for each module to know how to
sequentially specify itself.
\emph{Modules resulting from the composition of modules will then be
automatically sequentially specifiable}.
The three \emph{local} operations that a module
needs to implement for traversal are:
\emph{to test whether it is fully specified}
(i.e., whether it has reached a leaf yet);
if it is not specified,
\emph{to return which hyperparameter it is specifying and what are the
possible values for it};
and given a choice for
the current hyperparameter under consideration,
\emph{to traverse the edge to the child of the current node
corresponding to the chosen value}.
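These three local operations are enough to drive a full traversal. The following toy sketch (not the framework's code; the operation names follow the module interface listed in the appendix) assigns values to all hyperparameters of a module with two hyperparameters:

```python
import random

# Toy module exposing only the three local traversal operations
# (is_specified, get_choices, choose); not the framework's implementation.
class ToyModule:
    def __init__(self):
        # two hyperparameters: number of filters and filter size
        self.value_lists = [[32, 64], [3, 5]]
        self.assignment = []

    def is_specified(self):
        return len(self.assignment) == len(self.value_lists)

    def get_choices(self):
        return self.value_lists[len(self.assignment)]

    def choose(self, i):
        self.assignment.append(self.get_choices()[i])

def specify_randomly(module, rng=random):
    """Walk from the root of the tree to a leaf (a fully specified model)."""
    path = []
    while not module.is_specified():
        choices = module.get_choices()   # values for the current hyperparameter
        i = rng.randrange(len(choices))  # pick an outgoing edge
        module.choose(i)                 # traverse to the corresponding child
        path.append(choices[i])
    return path
```

Note that `specify_randomly` never inspects the module's internals, which is what makes composed modules automatically sequentially specifiable.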
\subsection{Compilation}
\label{ssec:compilation}
Once values for
all hyperparameters of a module have been chosen,
the
fully specified model
can be
automatically
mapped to its
corresponding
\emph{computational graph}.
We call this mapping
\emph{compilation}.
This operation only requires that each module knows how to locally map
itself to a computational graph: compilation is derived recursively
from the compilation of simpler modules.
For example, if we know how to compile
\texttt{Conv2D}, \texttt{ReLU}, and \texttt{Or} modules,
we will automatically be able to compile
all modules built from them.
This behavior is also similar
to recursive expression evaluation in
programming languages.
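The recursion can be illustrated with a toy sketch in which plain Python functions stand in for computational-graph fragments (this is an illustration of the idea, not the framework's \texttt{compile}):

```python
# Toy "basic modules" whose compile() returns a function; a composite
# module compiles itself by composing its submodules' compilations.
class Scale:
    def __init__(self, factor):
        self.factor = factor
    def compile(self):
        return lambda x: x * self.factor

class Shift:
    def __init__(self, offset):
        self.offset = offset
    def compile(self):
        return lambda x: x + self.offset

class Concat:
    """Composite module: series connection of its submodules."""
    def __init__(self, submodules):
        self.submodules = submodules
    def compile(self):
        fns = [m.compile() for m in self.submodules]  # recurse into submodules
        def forward(x):
            for f in fns:
                x = f(x)
            return x
        return forward
```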
\section{Model Search Algorithms}
\label{sec:model_search}
In this section, we consider different search algorithms that
are built on top of the functionality described above.
Some of these algorithms rely on the fact that the search space is
tree structured.
One of the challenges of our setting is that deep models are expensive to
train, so unless we have access to extraordinary computational resources,
only a moderate number of evaluations
will be practical.
\subsection{Random Search}
\label{ssec:rand_search}
Random search is the simplest algorithm that we can consider.
At each node of the tree, we choose
an outgoing edge uniformly at random,
until we reach a leaf node (i.e., a model).
Even just random search is interesting,
as
the model search space specification language allows us to
capture expressive structural search spaces.
Without our language,
randomly selecting an interesting architecture to
try would not be possible
without considerable effort from the human expert.
\subsection{Monte Carlo Tree Search}
\label{ssec:mcts_search}
Monte Carlo tree search (MCTS)~\cite{browne2012survey, kocsis2006bandit}
is an approximate planning technique that has been used
effectively in many domains~\cite{silver2016mastering}.
Contrary to random search, MCTS uses the information gathered so far
to steer its policy towards better performing parts of the
search space.
MCTS maintains a search tree that is
expanded incrementally one node at a time.
MCTS uses two policies:
a \emph{tree policy},
which determines
the path to be traversed from the root
to the frontier
of the already expanded tree; and
a \emph{rollout policy},
which determines
the path to be traversed from the frontier
of the already expanded tree
until a leaf is reached.
Once a leaf is reached, the model encoded by it is
evaluated
(e.g., trained on the training set and
evaluated on the validation set),
and the resulting score is used to update the statistics of the
nodes
in the currently expanded tree
in the path to the leaf.
Each node in the expanded tree keeps statistics
about the number of times it was visited and the
average score of the models that were evaluated in the
subtree at that node.
The rollout policy is often simple, e.g.,
the random policy described in Section~\ref{ssec:rand_search}.
The tree policy typically uses an
upper confidence bound (UCB)
approach.
Let $n$ be the number of visits of a node $v \in \mathcal T$,
where $\mathcal T$ denotes the currently expanded tree, and
$n _1, \ldots, n _b$ and
$\bar X _1, \ldots, \bar X _b$ be, respectively,
the number of visits and
the average scores of the $b$ children of $v$.
The tree policy at $v$ chooses to traverse
an edge corresponding to a child maximizing the UCB score:
\begin{align}
\max _{i \in \{1, \ldots, b \} }
\bar X _i +
2 c
\sqrt
{ \frac {2 \log n }
{ n _i }},
\end{align}
where $c \in \mathbb R _+$ is a
constant capturing the trade-off between
exploration and exploitation---larger
values of $c$ correspond to
larger amounts of exploration.
If at node $v$ some of its children
have not been added to the tree, there will be some
$i \in \{1, \ldots, b \} $ for which $n _i = 0$;
in this case we define the UCB score to be infinite,
and therefore, unexpanded children
always take precedence over
expanded children.
If multiple unexpanded children are available,
we expand one uniformly at random.
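The tree policy just described can be sketched as follows (a toy function, not our implementation):

```python
import math
import random

# Sketch of the UCB tree policy: pick the child maximizing
# X_bar_i + 2c * sqrt(2 log n / n_i), where unexpanded children
# (n_i = 0) are given infinite score and thus take precedence.
def ucb_child(n, child_visits, child_means, c=1.0, rng=random):
    scores = []
    for n_i, x_bar_i in zip(child_visits, child_means):
        if n_i == 0:
            scores.append(float("inf"))  # unexpanded child
        else:
            scores.append(x_bar_i + 2.0 * c * math.sqrt(2.0 * math.log(n) / n_i))
    best = max(scores)
    # ties (e.g., several unexpanded children) are broken uniformly at random
    return rng.choice([i for i, s in enumerate(scores) if s == best])
```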
\subsection{Monte Carlo Tree Search with Tree Restructuring}
\label{ssec:restruct_mcts_search}
When MCTS visits
a node in the expanded part of
the tree,
it has to expand all children of that node before
expanding any children of its currently expanded children.
This is undesirable
when there are hyperparameters
that can take a
large number of related values.
We often consider
hyperparameters that take \emph{numeric} values,
for which similar values typically result in similar performance.
For example, choosing between
$64$ or $80$ filters for a convolutional module
might not have a dramatic impact
on performance.
A way of addressing such hyperparameters
is to restructure the branches of the
tree by doing bisection.
Assume that the set of possible values for the
hyperparameter has a natural ordering.
At a node,
rather than committing directly
to a value of the
hyperparameter,
we commit sequentially: first we decide
whether we are choosing a value in the
first or the second half of the set of
values,
and then we recurse on the chosen half
until we have narrowed it down to a single value.
See an example tree in Figure~\ref{fig:tree_branch} and
the corresponding
restructured tree in Figure~\ref{fig:tree_bisect}.
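The restructuring itself is a short recursion (a toy sketch, not our implementation): a flat list of hyperparameter values becomes a binary tree whose leaves are the individual values.

```python
# Bisection restructuring: split the (ordered) value list in half and
# recurse on each half; a singleton list becomes a leaf.
def bisect(values):
    if len(values) == 1:
        return values[0]            # leaf: a single concrete value
    mid = (len(values) + 1) // 2    # first half gets the extra element
    return [bisect(values[:mid]), bisect(values[mid:])]
```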
\begin{figure}[thb]
\centering
\begin{forest}
for tree={l sep=16pt, s sep=16pt, thick}
[0,circle,draw, minimum size = 8mm,
[1,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {16} } ]
[2,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {32} } ]
[3,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {48} } ]
[4,circle, draw, minimum size = 8mm,
edge label={ node[midway,right] {64} } ]
[5,circle, draw, minimum size = 8mm,
edge label={ node[midway,right] {80} } ]
]
\end{forest}
\caption{A tree encoding a hyperparameter and its five possible values.
MCTS applied to this tree is sample-inefficient
as there is no sharing
of information between the different child nodes.
See also Figure~\ref{fig:tree_bisect}.
}
\label{fig:tree_branch}
\end{figure}
\begin{figure}[thb]
\centering
\begin{forest}
for tree={l sep=16pt, s sep=16pt, thick}
[0,circle,draw, minimum size = 8mm,
[,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {[16, 32, 48]} }
[,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {[16, 32]} }
[1,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {16} } ]
[2,circle, draw, minimum size = 8mm,
edge label={ node[midway,right] {32} } ]
]
[3,circle, draw, minimum size = 8mm,
edge label={ node[midway,right] {48} } ]
]
[,circle, draw, minimum size = 8mm,
edge label={ node[midway,right] {[64, 80]} }
[4,circle, draw, minimum size = 8mm,
edge label={ node[midway,left] {64} } ]
[5,circle, draw, minimum size = 8mm,
edge label={ node[midway,right] {80} } ]
]
]
\end{forest}
\caption{The result of restructuring the tree
in Figure~\ref{fig:tree_branch} with bisection.
MCTS applied to this tree results in more sharing when compared to the
original tree.
For example, sampling a path reaching node~$1$ provides information about
nodes~$1$, $2$, and $3$.
}
\label{fig:tree_bisect}
\end{figure}
Tree restructuring involves a tradeoff between depth and breadth:
the tree in Figure~\ref{fig:tree_branch} has depth $1$,
while the tree in Figure~\ref{fig:tree_bisect} has depth $3$.
The restructured tree can have better properties in the sense that
there is more sharing between different values of the
hyperparameter.
We could also consider restructured trees with
branching factors different than two,
again trading off
depth and breadth.
If the branching factor of the restructured tree
is at least the number of possible values of the hyperparameter,
the restructuring has no effect, i.e.,
the original and restructured trees are equal.
The restructuring operation allows MCTS
to \emph{effectively consider
hyperparameters with a large number of possible values}.
\subsection{Sequential Model Based Optimization}
\label{ssec:smbo_search}
MCTS is tabular in the sense that
it keeps statistics for each node in the tree.
While the restructuring operation described
in Section~\ref{ssec:restruct_mcts_search}
increases sharing between different hyperparameter
values, it still suffers from the problem that
nodes have no way of
sharing information other than through
common ancestors.
This is problematic because differences in hyperparameter
values at the top levels of the tree lead to little sharing between
models, even if the resulting models happen to
be very similar.
Sequential Model Based Optimization (SMBO)~\cite{hutter2011sequential}
allows us to address
this problem by introducing a surrogate function which can
be used to capture relationships between models
and how
promising it is to evaluate any specific model.
The surrogate function can use expressive
features to
capture architecture patterns that influence performance,
e.g., features about sequences of
basic modules that occur in the model.
The surrogate function can then be optimized to choose
which model to evaluate next.
Exactly optimizing the surrogate function over a search space
can be difficult
as often
there is a combinatorially large number
of models.
To approximately optimize the surrogate function, we
perform some number of random rollouts from the root of the tree
until we reach leaf nodes (i.e., models),
score each resulting model with the surrogate function
(i.e., we determine, according to the surrogate function,
how promising it is to evaluate that model),
and then evaluate the model with the highest surrogate score.
We also introduce an exploratory component
where we flip a biased coin
and choose between evaluating a random model
or evaluating the
best model according to the surrogate function.
The surrogate function is updated after each evaluation.
In our experiments, we use a simple surrogate function:
we train a ridge regressor to
predict model performance,
using the models evaluated so far
and their corresponding performances
as training data.
We only use features based on $n$-grams of sequences of
basic modules,
disregarding the values of the hyperparameters.
More complex features, surrogate functions, and training losses
are likely to lead to better search performance,
but we leave these to future work.
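A pure-Python sketch of this surrogate follows (an illustration under simplifying assumptions, not our actual implementation, which uses a ridge regressor; here the regressor is fit by stochastic gradient descent to keep the sketch self-contained):

```python
# Featurize a model as the bag of n-grams over its sequence of
# basic-module names, disregarding hyperparameter values.
def ngram_features(module_seq, n=2):
    feats = {}
    for i in range(len(module_seq) - n + 1):
        g = tuple(module_seq[i:i + n])
        feats[g] = feats.get(g, 0.0) + 1.0
    return feats

def predict(w, x):
    """Surrogate score: how promising it is to evaluate this model."""
    return sum(w.get(g, 0.0) * v for g, v in x.items())

def fit_ridge(examples, lam=1e-3, lr=0.05, steps=2000):
    """Fit linear weights with L2 regularization by SGD.

    examples: list of (feature dict, validation score) pairs,
    i.e., the models evaluated so far and their performances.
    """
    w = {}
    for _ in range(steps):
        for x, y in examples:
            err = predict(w, x) - y
            for g, v in x.items():
                w[g] = w.get(g, 0.0) - lr * (err * v + lam * w.get(g, 0.0))
    return w
```

After each model evaluation, the new (features, score) pair is appended to the training data and the surrogate is refit.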
\begin{figure*}[htp]
\centering
\includegraphics[width=.32\linewidth]{searcher_comp_1_to_64.pdf}
\includegraphics[width=0.32\linewidth]{searcher_comp_1_to_16.pdf}
\includegraphics[width=0.32\linewidth]{searcher_comp_16_to_64.pdf}
\caption{Average maximum validation score achieved as a function of the number of
evaluations across five repetitions. The error bars indicate standard error.
The two plots on the right have the same results as the plot on the left,
but are zoomed in for better visualization. }
\label{fig:searcher_comp}
\end{figure*}
\section{Model Evaluation Algorithms}
\label{sec:model_eval}
As a reminder, once we assign values to all hyperparameters of the module
defining the search space,
we need to compute a score for the resulting model,
i.e., a score for the path from the root to the corresponding leaf
encoding the model to evaluate.
The specific way to
compute scores
is defined by the human expert,
and it typically amounts to training the model on
a training set and evaluating the trained model on a validation set.
The score of the model is
the resulting validation performance.
The training process often has its own hyperparameters,
such as:
what optimization algorithm to use and
its corresponding hyperparameters,
the learning rate schedule
(e.g., the initial learning rate,
the learning rate reduction multiplier,
and how many epochs
without improving
the validation performance
the algorithm waits
before reducing the learning rate),
how many epochs without improving the validation performance
the algorithm waits before terminating
the training process
(i.e., early stopping), and
what
data augmentation strategies to use and
their corresponding hyperparameters.
The behavior of the evaluation algorithm
with respect to the values of
its hyperparameters is defined by the expert for the task being considered,
so the analogue of the compilation step described in Section~\ref{ssec:compilation}
for this functionality has to
be implemented by the expert.
Nonetheless, these \emph{user hyperparameters} can be
included in the search space and searched over in the same way as
the architecture hyperparameters
described in Section~\ref{ssec:space_spec}.
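The patience-based learning rate schedule and early stopping described above can be sketched as a small scheduler (a toy illustration, not our evaluation algorithm's actual code; the hyperparameter names mirror those of the \texttt{UserHyperparams} module in the appendix):

```python
# Toy scheduler: reduce the learning rate after rate_patience epochs
# without validation improvement; stop after stop_patience such epochs.
class PatienceSchedule:
    def __init__(self, lr_init, rate_mult, rate_patience, stop_patience, lr_min):
        self.lr = lr_init
        self.rate_mult = rate_mult
        self.rate_patience = rate_patience
        self.stop_patience = stop_patience
        self.lr_min = lr_min
        self.best = float("-inf")
        self.epochs_since_best = 0

    def update(self, val_score):
        """Call once per epoch; returns False when training should stop."""
        if val_score > self.best:
            self.best = val_score
            self.epochs_since_best = 0
        else:
            self.epochs_since_best += 1
            if self.epochs_since_best % self.rate_patience == 0:
                self.lr = max(self.lr * self.rate_mult, self.lr_min)
        return self.epochs_since_best < self.stop_patience
```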
\section{Experiments}
\label{sec:experiments}
We illustrate how our framework can be used to
search over all hyperparameters of a model,
i.e., both architecture and training hyperparameters,
using only high-level insights.
We choose a search space
of deep convolutional models
based around the
ideas that
depth is important,
batch normalization helps convergence,
and dropout is sometimes helpful.
We search over architectures and
evaluate our models on CIFAR-10~\cite{krizhevsky2009learning}.
The training hyperparameters that we consider are
whether to use ADAM or SGD with momentum,
the initial learning rate,
the learning rate reduction multiplier, and
the rate reduction patience, i.e.,
how many epochs without improvement to wait before reducing the
current learning rate.
We use standard data augmentation techniques:
we zero pad the CIFAR-10 images
to size $40 \times 40 \times 3$,
randomly crop a $32 \times 32 \times 3$ portion,
and flip horizontally at random.
We could search over these too if desired.
We compare the search algorithms described in
Section~\ref{sec:model_search} in terms of the best model found,
according to validation performance, as a function of the
number of evaluations.
We run each algorithm
$5$ times, for $64$ model evaluations each time.
All models were trained for $30$ minutes on GeForce GTX 970
GPUs in machines with similar specifications.
On the leftmost plot of Figure~\ref{fig:searcher_comp},
we see that all search algorithms find
performant solutions (around $89\%$ accuracy) after $64$ evaluations.
On the center plot of Figure~\ref{fig:searcher_comp}, we see that
for fewer than $6$ evaluations there is considerable
variance between the different algorithms; the more sophisticated
model search algorithms
are not able to outperform random search with so few evaluations.
On the rightmost plot in Figure~\ref{fig:searcher_comp}, we see that
both SMBO and MCTS with bisection eventually outperform random search;
MCTS with bisection starts outperforming random search around $32$ evaluations,
while for SMBO, it happens around $16$ evaluations.
\begin{figure}[tp]
\centering
\includegraphics[width=0.9\columnwidth]{quants.pdf}
\caption{Percentage of models above a given validation performance threshold.
MCTS with bisection and SMBO outperform random search.
The error bars have size equal to the standard error.}
\label{fig:quants}
\end{figure}
Surprisingly, MCTS without restructuring
does not outperform random search.
We think that this is because there are too many possible values for the
first few hyperparameters in the tree, so MCTS
will not be able to
identify and focus on high-performance regions of the search space
within the number of evaluations available.
MCTS with bisection and SMBO do not suffer from these problems, and therefore
can identify and focus on
high performance regions of the search space earlier.
In addition to achieving a higher top accuracy,
MCTS with bisection and
SMBO evaluate a larger fraction of high-performance models when compared to
random search, which can be seen in
Figure~\ref{fig:quants}.
There is a significant number of models that
fail to train
for the particular search space
that we considered, i.e.,
the classification performance in the validation set
fails to be better than random.
This is seen in
Figure~\ref{fig:quants} by the large drop in the fraction of models
that outperform $10 \%$ accuracy.
This observation is in accordance with the knowledge that very
deep convolutional architectures are hard to train.
Additionally,
in Figure~\ref{fig:quants}
we see that models
have large variability in terms of
the performance that they achieve in $30$ minutes of
training.
This is a desirable aspect of our model search specification
language as it shows that it can represent
expressive search spaces with models achieving
significantly different performances.
\section{Conclusion}
\label{sec:conclusion}
We described
a framework for automatically
designing and training deep models.
This framework consists of three fundamental components:
the model search space specification language,
the model search algorithm, and
the model evaluation algorithm.
The model search space specification language is composable, modular,
and extensible, and allows us to easily define expressive search spaces
over architectures.
The model evaluation algorithm determines how to compute a score for a model
in the search space.
Models can be automatically compiled to their corresponding computational graphs.
Using the model search space specification language and the model evaluation algorithm,
we can introduce model search algorithms for exploring the
search space.
Using our framework, it is possible to do random search
over interesting spaces of architectures
without much effort from the expert.
We also described more complex model search algorithms, such as
MCTS, MCTS with tree restructuring, and SMBO.
We present experiments on CIFAR-10 comparing
different model search algorithms and
show that MCTS with tree restructuring and SMBO outperform
random search.
Code for our framework and experiments
has been made publicly available.
We hope that this paper will lead to more work and better tools for
automatic architecture search.
\section*{Acknowledgements}
The authors would like to thank Matt Gormley, Matt Barnes, Christoph Dann,
and Kirthevasan Kandasamy
for helpful discussions and detailed comments on
early versions of the manuscript.
{\small
\bibliographystyle{ieee}
\bibliography{egbib_clean}
}
\clearpage
\appendix
\section{Detailed Experimental Setup}
\label{sec:detailed_exps}
In
Section~\ref{sec:experiments},
we considered a search space of
deep convolutional models having
structural hyperparameters
for
the depth of the network,
whether to apply batch normalization before or after
ReLU, and
whether to use dropout;
hyperparameters for the
number and size of the convolutional filters;
and training hyperparameters for the learning rate schedule.
We show in
Figure~\ref{fig:ss_experiments_lisp}
the LISP-like pseudocode for
the search space considered in Section~\ref{sec:experiments},
and in Figure~\ref{fig:ss_experiments_python}
the corresponding runnable Python implementation in our framework.
\begin{figure}[htp]
\raggedleft{
{\tiny
\begin{verbatim}
MH = (UserHyperparams
{ 'optimizer_type' : ['adam', 'sgd'],
'learning_rate_init' : logspace(10^-2, 10^-7, 32),
'rate_mult' : logspace(10^-2, 0.9, 8),
'rate_patience' : [4, 8, 12, 16, 20, 24, 28, 32],
'stop_patience' : [64],
'learning_rate_min' : [10^-9] })
M1 = (Conv2D [48, 64, 80, 96, 112, 128] [3, 5, 7] [2])
M2 = (RepeatTied
(Concat
(Conv2D [48, 64, 80, 96, 112, 128] [3, 5] [1])
(MaybeSwap BatchNormalization ReLU)
(Optional (Dropout [0.5, 0.9])))
[1, 2, 4, 8, 16, 32])
M = (Concat MH M1 M2 M1 M2 (Affine [10]))
\end{verbatim} } }
\caption{Specification of the model search space used
in Section~\ref{sec:experiments}
in LISP-like pseudocode.
See Figure~\ref{fig:ss_experiments_python}
for the corresponding runnable Python code. }
\label{fig:ss_experiments_lisp}
\end{figure}
\begin{figure}[htp]
\raggedleft{
{\tiny
\begin{verbatim}
MH = UserHyperparams(['optimizer_type',
'learning_rate_init',
'rate_mult',
'rate_patience',
'stop_patience',
'learning_rate_min' ],
[['adam', 'sgd_mom'],
list( np.logspace(-2, -7, num=32) ),
list( np.logspace(-2, np.log10(0.9), num=8) ),
range(4, 33, 4),
[64],
[1e-9] ])
conv_initers = [ kaiming2015delving_initializer_conv(1.0) ]
aff_initers = [ xavier_initializer_affine( 1.0 )]
def Module_fn(filter_ns, filter_ls, keep_ps, repeat_ns):
b = RepeatTied(
Concat([
Conv2D(filter_ns, filter_ls, [1], ["SAME"], conv_initers),
MaybeSwap_fn( ReLU(), BatchNormalization() ),
Optional_fn( Dropout(keep_ps) )
]), repeat_ns)
return b
filter_nums = range(48, 129, 16)
repeat_nums = [2 ** i for i in xrange(6)]
mult_fn = lambda ls, alpha: list(alpha * np.array(ls))
M = Concat([MH,
Conv2D(filter_nums, [3, 5, 7], [2], ["SAME"], conv_initers),
Module_fn(filter_nums, [3, 5], [0.5, 0.9], repeat_nums),
Conv2D(filter_nums, [3, 5, 7], [2], ["SAME"], conv_initers),
Module_fn(mult_fn(filter_nums, 2), [3, 5], [0.5, 0.9], repeat_nums),
Affine([num_classes], aff_initers) ])
\end{verbatim} } }
\caption{Runnable specification of the model search space used in
Section~\ref{sec:experiments} in our Python implementation of the framework.
See Figure~\ref{fig:ss_experiments_lisp} for the specification of the
same search space in the LISP-like pseudocode
used throughout this paper.}
\label{fig:ss_experiments_python}
\end{figure}
In Figure~\ref{fig:ss_experiments_lisp}
and Figure~\ref{fig:ss_experiments_python},
to include training hyperparameters in the search space,
we concatenate
the module that encapsulates the training hyperparameters
(the module assigned to \texttt{MH})
and the modules that encapsulate the
remaining model hyperparameters
(the modules other than \texttt{MH}
in the declaration of \texttt{M}).
The Python specification
of the model search space
in Figure~\ref{fig:ss_experiments_python} is remarkably close in both
semantics and length
to the LISP-like pseudocode in Figure~\ref{fig:ss_experiments_lisp}.
We omit some hyperparameters in Figure~\ref{fig:ss_experiments_lisp}
because we did not consider multiple values for them, e.g.,
for \texttt{Conv2D} modules, we always used
same size padding
and the
initialization scheme described by He \etal~\cite{he2015delving}.
Our implementation has code modularity and reusability benefits.
For example,
we can define an auxiliary function to instantiate modules and then
use it in the instantiation of the module for the complete search space.
This is illustrated in Figure~\ref{fig:ss_experiments_python}
with the definition of \texttt{Module\_fn}
and its use in the declaration of \texttt{M}.
\section{List of Modules}
\label{sec:list_modules}
We provide a brief description of
a representative subset of
the types of basic and composite modules that
we have implemented in our framework.
\subsection{Basic Modules}
Basic modules take no
other modules as arguments when instantiated;
they have only local hyperparameters
and parameters.
\begin{itemize}
\item \texttt{Affine}:
Dense affine transformation.
Hyperparameters:
number of hidden units and
initialization scheme of the parameters.
Parameters:
dense matrix and
bias vector.
\item \texttt{ReLU}:
ReLU nonlinearity.
Hyperparameters: none.
Parameters: none.
\item \texttt{Dropout}:
Dropout.
Hyperparameter:
dropout probability.
Parameters:
none.
\item \texttt{Conv2D}:
Two-dimensional convolution.
Hyperparameters:
number of filters,
size of the filters,
stride,
padding scheme, and
initialization scheme of the parameters.
Parameters:
convolutional filters and
bias vector.
\item \texttt{MaxPooling2D}:
Two-dimensional max pooling.
Hyperparameters:
size of the filters,
stride, and
padding scheme.
Parameters:
none.
\item \texttt{BatchNormalization}:
Batch normalization.
Hyperparameters:
none.
Parameters:
translation coefficients and
scaling coefficients.
\item \texttt{UserHyperparams}:
User-defined hyperparameters.
Hyperparameters:
hyperparameters determined by the human expert.
Parameters:
none.
\item \texttt{Empty}:
Identity.
Hyperparameters:
none.
Parameters:
none.
\end{itemize}
\subsection{Composite Modules}
Composite modules take other modules as arguments
when instantiated,
which we will call \emph{submodules}.
The behavior of a composite module depends
on its submodules.
The hyperparameters
which a
composite module has to specify
depend on the values of the hyperparameters of the composite module
and the hyperparameters of the submodules;
e.g., \texttt{Or} takes a list of submodules
but it only has to specify
the hyperparameters of the
submodule that it ends up choosing.
A composite module
is responsible for specifying its submodules,
which is done through calls to
the module interfaces of the submodules.
\begin{itemize}
\item \texttt{Concat}:
Takes a list of submodules and connects them
in series.
Hyperparameters:
hyperparameters of the submodules.
Parameters:
parameters of the submodules.
\item \texttt{Or}:
Chooses one of its submodules to use.
Hyperparameters:
which submodule to use and
hyperparameters of the submodule chosen.
Parameters:
parameters of the submodule chosen.
\item \texttt{Repeat}:
Repeats a submodule some number of times,
connecting the repetitions in series;
values for the hyperparameters of the repetitions
are chosen independently.
Hyperparameters:
number of times to repeat the submodule
and
hyperparameters of the repetitions of the submodule.
Parameters:
parameters of the repetitions of the submodule.
\item \texttt{RepeatTied}:
Same as \texttt{Repeat}, but values for the hyperparameters of the
submodule are chosen once and used for all the submodule repetitions.
Hyperparameters:
the number of times to repeat the submodule and
hyperparameters of the submodule.
Parameters:
parameters of the repetitions of the submodule.
\item \texttt{Optional}:
Takes a submodule and chooses whether to use it or not.
Hyperparameters:
whether to include the submodule or not and,
if included,
hyperparameters of the submodule.
Parameters:
if included, parameters of the submodule.
\item \texttt{Residual}:
Takes a submodule and implements a skip connection adding
the input and output;
if the input and output have different dimensions,
they are padded to make addition possible.
Hyperparameters:
hyperparameters of the submodule.
Parameters:
parameters of the submodule.
\item \texttt{MaybeSwap}:
Takes two submodules and connects them in series,
choosing which submodule comes first.
Hyperparameters:
which of the submodules comes first
and hyperparameters of the submodules.
Parameters:
parameters of the submodules.
\end{itemize}
\section{Module Interface}
We describe the module interface as we implemented it in
Python. To implement a new type of module, one only
needs to implement the module interface.
\begin{figure}[htp]
{\raggedleft
{\scriptsize
\begin{verbatim}
class Module(object):
def initialize(self, in_d, scope)
def get_outdim(self)
def is_specified(self)
def get_choices(self)
def choose(self, choice_i)
def compile(self, in_x, train_feed, eval_feed)
\end{verbatim} } }
\caption{Module interface used by all modules, irrespective of whether they are
basic or composite. To implement a new type of module, the human expert only
needs to implement the module interface.}
\label{fig:interface}
\end{figure}
\begin{itemize}
\item \texttt{initialize}:
Tells a module its input dimensionality.
A composite module is responsible
for initializing the submodules
that it uses.
\item \texttt{get\_outdim}:
Once a module is fully specified, we can determine its output dimensionality
by calling \texttt{get\_outdim}.
The output dimensionality is a function of the input dimensionality
(which is determined when \texttt{initialize} is called)
and the values of the hyperparameters chosen.
\item \texttt{is\_specified}:
Tests whether a module is fully specified.
If a module is fully specified, \texttt{get\_outdim} and \texttt{compile}
may be called.
\item \texttt{get\_choices}:
Returns a list of the possible values for the hyperparameter
currently being specified.
\item \texttt{choose}:
Chooses one of the possible values for the hyperparameter
being specified.
The module assigns the chosen value to that hyperparameter and either
transitions to the next hyperparameter to
specify or becomes fully specified.
The module maintains
internally the state of its search process.
\item \texttt{compile}:
Creates the computational graph of the model in a deep learning
model specification language, such as TensorFlow or PyTorch.
For composite modules, compilation can be performed recursively,
through calls to the \texttt{compile} functions of its submodules.
\end{itemize}
Composite modules rely on calls to the module interfaces of their submodules
to implement their own module interfaces. For example, \texttt{Concat} needs
to call \texttt{get\_outdim} on the last submodule of the series connection
to determine its own output dimensionality,
and needs to call \texttt{choose} on the submodules to specify itself.
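This delegation can be sketched with toy classes (an illustration of the design, not the framework's implementation; the operation names follow the module interface above):

```python
# Toy basic module implementing the traversal part of the interface.
class ToyBasic:
    def __init__(self, values):
        self.values, self.value = values, None
    def is_specified(self):
        return self.value is not None
    def get_choices(self):
        return self.values
    def choose(self, i):
        self.value = self.values[i]
    def get_outdim(self):
        return self.value  # toy rule: output dim equals the chosen value

# Toy Concat: implements its interface purely via its submodules' interfaces.
class ToyConcat:
    def __init__(self, submodules):
        self.submodules = submodules
    def is_specified(self):
        return all(m.is_specified() for m in self.submodules)
    def _current(self):
        # the first not-yet-specified submodule in the series connection
        return next(m for m in self.submodules if not m.is_specified())
    def get_choices(self):
        return self._current().get_choices()
    def choose(self, i):
        self._current().choose(i)
    def get_outdim(self):
        # output dimensionality is that of the last submodule
        return self.submodules[-1].get_outdim()
```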
One of the design choices that make the language modular
is the fact that a \emph{composite module can
implement its own module interface through calls to the module interfaces of
its submodules}.
All information about the specification of a module is local
to itself or kept within its submodules.
\section{Beyond Single-Input Single-Output Modules}
\label{sec:beyond_siso}
We can define new modules with
complex internal signal paths as long as these paths are encapsulated, i.e.,
a module may have
many signal paths
as long as they
fork from a single input and
merge into a single output,
as illustrated in
Figure~\ref{fig:complex_module}.
\tikzset{
module/.style = {draw, fill=white, rectangle, minimum height=2em, minimum width=3em},
tmp/.style = {coordinate},
input/.style = {coordinate},
output/.style= {coordinate},
sum/.style= {draw, fill=white, circle},
}
\begin{figure}[htp]
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}[auto, node distance=1.5cm,>=latex', scale=0.50]
\node [module] (b2) {\texttt{M2}};
\node [module, above of=b2] (b1) {\texttt{M1}};
\node [module, below of=b2] (b3) {\texttt{M3}};
\node [input, left of=b1] (in_m1) {};
\node [input, left of=b2] (in_m2) {};
\node [input, left of=b3] (in_m3) {};
\node [input, left of=in_m2] (in_m) {};
\draw [-] (in_m) -- (in_m2);
\draw [-] (in_m1) -- (in_m2);
\draw [-] (in_m2) -- (in_m3);
\draw [-] (in_m1) -- (b1);
\draw [-] (in_m2) -- (b2);
\draw [-] (in_m3) -- (b3);
\node [sum, right of=b2] (g1) {$g_1$};
\node [tmp, right of=b1] (tmp1_g1) {};
\draw [-] (b2) -- (g1);
\draw [-] (b1) -- (tmp1_g1);
\draw [-] (tmp1_g1) -- (g1);
\node [module, right of=b3] (b4) {\texttt{M4}};
\draw [-] (b3) -- (b4);
\node [module, right of=g1] (b5) {\texttt{M5}};
\draw [-] (g1) -- (b5);
\node [sum, right of=b5] (g2) {$g_2$};
\node [tmp, below of=g2] (tmp4_g2) {};
\draw [-] (b5) -- (g2);
\draw [-] (b4) -- (tmp4_g2);
\draw [-] (tmp4_g2) -- (g2);
\node [output, right of=g2] (out_b) {};
\draw [-] (g2) -- (out_b) {};
\node [tmp, above left of=in_m1] (tmp_lu) {};
\node [tmp, below left of=in_m3] (tmp_ld) {};
\node [tmp, above of=g2] (tmp_ru_aux) {};
\node [tmp, above right of=tmp_ru_aux] (tmp_ru) {};
\node [tmp, below right of=tmp4_g2] (tmp_rd) {};
\draw [dashed] (tmp_lu) -- (tmp_ru) -- (tmp_rd) -- (tmp_ld) -- (tmp_lu);
\node [tmp, below of=tmp_ru] (tmp_anot) {};
\node [above left of=tmp_anot] {\texttt{NewModule}};
\end{tikzpicture}
}
\caption{A module with many signal paths from input to output.
To implement a module, the human expert
only needs to implement its
module interface.
\texttt{M1}, \texttt{M2}, \texttt{M3}, and \texttt{M4} are
arbitrary
single-input single-output modules;
$g_1$ and $g_2$ are arbitrary transformations
that may have additional hyperparameters.
The hyperparameters of $g_1$ and $g_2$
can be managed internally by \texttt{NewModule}. }
\label{fig:complex_module}
\end{figure}
In Figure~\ref{fig:complex_module}, a single
input is fed into
\texttt{M1}, \texttt{M2}, and \texttt{M3}.
\texttt{M1}, \texttt{M2}, \texttt{M3}, \texttt{M4}, \texttt{M5}
are arbitrary single-input single-output
submodules of \texttt{NewModule}.
The module interface of \texttt{NewModule}
can be implemented
using the module interfaces of its submodules.
Instantiating a module of type \texttt{NewModule} requires submodules
for \texttt{M1}, \texttt{M2}, \texttt{M3}, \texttt{M4}, and \texttt{M5},
and potentially lists of possible values
for the hyperparameters of $g_1$ and $g_2$.
A residual module that chooses which
type of merging function to apply, e.g., additive or multiplicative,
is an example of a module with hyperparameters for the merging functions.
A module of type \texttt{NewModule} is fully specified after
we choose values for all the hyperparameters of
\texttt{M1}, \texttt{M2}, \texttt{M3}, \texttt{M4}, \texttt{M5}, $g_1$,
and $g_2$.
Testing whether \texttt{M1}, \texttt{M2}, \texttt{M3}, \texttt{M4},
and \texttt{M5} are fully specified
can be done by calling
\texttt{is\_specified}
on each submodule.
The output dimensionality of \texttt{NewModule} can be computed
as a function of
the values of the hyperparameters of $g_2$
and
the output dimensionalities of \texttt{M4}
and \texttt{M5}, which
can be obtained
by calling \texttt{get\_outdim} on them.
Similarly, for \texttt{get\_choices}
we have to keep track of which
hyperparameter is currently being specified,
which can come either from
\texttt{M1}, \texttt{M2}, \texttt{M3}, \texttt{M4}, and \texttt{M5}
or from
$g_1$ and $g_2$.
If we are choosing a value for a hyperparameter of
\texttt{M1}, \texttt{M2}, \texttt{M3}, \texttt{M4}, or \texttt{M5},
we can call \texttt{get\_choices} and \texttt{choose} on that submodule,
while for the hyperparameters of $g_1$ and $g_2$
we have to keep track of the state within \texttt{NewModule}.
\texttt{compile} is similar:
it is implemented
through calls
to the \texttt{compile} functions of
the submodules.
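The bookkeeping described above can be sketched in Python. \texttt{Fixed}, \texttt{NewModule}, and the dictionary of merge-function choices are illustrative assumptions, and we assume $g_2$ is an elementwise merge that preserves dimensionality:

```python
class Fixed:
    """Stub submodule that is already fully specified; it stands in
    for M1..M5 to keep the sketch short."""
    def __init__(self, outdim):
        self._outdim = outdim
    def is_specified(self):
        return True
    def get_outdim(self):
        return self._outdim

class NewModule:
    """Sketch of the module in the figure: hyperparameters of the
    submodules are delegated to them, while those of the merging
    functions g1 and g2 are managed internally."""
    def __init__(self, submodules, merge_choices):
        self.submodules = submodules        # [m1, m2, m3, m4, m5]
        self.merge_choices = merge_choices  # e.g. {"g1": [...], "g2": [...]}
        self.merge_chosen = {}

    def is_specified(self):
        # specified once all submodules and all merge functions are chosen
        return (all(m.is_specified() for m in self.submodules)
                and len(self.merge_chosen) == len(self.merge_choices))

    def _pending_merge(self):
        # next merge-function hyperparameter still to be specified
        return next(k for k in self.merge_choices
                    if k not in self.merge_chosen)

    def get_choices(self):
        # delegate to the submodules first, then handle g1 and g2 locally
        for m in self.submodules:
            if not m.is_specified():
                return m.get_choices()
        return self.merge_choices[self._pending_merge()]

    def choose(self, choice_idx):
        for m in self.submodules:
            if not m.is_specified():
                m.choose(choice_idx)
                return
        k = self._pending_merge()
        self.merge_chosen[k] = self.merge_choices[k][choice_idx]

    def get_outdim(self):
        # assuming g2 is an elementwise merge preserving dimensionality,
        # the output dimensionality is that of M5 (equal to M4's)
        return self.submodules[-1].get_outdim()
```

Only the state of $g_1$ and $g_2$ is stored in \texttt{NewModule} itself; everything else is recovered through the submodule interfaces.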
\end{document}